SecureAI's security architecture is designed to protect automotive aftermarket data at every layer — from the browser to the database to external AI model providers. This article explains the technical security controls, encryption standards, storage architecture, and data flow protections that secure your organization's information.
Architecture Overview
SecureAI runs on Google Cloud Platform (GCP) Cloud Run and follows a layered security model:
┌──────────────────────────────────────────────────┐
│ Users (Browser / API Clients)                    │
│ ── TLS 1.2+ ───────────────────────────────────  │
├──────────────────────────────────────────────────┤
│ Cloud Run Frontend (HTTPS-only)                  │
│ ── mTLS ─────────────────────────────────────────│
├──────────────────────────────────────────────────┤
│ API Server (Authentication + Authorization)      │
│ ── mTLS ─────────────────────────────────────────│
├────────────────────────┬─────────────────────────┤
│ Data Layer             │ Model Providers         │
│ (Encrypted at rest)    │ (TLS, no retention)     │
└────────────────────────┴─────────────────────────┘
Each layer enforces its own security boundary. A compromise at one layer does not automatically grant access to another.
Encryption in Transit
All data moving between components is encrypted:
| Connection | Protocol | Minimum Version | Details |
|---|---|---|---|
| Browser → SecureAI | TLS | 1.2 | HTTP automatically redirected to HTTPS. HSTS headers enforced. |
| API clients → SecureAI | TLS | 1.2 | Bearer token authentication required for all API endpoints. |
| Frontend → API server | mTLS | 1.2 | Mutual TLS between internal services within the Cloud Run environment. |
| API server → Database | mTLS | 1.2 | Database connections authenticated and encrypted. |
| API server → Model providers | TLS | 1.2 | All outbound requests to AI model APIs use encrypted connections. |
| API server → Cloud Storage | TLS | 1.2 | Document upload and retrieval over encrypted channels. |
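For API clients, the table above boils down to two client-side rules: always speak HTTPS and always send a bearer token. A minimal sketch using only the Python standard library — the endpoint URL and token value are placeholders, not SecureAI's real API surface:

```python
import urllib.request

# Hypothetical endpoint and token, for illustration only.
API_BASE = "https://secureai.example.com/api/v1"
TOKEN = "sai_example_token"

def build_request(path: str) -> urllib.request.Request:
    """Build an authenticated API request. The platform redirects plain
    HTTP to HTTPS anyway, but a careful client refuses to construct a
    non-HTTPS URL in the first place."""
    url = f"{API_BASE}{path}"
    if not url.startswith("https://"):
        raise ValueError("API traffic must use HTTPS (TLS 1.2+)")
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {TOKEN}"}
    )

req = build_request("/conversations")
```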
TLS Configuration
- Cipher suites: Only strong cipher suites are permitted. Weak ciphers (RC4, DES, 3DES) and protocols (SSLv3, TLS 1.0, TLS 1.1) are disabled.
- Certificate management: TLS certificates are managed automatically through GCP's certificate infrastructure. Certificates are rotated before expiration without manual intervention.
- HSTS: HTTP Strict Transport Security headers are set with a minimum max-age of one year, preventing protocol downgrade attacks.
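The same policy can be mirrored on the client side. A short sketch of what "TLS 1.2 minimum" and a one-year HSTS header look like in practice, using Python's standard `ssl` module:

```python
import ssl

# Client-side equivalent of the server policy above: refuse anything
# older than TLS 1.2 (SSLv3, TLS 1.0, and TLS 1.1 are never negotiated).
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# An HSTS header with a one-year max-age, expressed in seconds
# (365 days * 86,400 s = 31,536,000 s).
HSTS_HEADER = ("Strict-Transport-Security", "max-age=31536000")
```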
Encryption at Rest
All stored data is encrypted using AES-256:
| Data Type | Storage Location | Encryption Method | Key Management |
|---|---|---|---|
| Conversation history | Cloud SQL (PostgreSQL) | AES-256 server-side | GCP-managed keys (default) or CMEK |
| Uploaded documents | Cloud Storage | AES-256 server-side | GCP-managed keys (default) or CMEK |
| User account data | Cloud SQL (PostgreSQL) | AES-256 server-side | GCP-managed keys (default) or CMEK |
| Audit logs | Cloud SQL (PostgreSQL) | AES-256 server-side | GCP-managed keys (default) or CMEK |
| Database backups | Cloud Storage | AES-256 server-side | Same key policy as source data |
| Temporary processing files | In-memory only | Not persisted to disk | N/A — cleared after request completes |
Customer-Managed Encryption Keys (CMEK)
Organizations with stricter key management requirements can use CMEK through GCP's Key Management Service (KMS):
- How CMEK works: Your organization controls the encryption keys used to protect your data. GCP KMS stores and manages the keys, but your organization controls their lifecycle — including creation, rotation, and revocation.
- Key rotation: Automatic key rotation is configured by default (every 90 days). Your organization can adjust the rotation period or trigger manual rotation through GCP KMS.
- Key revocation: Revoking your CMEK renders all data encrypted with that key permanently inaccessible. Use this as an emergency measure only.
- Availability: CMEK is available for enterprise service agreements. Contact your account representative during onboarding to configure CMEK.
Storage Architecture
Data Storage Locations
SecureAI stores data in three primary locations:
1. Cloud SQL (PostgreSQL)
- Stores conversation history, user accounts, system configuration, and audit logs.
- Runs in a high-availability configuration with automatic failover.
- Point-in-time recovery enabled, with backups retained according to your service agreement.
- Database access restricted to the API server — no direct external connections are permitted.
2. Cloud Storage
- Stores uploaded documents (PDFs, images, spreadsheets) and processed document embeddings.
- Objects are organized by organization ID, enforcing tenant isolation at the storage level.
- Lifecycle policies automatically delete objects according to your data retention configuration.
- Versioning is disabled by default to prevent unintended data retention. Deleted files are permanently removed.
3. In-Memory Processing
- AI model requests and responses are processed in memory and are not persisted to disk.
- Temporary data (file parsing buffers, embedding generation intermediates) exists only for the duration of the request.
- Cloud Run instances are ephemeral — when an instance scales down, all in-memory data is destroyed.
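Two of the Cloud Storage behaviors above — lifecycle-based deletion and per-organization object layout — can be sketched concretely. The retention value and the object path scheme below are illustrative, not SecureAI's actual configuration:

```python
# Illustrative Cloud Storage lifecycle policy: delete objects once they
# exceed a retention window. 90 days is a placeholder; the real value
# comes from your organization's data retention configuration.
RETENTION_DAYS = 90

lifecycle_policy = {
    "rule": [
        {
            "action": {"type": "Delete"},
            "condition": {"age": RETENTION_DAYS},
        }
    ]
}

def object_path(org_id: str, doc_id: str) -> str:
    # Objects are keyed under the owning organization's ID, so storage-level
    # listings and access conditions can be scoped per tenant. (The exact
    # path layout is an assumption for illustration.)
    return f"orgs/{org_id}/documents/{doc_id}"
```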
Data Residency
Data storage is region-specific:
| Region | GCP Location | Use Case |
|---|---|---|
| United States (default) | us-central1 (Iowa) | Default for all organizations |
| European Union | europe-west1 (Belgium) | Available for organizations with EU data residency requirements |
| Additional regions | Contact account representative | Available for enterprise agreements |
All stored data — conversations, documents, user accounts, audit logs, and backups — resides in the selected region. Changing regions after deployment requires a planned migration coordinated with your account representative.
Model Provider Data Handling
When SecureAI sends a prompt to an AI model provider, specific protections apply:
What Is Sent to Model Providers
- The user's prompt (question or instruction).
- Relevant conversation context (prior messages in the current conversation, subject to the model's context window).
- Retrieved document content when knowledge base or RAG features are used.
- System instructions configured by your organization's administrator.
What Is NOT Sent to Model Providers
- User account credentials or authentication tokens.
- Data from other users' conversations or other organizations.
- Audit log content.
- Administrative configuration details.
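The lists above describe an allow-list approach: the outbound payload is built only from fields that are safe to share, so credentials, audit data, and other tenants' content are excluded by construction rather than filtered out afterwards. A sketch of that pattern — field names are illustrative, not SecureAI's actual wire format:

```python
# Only these fields may ever reach a model provider.
ALLOWED_FIELDS = {
    "prompt",
    "conversation_context",
    "retrieved_documents",
    "system_instructions",
}

def build_provider_payload(request_state: dict) -> dict:
    """Copy only allow-listed fields; anything else is dropped silently."""
    return {k: v for k, v in request_state.items() if k in ALLOWED_FIELDS}

state = {
    "prompt": "What torque spec fits this part?",
    "conversation_context": ["prior messages..."],
    "auth_token": "secret",              # never forwarded
    "audit_log_entry": {"actor": "u1"},  # never forwarded
}
payload = build_provider_payload(state)
```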
Provider Data Protection Guarantees
| Protection | Details |
|---|---|
| No training on your data | SecureAI's agreements with model providers prohibit using your input or output data for model training, fine-tuning, or improvement. |
| No data retention | Model providers are contractually required to delete your data after generating a response. No input or output is retained beyond the API request lifecycle. |
| Prompt isolation | Each API request is independent. Your prompts are not mixed with other users' or organizations' data. |
| Transport encryption | All communication with model providers uses TLS 1.2+. |
Supported Model Providers
SecureAI supports multiple model providers. Each provider's data handling is governed by SecureAI's data processing agreements:
- OpenAI: API requests processed under OpenAI's enterprise API terms (zero data retention).
- Anthropic: API requests processed under Anthropic's commercial API terms (zero data retention).
- Azure OpenAI: Requests processed within your organization's Azure tenant when configured (data stays in your Azure environment).
- Local models (Ollama, vLLM): When configured, prompts are processed on infrastructure you control. No data leaves your environment.
Your administrator selects which providers are available. See Adding Custom Model Providers for configuration details.
Network Security
Perimeter Controls
- Cloud Run ingress: Configured to accept traffic only from authorized sources. Internal services are not exposed to the public internet.
- IP allowlisting: Organizations can restrict access to specific IP ranges. See Setting Up IP Allowlisting for Enterprise Access.
- DDoS protection: GCP Cloud Armor provides automatic DDoS mitigation for all incoming traffic.
- WAF rules: Web Application Firewall rules block common attack patterns (SQL injection, XSS, path traversal) at the edge.
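To make the WAF bullet concrete, here is a toy version of the kind of pattern matching an edge rule performs. Production Cloud Armor rules are far more robust than these regexes — this is only a sketch of the blocking concept:

```python
import re

# Simplified stand-ins for edge WAF rules. Real rules handle encodings,
# obfuscation, and many more attack classes.
BLOCK_PATTERNS = [
    re.compile(r"\.\./"),                   # path traversal
    re.compile(r"(?i)<script\b"),           # reflected XSS probe
    re.compile(r"(?i)\bunion\s+select\b"),  # SQL injection probe
]

def is_blocked(request_path: str) -> bool:
    """Return True if the request matches any blocking pattern."""
    return any(p.search(request_path) for p in BLOCK_PATTERNS)
```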
Internal Network
- All internal service communication uses mTLS.
- Database access is restricted to the API server via private networking (VPC). No public IP is assigned to the database.
- Secrets (API keys, database credentials, encryption keys) are stored in GCP Secret Manager, not in application code or environment variables.
Authentication and Authorization
Authentication
SecureAI supports multiple authentication methods:
| Method | Details |
|---|---|
| Local accounts | Email/password with optional MFA. Passwords are hashed using bcrypt with a minimum cost factor of 12. |
| SAML SSO | Federated authentication via your organization's identity provider (Okta, Azure AD, Auth0). |
| OIDC SSO | OpenID Connect-based authentication for compatible identity providers. |
| API tokens | Bearer tokens for programmatic access. Tokens are scoped to specific permissions and can be revoked by administrators. |
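The salted, deliberately slow hashing described for local accounts follows a standard pattern. SecureAI uses bcrypt with a cost factor of at least 12, as stated above; bcrypt is not in the Python standard library, so this sketch substitutes PBKDF2 purely to illustrate the same salt-then-slow-hash-then-constant-time-compare structure:

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Hash a password with a random per-user salt and a slow KDF
    (PBKDF2 here as a stand-in for bcrypt)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    *, iterations: int = 600_000) -> bool:
    """Recompute the hash and compare in constant time to avoid
    timing side channels."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```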
Authorization
- Role-based access control (RBAC): Users are assigned roles (User, Admin) that determine access levels.
- Tenant isolation: All authorization checks include the organization ID. A user in one organization cannot access another organization's resources regardless of role.
- API scope enforcement: API tokens are limited to the permissions granted at creation. Tokens cannot exceed the creating user's permissions.
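The two enforcement rules above — tokens capped at their creator's permissions, and every query carrying the organization ID — can be sketched in a few lines. All names here are illustrative, not SecureAI's internal schema:

```python
def check_access(token_scopes: set[str], user_permissions: set[str],
                 required_scope: str) -> bool:
    """A token's effective scopes are the intersection of what it was
    granted and what its creating user is allowed to do, so a token can
    never exceed the creator's permissions."""
    effective = token_scopes & user_permissions
    return required_scope in effective

def scoped_query(org_id: str) -> tuple[str, tuple]:
    """Every query is parameterized and includes the organization ID, so
    rows belonging to another tenant can never match."""
    sql = "SELECT id, title FROM conversations WHERE organization_id = %s"
    return sql, (org_id,)
```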
Vulnerability Management
- Dependency scanning: Automated scanning of application dependencies for known vulnerabilities (CVEs). Critical vulnerabilities are patched within 48 hours of disclosure.
- Container image scanning: Cloud Run container images are scanned before deployment. Images with critical or high-severity vulnerabilities are blocked from deployment.
- Penetration testing: Independent security assessments are conducted regularly. Summary reports are available under NDA — contact your account representative.
- Incident response: Defined incident response procedures with documented escalation paths. Security incidents are communicated to affected organizations according to contractual SLAs and regulatory requirements.
Security Monitoring
- Audit logging: All administrative and security-relevant actions are logged with timestamps, user identity, source IP, and action details.
- Anomaly detection: Automated alerting on suspicious patterns — repeated authentication failures, unusual data access volumes, configuration changes outside business hours.
- Log retention: Security logs are retained independently of conversation data, typically for a longer period as defined in your service agreement.
- SIEM integration: Audit logs can be exported for integration with your organization's security information and event management (SIEM) tools.
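The "repeated authentication failures" alert above is a classic sliding-window check: flag a source once its failures within a time window cross a threshold. A toy version — the threshold and window values are illustrative, not SecureAI's production tuning:

```python
from collections import deque

class FailureMonitor:
    """Track failed logins per source IP and raise an alert when too
    many occur inside a sliding time window."""

    def __init__(self, threshold: int = 5, window_seconds: float = 300.0):
        self.threshold = threshold
        self.window = window_seconds
        self.events: dict[str, deque] = {}

    def record_failure(self, source_ip: str, timestamp: float) -> bool:
        """Record one failed login; return True if an alert should fire."""
        q = self.events.setdefault(source_ip, deque())
        q.append(timestamp)
        # Drop events that have aged out of the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold
```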
Frequently Asked Questions
Is my data encrypted?
Yes. All data is encrypted both in transit (TLS 1.2+) and at rest (AES-256). Organizations requiring additional key control can use Customer-Managed Encryption Keys (CMEK).
Can AI model providers see my data?
Model providers process your prompts to generate responses, but they cannot retain, store, or use your data for training. This is enforced by SecureAI's data processing agreements with each provider.
Where is my data physically stored?
By default, in GCP's us-central1 region (United States). EU data residency (europe-west1, Belgium) is available on request. Additional regions are available for enterprise agreements.
Can I use my own encryption keys?
Yes. CMEK support is available for enterprise service agreements. Your organization controls key lifecycle through GCP's Key Management Service.
How does SecureAI prevent cross-tenant data access?
Tenant isolation is enforced at multiple layers — application-level authorization checks, database-level row isolation, and storage-level object organization by organization ID. All queries include organization scope, and there is no mechanism for cross-tenant access.
Does SecureAI store data on local devices?
No. All data is stored server-side in GCP. No conversation data, documents, or credentials are persisted on end-user devices.
Related Articles
- How SecureAI Handles Your Data
- Compliance Certifications — SOC 2, GDPR, HIPAA
- Setting Up IP Allowlisting for Enterprise Access
- Configuring Data Retention Policies
- Adding Custom Model Providers
- How to Configure SAML SSO
Questions
For security architecture questions, to request penetration test reports, or to discuss CMEK configuration, contact your account representative.