Security & Compliance

Can we enforce MFA?

Yes. How you enforce multi-factor authentication depends on your authentication method.

SSO users (recommended approach)

If your organization uses SAML or OIDC single sign-on, enforce MFA at your identity provider (Okta, Azure AD, Auth0, etc.). This is the recommended approach because:

  • Your identity provider's MFA policies apply to all applications, not just SecureAI.
  • You get centralized control over MFA methods, enrollment, and recovery.
  • SecureAI respects the authentication assurance your identity provider establishes during the SSO handshake.

To avoid duplicate MFA prompts, disable SecureAI's built-in MFA for SSO users:

  1. Navigate to Admin Panel > Settings > Authentication.
  2. Turn on the Disable Local MFA for SSO setting.

This ensures users are only prompted for MFA once, at the identity provider level.

For SSO configuration details, see How to Configure SAML SSO or How to Configure OIDC SSO.

Local account users

For organizations using local (email/password) accounts, SecureAI supports time-based one-time passwords (TOTP) as a second factor:

  1. Navigate to Admin Panel > Settings > Authentication.
  2. Turn on the Require MFA for Local Accounts setting.

Once enabled, users without MFA configured are prompted to set it up at their next login. Users can enroll using any TOTP-compatible authenticator app (Google Authenticator, Authy, 1Password, etc.).
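Authenticator apps interoperate because they all implement RFC 6238 time-based one-time passwords: a code is derived from a shared secret and the current 30-second time step. A minimal Python sketch of the algorithm (illustrative only, not SecureAI code), verified against the RFC test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestamp=None, digits: int = 6, period: int = 30) -> str:
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of whole time steps since the Unix epoch.
    counter = int((timestamp if timestamp is not None else time.time()) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T = 59 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59))  # → 287082
```

Any app that implements this algorithm (Google Authenticator, Authy, 1Password, etc.) will produce the same codes from the same enrollment secret, which is why SecureAI does not require a specific app.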

What MFA methods are supported?

| Authentication method | MFA enforcement | Supported MFA types |
| --- | --- | --- |
| SAML SSO | At your identity provider | Whatever your IdP supports (push, TOTP, FIDO2, etc.) |
| OIDC SSO | At your identity provider | Whatever your IdP supports |
| Local accounts | In SecureAI admin settings | TOTP (time-based one-time passwords) |

Compliance considerations

Enforcing MFA is a common requirement for SOC 2, HIPAA, and other compliance frameworks. If your organization uses SSO, your identity provider's MFA enforcement satisfies this requirement for SecureAI access. For details on SecureAI's compliance posture, see Compliance and Certifications.

Does SecureAI support SOC 2, GDPR, or HIPAA?

Yes. SecureAI supports compliance with SOC 2, GDPR, and HIPAA. The specifics depend on which framework applies to your organization.

SOC 2 Type II

SecureAI's infrastructure runs on Google Cloud Platform (GCP), which maintains its own SOC 2 Type II certification. SecureAI maintains additional application-level controls covering access control (RBAC, SSO, MFA), encryption (AES-256 at rest, TLS 1.2+ in transit), audit logging, incident response, and change management.

To obtain SecureAI's SOC 2 Type II audit report, contact your account representative. The report is shared under NDA.

If your organization is undergoing its own SOC 2 audit and uses SecureAI as a subservice, reference SecureAI's report in your Complementary Subservice Organization Controls (CSOCs) section.

GDPR

SecureAI supports GDPR compliance through:

  • Data Processing Agreements (DPAs) -- available on request from your account representative.
  • Data subject rights -- administrators can export, rectify, and delete user data through the admin panel or API to fulfill access, erasure, and portability requests.
  • EU data residency -- data can be hosted in the europe-west1 (Belgium) region so that all stored data stays within the EU. This includes conversations, documents, user accounts, and audit logs.
  • Sub-processor transparency -- the full sub-processor list is included in the DPA.

HIPAA

SecureAI supports HIPAA compliance for organizations that handle protected health information (PHI):

  • Business Associate Agreements (BAAs) -- available on request. Contact your account representative to confirm applicability.
  • Technical safeguards -- unique user identification, role-based access, AES-256 encryption at rest with CMEK support, comprehensive audit logging, and TLS 1.2+ for all transmission.
  • Administrative responsibilities remain with your organization, including user access management, PHI handling training, and configuring retention policies that meet HIPAA minimums (typically 6 years for administrative records).

Note: Most automotive aftermarket organizations do not handle PHI through SecureAI. If you are unsure whether HIPAA applies to your use case, consult your compliance or legal team.

Can data stay in our region?

Yes. SecureAI supports data residency in multiple regions:

| Region | GCP Location | Availability |
| --- | --- | --- |
| United States | us-central1 (Iowa) | Default for all organizations |
| European Union | europe-west1 (Belgium) | Available on request |
| Additional regions | Contact account representative | Enterprise agreements |

Data residency applies to all stored data -- conversations, uploaded documents, user accounts, audit logs, and backups. To change regions after initial deployment, contact your account representative; migration requires a planned maintenance window.

Note that AI model provider interactions may involve data transfer outside your selected region. These transfers and their safeguards are documented in the DPA. For full control, configure a local model provider (Ollama or vLLM) so prompts never leave your infrastructure.

How to prove compliance to your auditor

  1. Request SecureAI's SOC 2 Type II report from your account representative (NDA required).
  2. Obtain your executed DPA (for GDPR) or BAA (for HIPAA).
  3. Export your security configuration from Admin Panel > Settings > Export Configuration.
  4. Export audit logs covering the audit period from Admin Panel > Audit Logs > Export.

How do we delete our data?

SecureAI gives administrators full control over data deletion. What you can delete and how depends on the type of data and your role.

What can be deleted

| Data type | Who can delete it | How |
| --- | --- | --- |
| Your own conversations | You | Profile settings or conversation list -- select conversations and delete |
| Any user's conversations | Administrators | Admin Panel > Conversations > select user > delete |
| Uploaded documents | Administrators | Admin Panel > Knowledge Bases > select document > delete |
| User accounts | Administrators | Admin Panel > Users > deactivate, then request full deletion |
| All organization data | Organization owner | Contact your account representative per your service agreement |

Deletion is permanent. Deleted conversations, documents, and their associated metadata (including vector embeddings for documents) cannot be recovered.

Deleting conversations

As a user: Open your conversation list, select the conversations you want to remove, and click Delete. This removes the conversation and all its messages from SecureAI.

As an administrator: Navigate to Admin Panel > Conversations. You can filter by user, date range, or keyword. Select conversations and delete them individually or in bulk.

Deleting uploaded documents

Administrators can remove documents from knowledge bases through Admin Panel > Knowledge Bases. When a document is deleted:

  • The file is removed from Cloud Storage.
  • Its vector embeddings are deleted from the search index.
  • Future AI responses will no longer reference the document's content.

Deleting user data

When a user leaves your organization:

  1. Deactivate the account -- Admin Panel > Users > select user > Deactivate. The user can no longer log in, but their data remains accessible for audit purposes.
  2. Request full deletion -- After deactivation, select Delete All User Data to permanently remove the user's conversations, uploads, and account information.

If you need to fulfill a GDPR erasure request (right to be forgotten), the full deletion option satisfies this requirement.

Automatic deletion via retention policies

Instead of deleting data manually, administrators can configure retention policies that automatically delete data after a specified period:

  1. Navigate to Admin Panel > Settings > Data Retention.
  2. Set retention periods for conversation history (e.g., 30, 90, or 365 days).
  3. Data older than the retention period is permanently deleted on a rolling basis.

For setup details, see Configuring Data Retention Policies.
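The rolling check in step 3 amounts to comparing each record's age against the retention window. A short Python sketch of the idea (illustrative, not SecureAI's implementation):

```python
from datetime import datetime, timedelta, timezone

def is_expired(created_at: datetime, retention_days: int, now=None) -> bool:
    """Rolling retention: data older than the retention window is eligible for deletion."""
    now = now or datetime.now(timezone.utc)
    return now - created_at > timedelta(days=retention_days)

# With a 90-day policy evaluated on 2024-06-01:
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
old_conversation = datetime(2024, 1, 1, tzinfo=timezone.utc)    # 152 days old
recent_conversation = datetime(2024, 5, 20, tzinfo=timezone.utc)  # 12 days old
print(is_expired(old_conversation, 90, now))     # → True
print(is_expired(recent_conversation, 90, now))  # → False
```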

Organization-level data deletion

When a service agreement ends, all data associated with your organization -- conversations, documents, user accounts, and audit logs -- is permanently deleted within the timeframe specified in your agreement (typically 30 days after termination). Contact your account representative to initiate this process.

How do we audit activity?

SecureAI logs every security-relevant action. Administrators can review these logs to track who did what and when.

What is logged

| Event category | Examples |
| --- | --- |
| Authentication | Logins, logouts, failed login attempts, SSO events |
| User management | Account creation, deactivation, role changes |
| Data access | Document uploads, document deletions, conversation exports |
| Configuration | SSO changes, retention policy changes, API token creation and revocation |
| Administrative | Admin data access, bulk operations, system setting changes |

Viewing and exporting audit logs

  1. Navigate to Admin Panel > Audit Logs.
  2. Filter by date range, user, or event type.
  3. Click Export to download logs in standard formats for integration with your SIEM or compliance tools.

Audit logs are retained independently of conversation data. The retention period is defined in your service agreement and is typically longer than conversation retention.

For step-by-step instructions, see How to Audit User Activity.
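Exported CSV logs can be pre-filtered before loading into a SIEM. A Python sketch using hypothetical column names (`timestamp`, `user`, `event_type`) -- the real export schema may differ:

```python
import csv
import io

# Hypothetical export snippet; actual column names may differ.
sample = """timestamp,user,event_type
2024-05-01T09:00:00Z,alice@example.com,login
2024-05-01T09:05:00Z,bob@example.com,failed_login
2024-05-01T09:06:00Z,bob@example.com,failed_login
2024-05-01T10:12:00Z,alice@example.com,document_upload
"""

def filter_events(csv_text: str, event_type: str) -> list:
    """Return only the rows matching the given event type."""
    return [row for row in csv.DictReader(io.StringIO(csv_text))
            if row["event_type"] == event_type]

failed = filter_events(sample, "failed_login")
print(len(failed))  # → 2
```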

What about AI model providers?

When you delete data from SecureAI, there is nothing to delete at the model provider. AI model providers (OpenAI, Anthropic, Azure OpenAI) do not retain your prompts or responses beyond the API request lifecycle. This is contractually enforced.

If your organization uses local models (Ollama or vLLM), prompts never leave your infrastructure in the first place.

How is data encrypted in SecureAI?

SecureAI encrypts all data both in transit and at rest. No data is stored or transmitted in plaintext.

Encryption in transit

All network communication uses TLS 1.2 or higher. This applies to:

  • Browser to SecureAI -- all user traffic is encrypted via HTTPS. HTTP connections are automatically redirected to HTTPS.
  • SecureAI to AI model providers -- API calls to upstream model providers (OpenAI, Anthropic, Google, etc.) use TLS-encrypted connections.
  • Internal service communication -- traffic between SecureAI's internal services within Google Cloud Platform uses GCP's default encryption in transit.

TLS certificates are managed automatically and rotated before expiration. There is nothing you need to configure for encryption in transit.
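If you call SecureAI's API from your own tooling, you can enforce the same TLS 1.2 floor on the client side. A Python stdlib sketch (the `HTTPSConnection` usage mentioned in the comment is illustrative):

```python
import ssl

# Client-side TLS floor matching SecureAI's stated minimum (TLS 1.2).
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # → True
# ctx can then be passed to e.g. http.client.HTTPSConnection(host, context=ctx),
# which will refuse to negotiate anything below TLS 1.2.
```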

Encryption at rest

All stored data is encrypted at rest using AES-256, the industry standard for symmetric encryption. This covers:

  • Conversations and chat history -- all messages between users and AI models.
  • Uploaded documents -- files uploaded for use with RAG (retrieval-augmented generation) or as chat attachments.
  • User accounts and profiles -- usernames, email addresses, roles, and preferences.
  • Audit logs -- all recorded user and admin activity.
  • Backups -- database and file backups are encrypted with the same standard.

Default encryption

By default, SecureAI uses Google Cloud Platform's built-in encryption at rest. GCP automatically encrypts all data before it is written to disk using Google-managed encryption keys. No configuration is required.

Customer-managed encryption keys (CMEK)

For organizations that require control over their own encryption keys, SecureAI supports Customer-Managed Encryption Keys (CMEK) through Google Cloud KMS.

With CMEK enabled:

  • You create and manage your encryption keys in Google Cloud KMS.
  • SecureAI uses your keys to encrypt and decrypt data.
  • You can rotate, disable, or revoke keys at any time.
  • Revoking a key makes the associated data permanently inaccessible.

To enable CMEK, contact your account representative. CMEK is available on Enterprise plans.

Where is data stored?

SecureAI runs on Google Cloud Platform. Your data is stored in the GCP region assigned to your organization:

| Region | GCP Location | Availability |
| --- | --- | --- |
| United States | us-central1 (Iowa) | Default for all organizations |
| European Union | europe-west1 (Belgium) | Available on request |
| Additional regions | Contact account representative | Enterprise agreements |

All stored data -- conversations, documents, user accounts, audit logs, and backups -- stays within your assigned region. To change regions after deployment, contact your account representative; migration requires a planned maintenance window.

For details on regional data handling and cross-border transfers, see your Data Processing Agreement (DPA).

What about data sent to AI model providers?

When a user sends a message, the prompt is transmitted to the configured AI model provider over a TLS-encrypted connection. Each provider has its own data handling policies:

  • Cloud-hosted providers (OpenAI, Anthropic, Google) -- SecureAI's enterprise agreements with these providers ensure that your prompts are not used for model training. Data retention by providers is governed by SecureAI's enterprise API agreements, not consumer terms.
  • Self-hosted models (Ollama, vLLM) -- if your organization runs a local model provider, prompts never leave your infrastructure. This gives you full control over data residency and eliminates third-party data exposure.

To configure a local model provider, see Adding Custom Model Providers.

How to verify your encryption configuration

  1. Go to Admin Panel > Settings > Security to view your current encryption and data residency settings.
  2. Export your security configuration from Admin Panel > Settings > Export Configuration for compliance documentation.
  3. Request SecureAI's SOC 2 Type II report from your account representative for independent verification of encryption controls.

Is my data used to train AI models?

No. Your prompts, conversations, and uploaded documents are never used to train AI models. This applies to both SecureAI's platform and the upstream AI model providers it connects to.

SecureAI does not train on your data

SecureAI is a platform that routes your requests to AI model providers. SecureAI does not build or train its own large language models. Your data is used only to generate responses to your queries and, if configured, to power your organization's RAG (retrieval-augmented generation) knowledge bases.

SecureAI stores your conversations and uploaded documents solely for the features you use -- chat history, search, audit logging, and knowledge base retrieval. This data is never aggregated, anonymized, or otherwise repurposed for model development.

AI model providers do not train on your data

SecureAI connects to model providers (OpenAI, Anthropic, Google, Azure OpenAI) through enterprise API agreements, not consumer accounts. Under these agreements:

  • Prompts and responses are not used for training. Enterprise API terms explicitly prohibit using customer inputs and outputs to train, improve, or fine-tune models.
  • Data is not retained beyond the API request. Providers process your prompt, return a response, and discard the data. There is no persistent storage of your queries on the provider side.
  • Zero-data-retention (ZDR) options are available with select providers for organizations that require contractual guarantees of no data logging at the provider level.

These protections apply automatically to all SecureAI users. No configuration is required.

What if we use local models?

If your organization runs local model providers (such as Ollama or vLLM), your prompts never leave your infrastructure. There is no third-party data exposure of any kind. Local models give you complete control over data residency and eliminate any concern about external training.

To set up a local model provider, see Adding Custom Model Providers.

Does SecureAI store my prompts?

Yes, SecureAI stores your conversations so that you can access your chat history, and so administrators can review activity through audit logs. This storage is:

  • Encrypted at rest using AES-256 (see How is data encrypted in SecureAI?).
  • Retained according to your organization's policies -- administrators can configure automatic deletion after a set period (see Configuring Data Retention Policies).
  • Deletable on demand -- users can delete their own conversations, and administrators can delete any user's data (see How do we delete our data?).
  • Confined to your assigned region -- data stays within the GCP region assigned to your organization.

Stored prompts are never shared with other organizations, used for analytics, or made available to SecureAI employees except when required for technical support with your explicit authorization.

How to verify these protections

  1. Review SecureAI's Data Processing Agreement (DPA), which contractually binds these commitments. Request a copy from your account representative.
  2. Request SecureAI's SOC 2 Type II report for independent verification of data handling controls.
  3. Review the enterprise API agreements with each model provider by contacting your account representative.
  4. Check Admin Panel > Settings > Security to see your current data handling and model provider configuration.

What data does SecureAI store?

SecureAI stores only the data necessary to provide the service -- your conversations, uploaded documents, account information, and audit logs. SecureAI does not store payment card numbers, does not retain data from AI model providers, and does not collect data beyond what you provide through normal use.

Data that SecureAI stores

Conversations and chat history

Every message you send and every AI response is stored so you can return to past conversations. This includes:

  • User prompts -- the messages you type or paste into the chat interface.
  • AI responses -- the model's replies, including any generated text, code, or structured output.
  • Conversation metadata -- timestamps, which model was used, token counts, and conversation titles.

Conversations are stored in SecureAI's database within your assigned GCP region. Administrators can configure retention policies to automatically delete conversations after a defined period.
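Conceptually, each stored conversation is a record of messages plus the metadata listed above. A hypothetical Python sketch (field names are illustrative, not SecureAI's actual schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConversationRecord:
    """Illustrative shape of a stored conversation; all field names are hypothetical."""
    title: str
    model: str            # which model produced the responses
    created_at: datetime
    messages: list = field(default_factory=list)  # {"role", "content", "tokens"}

    @property
    def token_count(self) -> int:
        """Aggregate token usage across all messages in the conversation."""
        return sum(m.get("tokens", 0) for m in self.messages)

conv = ConversationRecord(
    title="Quarterly report draft",
    model="example-model",
    created_at=datetime(2024, 5, 1, tzinfo=timezone.utc),
    messages=[
        {"role": "user", "content": "Summarize Q1", "tokens": 12},
        {"role": "assistant", "content": "...", "tokens": 180},
    ],
)
print(conv.token_count)  # → 192
```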

Uploaded documents

Files you upload to knowledge bases for RAG (retrieval-augmented generation) are stored along with their vector embeddings:

  • Original files -- PDFs, Word documents, text files, and other supported formats are stored in Cloud Storage.
  • Vector embeddings -- numerical representations of document content used for semantic search. These are stored in SecureAI's search index.
  • Document metadata -- file names, upload dates, file sizes, and which knowledge base a document belongs to.
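Semantic search works by ranking document embeddings against a query embedding, most commonly with cosine similarity. A minimal Python sketch of the scoring step (toy 3-dimensional vectors; real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b) -> float:
    """Cosine of the angle between two vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

doc_embedding = [0.9, 0.1, 0.0]
similar_query = [0.8, 0.2, 0.0]
unrelated_query = [0.0, 0.1, 0.9]

# A semantically close query scores higher than an unrelated one.
print(cosine_similarity(doc_embedding, similar_query)
      > cosine_similarity(doc_embedding, unrelated_query))  # → True
```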

User accounts and profiles

SecureAI stores the information needed to identify and authenticate users:

  • Identity information -- name, email address, and profile picture (if provided).
  • Role and permissions -- whether the user is an admin, a standard user, or has custom role assignments.
  • Preferences -- display settings, default model selection, and notification preferences.
  • Authentication records -- hashed passwords (for local accounts) or SSO provider identifiers. Passwords are never stored in plaintext.
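Storing only a salted, iterated hash is the standard way to avoid plaintext passwords. A Python sketch using PBKDF2 (illustrative of the technique; SecureAI's actual hashing scheme is not documented here):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None, iterations: int = 600_000):
    """Derive a salted PBKDF2-HMAC-SHA256 digest; only (salt, digest) is stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes,
                    iterations: int = 600_000) -> bool:
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # → True
print(verify_password("wrong password", salt, digest))               # → False
```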

Audit logs

SecureAI records security-relevant events for compliance and accountability:

  • Authentication events -- logins, logouts, failed login attempts, MFA events.
  • Administrative actions -- user management, configuration changes, data access by admins.
  • Data lifecycle events -- document uploads, document deletions, conversation exports.

Audit logs are retained independently of other data and typically have a longer retention period defined in your service agreement.

System configuration

Administrative settings are stored so your SecureAI instance maintains its configuration:

  • SSO and identity provider settings.
  • Content filter rules and safety configurations.
  • Model provider connections and rate limits.
  • Data retention policy settings.

Data that SecureAI does NOT store

Payment card numbers and billing details

SecureAI does not process or store credit card numbers, bank account details, or other payment instruments. Billing is handled through invoicing and your organization's procurement process -- not through a self-service payment form.

AI model provider data

When you send a message, the prompt is transmitted to the configured AI model provider (OpenAI, Anthropic, Google, etc.) for processing. SecureAI stores your prompt and the response, but the model provider does not retain your data beyond the API request lifecycle. Under SecureAI's enterprise API agreements, providers do not use your prompts for model training.

If your organization uses self-hosted models (Ollama, vLLM), prompts never leave your infrastructure.

Browser or device telemetry

SecureAI does not install tracking pixels, fingerprint your browser, or collect device-level telemetry. The application does not use third-party analytics services that track individual user behavior across sites.

Data from other applications

SecureAI does not access or ingest data from your email, calendar, file storage, or other business applications unless you explicitly connect an integration and an administrator approves it. Integrations only access the specific data sources you configure.

Conversation content from other users

Standard users can only see their own conversations. Administrators can access other users' conversations through the Admin Panel, but this access is logged in the audit trail. There is no cross-user data sharing unless an administrator explicitly enables a shared knowledge base.

How long is data retained?

Data retention depends on your organization's configuration and service agreement:

| Data type | Default retention | Configurable? |
| --- | --- | --- |
| Conversations | Indefinite (until deleted) | Yes -- Admin Panel > Settings > Data Retention |
| Uploaded documents | Indefinite (until deleted) | Yes -- administrators can delete at any time |
| User accounts | Until deactivated and deleted | Yes -- administrators manage lifecycle |
| Audit logs | Per service agreement (typically 1-2 years) | Contact account representative |
| Backups | 30 days rolling | Per service agreement |

After a service agreement ends, all organization data is permanently deleted within the timeframe specified in the agreement (typically 30 days).

How to review what is stored

  1. Your own data -- View your conversations and uploads in the SecureAI interface. You can delete your own conversations at any time.
  2. Organization data -- Administrators can review stored data through the Admin Panel: conversations, documents, user accounts, and audit logs.
  3. Data inventory -- Request a data inventory from your account representative for compliance documentation (GDPR Article 30 records of processing).

What does SecureAI mean by secure AI chat?

"Secure AI chat" means that every layer of the system -- from how your data is transmitted, to how it is stored, to who can access it -- is designed to protect your organization's information. SecureAI treats security as a foundational requirement, not an add-on.

Data never leaves your control

SecureAI ensures your conversations and documents stay under your organization's control:

  • Encryption in transit -- all traffic between your browser, SecureAI, and AI model providers uses TLS 1.2 or higher. No data is transmitted in plaintext.
  • Encryption at rest -- all stored data (conversations, uploaded documents, user accounts, audit logs) is encrypted using AES-256.
  • No training on your data -- SecureAI's enterprise API agreements with model providers (OpenAI, Anthropic, Google) explicitly prohibit using your prompts or responses for model training.
  • Self-hosted model option -- organizations can run models locally via Ollama or vLLM, keeping all data on their own infrastructure with zero third-party exposure.

For encryption details, see How is data encrypted in SecureAI?.

Access is controlled and auditable

SecureAI provides enterprise-grade access controls so the right people see the right data:

  • Single sign-on (SSO) -- authenticate users through your existing identity provider (Google, Microsoft, Okta, Auth0) instead of managing separate credentials.
  • Multi-factor authentication (MFA) -- require a second factor for all users or specific roles.
  • Role-based access control (RBAC) -- assign users to roles (admin, user, viewer) that determine what they can access and configure.
  • Conversation privacy -- user conversations are private by default. Admins can only access them through a controlled process that is recorded in the audit log.

For SSO setup, see Does SecureAI support Google, Microsoft, Okta, or Auth0 login?.

AI behavior is governed

Secure AI chat is not just about protecting data -- it also means controlling what the AI can say and do:

  • Content filtering -- evaluate prompts and responses against configurable safety rules before they reach users. Block harmful content, PII exposure, or industry-specific terms.
  • Prompt injection protection -- detect and block attempts to override system instructions or bypass safety controls.
  • System prompt guardrails -- enforce organization-wide instructions that restrict the AI to your business domain and require appropriate disclaimers.
  • Rate limiting -- prevent individual users from consuming excessive resources or generating high volumes of unreviewed content.

For safety configuration, see How do we manage AI safety?.

Everything is logged

Every security-relevant action in SecureAI is recorded in an immutable audit trail:

  • User logins and authentication events
  • Conversation access (including admin overrides)
  • Content filter matches and prompt injection detections
  • Admin configuration changes
  • Data export and deletion requests

Audit logs can be exported as CSV for compliance reporting (SOC 2, GDPR, HIPAA). See How do we delete our data? for data lifecycle details.

Compliance frameworks

SecureAI's security controls are designed to support common compliance requirements:

  • SOC 2 Type II -- independently audited controls for security, availability, and confidentiality.
  • GDPR -- data residency options, data export, right-to-erasure support, and Data Processing Agreements.
  • HIPAA -- available under Business Associate Agreements for healthcare organizations.

For full compliance details, see Does SecureAI support SOC 2, GDPR, or HIPAA?.

Summary

When SecureAI says "secure AI chat," it means:

| Layer | What it covers |
| --- | --- |
| Data protection | Encryption in transit and at rest, no model training on your data, optional self-hosted models |
| Access control | SSO, MFA, RBAC, private conversations with audited admin access |
| AI governance | Content filtering, prompt injection protection, system prompt guardrails |
| Auditability | Immutable audit logs for all security events, exportable for compliance |
| Compliance | SOC 2, GDPR, HIPAA support with independent verification |
