Frequently Asked Questions
50 questions
billing-plans
Can I add more users?
Yes. Administrators can add users at any time from the Admin Panel under Settings > Users > Add User. Additional seats are prorated for the remainder of the current billing cycle, so you only pay for the time the new user is active.
How prorating works
If you add a user halfway through your billing cycle, you are charged 50% of the per-seat rate for that cycle. The full per-seat rate applies starting the next billing cycle.
Is there a seat limit?
Some organizations have a seat cap defined in their service agreement. If your organization has a cap, you will see the limit displayed in the Admin Panel under Settings > Billing. To increase your seat allocation, contact your account representative.
Bulk user additions
For adding many users at once, see the bulk user import feature. Administrators can upload a CSV file or connect to an LDAP directory to provision multiple accounts simultaneously.
Can I belong to multiple workspaces?
Yes. Users can belong to more than one workspace at the same time. Your administrator assigns you to workspaces based on your role, department, or the resources you need. You can switch between your assigned workspaces at any time from the sidebar.
Switching workspaces
- Open the sidebar on the left side of the screen.
- Click the workspace name or workspace icon near the top.
- Select the workspace you want to switch to from the list of workspaces available to you.
When you switch workspaces, the available models and knowledge bases change to match that workspace's configuration. Your personal conversation history is preserved across all workspaces.
What changes between workspaces
Each workspace has its own:
- Models — the AI models available to members. A parts department workspace might offer models tuned for parts lookup, while an estimating workspace has models for repair procedures.
- Knowledge bases — document collections the models can search. Different workspaces can reference different catalogs, manuals, or internal documents.
Common examples
- A parts counter technician might belong to a Parts Counter workspace with aftermarket catalogs and a General workspace for ad-hoc research.
- A service advisor might belong to an Estimating workspace for collision repair procedures and a Parts Counter workspace to check part availability.
How to get access to another workspace
Workspace membership is managed by your administrator. If you need access to a workspace you are not currently assigned to, contact your administrator and request to be added.
Does belonging to multiple workspaces affect billing?
No. Billing is based on user seats, not workspace membership. A single user who belongs to three workspaces still counts as one seat.
Can I get an invoice?
Invoices are automatically sent to the billing contact on file at the beginning of each billing cycle. Each invoice includes a breakdown of active seats, per-seat rate, prorated charges for any mid-cycle additions, and the total amount due.
Viewing past invoices
Administrators can view and download past invoices in the Admin Panel under Settings > Billing > Invoice History.
Updating billing contacts
To change who receives invoices, contact your account representative with the new billing contact's name and email address. Changes take effect on the next billing cycle.
Requesting past invoices
If you need a copy of a past invoice that is not available in the Admin Panel, contact your account representative with the billing period you need. Invoices are typically available for the past 12 months.
How do I cancel a user's access?
Administrators can deactivate users from the Admin Panel under Settings > Users. Find the user, click their profile, and select Deactivate.
What deactivation does
- The user can no longer log in to SecureAI.
- Their conversation history and uploaded documents are preserved (subject to your organization's data retention policy).
- The seat is freed and does not count toward your seat total as of the next billing cycle.
Deactivation vs. deletion
- Deactivation preserves the user's data and can be reversed. Use this when someone leaves a role temporarily or moves to a different team.
- Deletion permanently removes the user account and all associated data. This action cannot be undone. Use this only when required by your data retention policies.
When does the billing change take effect?
Deactivated users stop counting toward your seat total at the start of the next billing cycle. If you deactivate a user mid-cycle, the seat is still billed for the remainder of that cycle.
How is SecureAI billed?
SecureAI is billed on a per-seat, per-month basis. Each active user account counts as one seat. Billing is calculated at the start of each monthly cycle based on the number of active seats at that time.
What counts as a seat?
Any user account with an "active" status in the Admin Panel counts as one seat. Deactivated or suspended accounts do not count toward your seat total.
Where can I see my current seat usage?
Administrators can view current seat usage in the Admin Panel under Settings > Billing. This shows the number of active seats, your plan's seat allocation, and the next billing date.
How do I get pricing details?
Pricing varies by organization based on volume, contract terms, and any negotiated discounts. Contact your account representative for pricing details specific to your organization.
I can't log in. What should I do?
If you are unable to log in to SecureAI, work through these steps in order:
1. Verify your URL
Make sure you are using the correct URL for your organization's SecureAI instance. Your organization may have a custom subdomain (e.g., yourcompany.secureai.example.com). Check with your administrator if you are unsure.
2. Reset your password
Click the Forgot Password link on the login page. A password reset email will be sent to the address associated with your account. Check your spam or junk folder if you do not see the email within a few minutes.
3. Confirm your account is active
Your administrator may have deactivated your account. Ask your administrator to check your account status in the Admin Panel under Settings > Users.
4. Check for SSO issues
If your organization uses Single Sign-On (SSO), the login issue may be on your identity provider's side. Try logging into other SSO-connected applications to see if the problem is specific to SecureAI.
5. Contact support
If none of the above steps resolve the issue, contact support with:
- Your email address
- Your organization name
- A description of the error message you see (or a screenshot)
- The browser and device you are using
Is there a free trial?
Trial availability varies by organization and is determined during the sales process. Some organizations receive a time-limited evaluation period with a set number of seats before committing to a paid plan.
What is included in a trial?
Trials typically include full access to SecureAI features, including all AI models, file upload capabilities, and conversation history. Seat counts and duration are defined in the trial agreement.
What happens when the trial ends?
When a trial period expires, user access is suspended until a paid plan is activated. Conversation history and uploaded documents from the trial period are preserved for 30 days after trial expiration to allow a smooth transition.
How do I request a trial?
Contact your account representative to discuss trial options for your organization.
What happens if I exceed my seat limit?
If your organization has a seat cap defined in your service agreement, you will receive a warning notification when approaching the limit. The Admin Panel displays your current usage relative to the cap under Settings > Billing.
Warning thresholds
- 80% of cap: An informational notice appears in the Admin Panel.
- 90% of cap: Email notifications are sent to the billing contact on file.
- At cap: New user creation is blocked until seats are freed or the cap is increased.
How to increase your seat allocation
Contact your account representative to adjust your seat cap. Increases can typically be processed within one business day. Any additional seats are prorated for the current billing cycle.
Freeing up seats
If you need to make room without increasing your cap, deactivate users who no longer need access. Deactivated users do not count toward your seat total as of the next billing cycle.
What plans does SecureAI offer?
SecureAI offers tiered plans designed for automotive aftermarket organizations of different sizes. All plans include the core SecureAI AI assistant platform built on OpenWebUI, with differences in seat counts, available AI models, storage, and support levels.
Plan tiers
| Feature | Starter | Professional | Enterprise |
|---|---|---|---|
| Seats | Up to 10 | Up to 100 | Unlimited |
| AI models | Standard models | Standard + advanced models | All models including custom |
| File uploads | 1 GB per user | 5 GB per user | Custom allocation |
| Knowledge bases | 3 | Unlimited | Unlimited |
| Conversation history | 90 days | 1 year | Unlimited retention |
| Support | Email + priority chat | Dedicated account manager | |
| SSO / SAML | Not included | Included | Included |
| Custom integrations | Not included | Available as add-on | Included |
How is pricing calculated?
All plans are billed on a per-seat, per-month basis. Each active user account counts as one seat. Pricing varies based on the plan tier, the number of seats, contract length, and any negotiated volume discounts. Annual billing is available at a discounted rate compared to monthly billing.
What is included in every plan?
Regardless of tier, every SecureAI plan includes:
- Secure, private AI access -- conversations are never used to train AI models
- Organization-scoped data isolation -- your data stays within your environment
- Conversation history and search -- find and reference past interactions
- File upload and analysis -- upload documents for AI-assisted review
- Admin Panel access -- manage users, roles, and workspace settings
- Automotive-specific assistants -- pre-configured assistants for parts lookup, diagnostics, and service workflows
How do I upgrade or change my plan?
Contact your account representative to discuss plan changes. Upgrades can typically be processed within one business day and are prorated for the current billing cycle. Downgrades take effect at the start of the next billing cycle.
How do I get a quote?
Contact your account representative for pricing details specific to your organization. Volume discounts are available for larger deployments.
search-and-history
Can I search and export past chats?
Yes. SecureAI lets you search through your past conversations and export them for record-keeping.
Searching past chats
Use the search bar at the top of the sidebar to find previous conversations by keyword. The search looks through conversation titles and message content, so you can find chats even if you don't remember the exact title.
Tips for finding what you need:
- Search by part number -- type an OEM or aftermarket part number to find every conversation that referenced it.
- Search by topic -- use keywords like "brake pads" or "torque specs" to narrow results.
- Use tags -- if you've tagged conversations (e.g., by customer or vehicle), you can filter by tag in the sidebar. See "Organizing conversations with tags" for details.
Results appear in the sidebar as a filtered conversation list. Click any result to reopen that conversation with its full message history.
Exporting conversations
To export a conversation:
- Open the conversation you want to export.
- Click the three-dot menu at the top of the chat area.
- Select Export.
- Choose your format: Markdown, PDF, or Plain Text.
- The file downloads to your browser's default download location.
Each export includes the full conversation thread -- your prompts, the AI responses, and any file references.
Exporting multiple conversations
There is no built-in bulk export feature. To export multiple conversations, repeat the single-conversation export for each one. If you need a bulk export for compliance or auditing purposes, contact your administrator -- they can use the Admin Panel to export conversation data across users.
Common use cases
- Fleet management records -- export diagnostic conversations to attach to vehicle service records.
- Audit trail -- save conversations that informed parts ordering decisions for internal review.
- Knowledge sharing -- export a helpful conversation and share the file with a colleague who has a similar question.
assistants-knowledge-bases
Can I update or version documents?
Yes. You can replace documents in a knowledge base at any time by removing the old file and uploading the new one. SecureAI does not maintain automatic version history for uploaded documents, so you manage versions by controlling which files are in the knowledge base.
How to replace a document
- Navigate to Knowledge in the left sidebar.
- Open the knowledge base containing the document.
- Remove the outdated document (click the delete icon next to the file).
- Upload the replacement document.
- Wait for indexing to complete.
The AI immediately starts using the new document for future queries. No re-attachment to conversations is required -- if the knowledge base is already attached, the updated content is used automatically.
Why you should replace rather than keep both versions
If you upload a new version alongside the old one, the AI may retrieve conflicting information. For example, a 2024 parts catalog and a 2025 catalog for the same product line could return different part numbers for the same vehicle. Always remove the outdated version before uploading the replacement.
Tips for managing document versions
- Name files with version or date indicators: Use names like
Brake-Components-Catalog-2025.pdfrather thancatalog.pdf. This makes it clear which version is currently loaded. - Test after replacing: Ask a question you know the answer to from the new document. Verify the AI cites the updated source.
- Keep originals outside SecureAI: Store prior versions on your local drive or shared network folder for reference. SecureAI does not archive removed documents.
Related articles
- How to Create and Manage Knowledge Bases -- full guide to knowledge base setup and management
- Knowledge Base Design Best Practices -- document organization and freshness management
Can assistants call APIs and use tools?
Yes. SecureAI assistants can call external APIs and use custom tools during conversations, giving them access to live data like parts inventory, pricing, shop management systems, and more.
How it works
An administrator creates a custom tool -- a Python function that calls an external API or service. The tool is then assigned to one or more assistants. When a user asks a question that needs external data, the assistant automatically decides whether to call the tool, runs it, and incorporates the results into its response.
For example, a "Parts Inventory Lookup" tool could query your parts database and return current stock levels and pricing when a user asks "Do we have part number 12345 in stock?"
What assistants can do with tools
| Capability | Example |
|---|---|
| Query external APIs | Check live parts inventory or pricing |
| Look up internal databases | Search shop management system records |
| Call third-party services | Pull vehicle data from a VIN decoder API |
| Perform calculations | Convert measurements or compute estimates |
| Retrieve real-time data | Check order status or shipping updates |
What you need
- Admin access is required to create and manage tools. Regular users cannot create tools but can use assistants that have tools assigned to them.
- Tools are written in Python and configured through the SecureAI admin panel under Workspace > Tools.
- The external API or service must be network-accessible from your SecureAI deployment.
Do tools work automatically?
Once an admin assigns a tool to an assistant, users don't need to do anything special. The assistant reads the tool's description and decides when to use it based on the user's question. Users interact with the assistant the same way they always do -- the tool calls happen behind the scenes.
If an assistant has multiple tools, it can call more than one in a single response when needed.
Limitations
- Tools require an admin to set up. Users cannot add their own tools.
- Tool execution depends on the external API being available. If the API is down, the tool call will fail and the assistant will let the user know.
- Each tool call adds processing time to the response. Assistants with many complex tools may respond more slowly.
- Tool access follows your organization's security policies. Admins control which assistants have which tools.
Getting started
- Admins: See Building Custom Tool Integrations for step-by-step setup instructions.
- Users: Ask your administrator which assistants have tools enabled, or look for assistants with descriptions mentioning live data, lookups, or integrations.
- Creating assistants: See How to Create an Assistant for the full assistant setup guide.
Can assistants use multiple knowledge bases?
Yes. You can attach more than one knowledge base to an assistant or to a single conversation. When multiple knowledge bases are attached, SecureAI searches all of them and combines the most relevant results to generate its answer.
How to attach multiple knowledge bases
In a conversation
- Start a new conversation or open an existing one.
- Click the + icon in the message input area (or use the Knowledge selector above the chat).
- Select the first knowledge base you want to use.
- Repeat to add additional knowledge bases -- there is no limit to how many you can attach.
- Ask your question. SecureAI searches across all attached knowledge bases for relevant content.
On an assistant (admin)
If your organization uses custom assistants (model presets configured by an administrator):
- Open Workspace > Models in the admin panel.
- Select the assistant you want to configure (or create a new one).
- In the assistant's knowledge settings, add one or more knowledge bases.
- Save the assistant. Every conversation using that assistant will automatically search the attached knowledge bases.
How retrieval works with multiple knowledge bases
When you ask a question, SecureAI queries all attached knowledge bases simultaneously. It ranks the retrieved chunks by relevance regardless of which knowledge base they came from, then uses the top results as context for its response. Source citations in the answer indicate which knowledge base and document each piece of information came from.
This means you do not need to merge documents into a single knowledge base just to search them together -- keep them organized by topic or source and attach whichever ones are relevant.
Tips for using multiple knowledge bases effectively
- Separate by subject area -- keep parts catalogs, service procedures, and warranty policies in distinct knowledge bases. This lets you mix and match based on the question at hand.
- Avoid duplicate content -- if the same document appears in two attached knowledge bases, the AI may retrieve redundant or conflicting chunks. Keep each document in one knowledge base only.
- Watch for version conflicts -- if you attach a knowledge base with a 2024 catalog and another with a 2025 catalog for the same product line, the AI may mix information from both years. Remove outdated versions or use only the current knowledge base when accuracy matters.
- Start focused, then expand -- attach only the knowledge bases relevant to your question. Attaching every available knowledge base increases noise and may reduce answer quality.
Can knowledge bases be versioned?
SecureAI does not have built-in version control for knowledge bases. Each knowledge base reflects its current contents -- when you add, remove, or replace a document, the change takes effect immediately.
To manage versions manually:
- Include the version or date in the knowledge base name (e.g.,
Brake Components 2025,Brake Components 2024-Archive). - Keep only current documents active -- move outdated documents to a separate archive knowledge base that you do not attach to everyday conversations.
- Replace rather than accumulate -- when a new edition of a catalog arrives, remove the old document and upload the new one in the same knowledge base rather than keeping both.
This approach gives you a clear history of what changed and prevents the AI from mixing outdated and current information.
Related articles
- How to Create and Manage Knowledge Bases -- full guide to creating, populating, and managing knowledge bases
- Knowledge Base Design Best Practices -- organizing documents, naming conventions, and chunking strategy
Can documents be private, shared with a team, or restricted by role?
Yes. SecureAI supports all three levels of document visibility through its knowledge base and assistant sharing settings. You control who can access your uploaded documents by choosing where and how you create your knowledge bases.
Private documents
When you create a knowledge base under your own account, it defaults to private. Only you can see the knowledge base, attach it to conversations, or query its contents. Other users -- including administrators -- cannot browse or search your private knowledge base documents through the SecureAI interface.
Private knowledge bases are ideal for:
- Personal reference materials you don't need to share
- Draft documents you're still refining before team use
- Sensitive information relevant only to your role (e.g., individual customer notes)
Shared documents
To share documents with your team, create a workspace-level knowledge base. Workspace knowledge bases are visible to all users in the workspace. Any team member can attach them to conversations and query the documents inside.
To share a knowledge base:
- Navigate to Knowledge in the left sidebar.
- Create a new knowledge base (or open an existing one).
- Set the visibility to Workspace (or ask your administrator to move it to the workspace level if you originally created it as personal).
- Upload the documents you want to share.
All workspace members can now attach this knowledge base to their conversations.
Good to know: Sharing a knowledge base gives other users read access to its contents through AI responses. They cannot download the original files or modify the knowledge base unless they have admin permissions.
Restricting access by role
SecureAI's role system controls what users can do with knowledge bases:
| Capability | User role | Admin role |
|---|---|---|
| Create personal knowledge bases | Yes | Yes |
| Attach workspace knowledge bases to conversations | Yes | Yes |
| Create workspace-level knowledge bases | Depends on admin settings | Yes |
| Upload documents to workspace knowledge bases | Depends on admin settings | Yes |
| Delete workspace knowledge bases | No | Yes |
| Manage knowledge base visibility settings | No | Yes |
Administrators can restrict whether regular users are allowed to create workspace knowledge bases or upload documents to shared collections. This is configured in Settings > User Permissions in the admin panel.
If you need access to a knowledge base you can't see, or want to restrict a shared knowledge base to a specific group, contact your SecureAI administrator.
Assistants and document access
When you create an assistant and attach a knowledge base to it, the assistant's visibility setting controls who can use those documents indirectly:
- Private assistant with a private knowledge base -- only you can access the documents through that assistant.
- Public assistant with a workspace knowledge base -- all workspace users can query the documents through the assistant.
- Public assistant with a private knowledge base -- other users can use the assistant but the knowledge base content is accessible through the assistant's responses. Consider whether the documents should be in a workspace knowledge base instead if you intend to share the assistant widely.
Summary
| Goal | How |
|---|---|
| Keep documents private | Create a personal knowledge base (default) |
| Share documents with the whole team | Create a workspace-level knowledge base |
| Limit who can create shared knowledge bases | Admin configures user permissions |
| Share documents through an assistant | Attach knowledge base to a public assistant |
For step-by-step instructions on creating and managing knowledge bases, see How to Create and Manage Knowledge Bases. For details on assistant sharing, see How to Create an Assistant.
How does RAG work in SecureAI?
RAG (Retrieval-Augmented Generation) is how SecureAI searches your uploaded documents and uses them to answer questions. Instead of relying only on what the AI model was trained on, RAG pulls in relevant information from your knowledge bases so the AI can give answers grounded in your organization's actual data.
How document search works step by step
When you ask a question with a knowledge base attached, SecureAI follows this process:
- Your question is converted into a search query. SecureAI transforms your message into a numerical representation (called an embedding) that captures its meaning.
- The system searches your knowledge base. It compares your query against all the document chunks stored in the knowledge base and finds the ones most closely related to what you asked.
- The most relevant chunks are retrieved. SecureAI selects the top-matching passages from your documents. These are the "retrieval" part of RAG.
- The AI generates a response using those chunks. The retrieved passages are included as context alongside your question, and the AI model produces an answer grounded in that content. This is the "augmented generation" part.
- Sources are cited. The response includes references to the specific documents and passages the AI used, so you can verify the information.
What is a knowledge base?
A knowledge base is a collection of documents you upload to SecureAI. When you upload a file, SecureAI:
- Parses the document -- extracts text from PDFs, Word files, spreadsheets, and other supported formats.
- Splits the text into chunks -- breaks long documents into smaller passages (typically a few hundred words each) so the search can return focused, relevant sections rather than entire documents.
- Creates embeddings -- converts each chunk into a numerical vector that represents its meaning. These embeddings are stored in a vector database for fast similarity search.
When does RAG activate?
RAG only runs when a knowledge base is attached to your conversation or assistant. If no knowledge base is attached, the AI responds using only its built-in training data.
You can attach a knowledge base by:
- Clicking the + icon in the message input area and selecting a knowledge base.
- Using the Knowledge selector above the chat area.
- Using an assistant that has knowledge bases pre-configured by an administrator.
What affects the quality of RAG results?
| Factor | Impact |
|---|---|
| Document quality | Clean, well-structured documents with clear headings produce better chunks and more accurate retrieval. Scanned PDFs without OCR or documents with heavy formatting noise may not parse well. |
| Question specificity | Specific questions (e.g., "What is the torque spec for a 2024 Camry front brake caliper?") retrieve more relevant results than vague ones (e.g., "Tell me about brakes"). |
| Knowledge base scope | Attaching only the knowledge bases relevant to your question reduces noise. Attaching every available knowledge base can dilute results. |
| Document size and chunking | Very short documents may not provide enough context per chunk. Very long documents with mixed topics may produce chunks that blend unrelated content. |
| Recency | Knowledge bases reflect their current contents. If you need up-to-date information, make sure the latest version of a document has been uploaded. |
Limitations to be aware of
- RAG does not search the internet. It only searches documents you have uploaded to your knowledge bases.
- Context window limits apply. The AI can only use a limited number of retrieved chunks per response. If your question spans many topics across many documents, some relevant information may not be included.
- The AI may still generate text beyond what the documents say. While RAG grounds the AI in your data, it can still combine retrieved information with its general knowledge. Check source citations to confirm which parts of an answer came from your documents.
- Exact keyword matches are not guaranteed. RAG uses semantic (meaning-based) search, not keyword search. A question about "replacing brake pads" will match documents about "brake pad replacement" even if the exact wording differs -- but it may also match loosely related content about brakes in general.
Related articles
- Can assistants use multiple knowledge bases? -- attaching and managing multiple knowledge bases
- How to Create and Manage Knowledge Bases -- uploading documents and configuring knowledge bases
- Knowledge Base Design Best Practices -- organizing documents and chunking strategy
What file types and sizes are supported for upload?
SecureAI supports a range of document and image file types for upload to knowledge bases and conversations. This page covers supported formats, size limits, and how many documents you can upload.
Supported file types
Documents
| Format | Extension | Notes |
|---|---|---|
.pdf |
Text-based PDFs are fully indexed. Scanned PDFs without an OCR text layer are not searchable -- run them through OCR software before uploading. | |
| Plain text | .txt |
Best for structured data like parts lists or logs. |
| Markdown | .md |
Headings and formatting are preserved during indexing. |
| Microsoft Word | .docx |
Text and basic formatting are extracted. Embedded images are not indexed. |
| Microsoft Excel | .xlsx |
Tabular data is extracted. Works well for parts cross-reference tables and fitment data. |
| Microsoft PowerPoint | .pptx |
Slide text is extracted. Embedded images are not indexed. |
| CSV | .csv |
Good for structured datasets like parts catalogs or price lists. |
Images (in conversations)
| Format | Extension | Notes |
|---|---|---|
| JPEG | .jpg, .jpeg |
Best for photos of parts, labels, or VIN plates. |
| PNG | .png |
Best for screenshots and diagrams. |
| GIF | .gif |
Static images only; animation frames are not analyzed. |
| WebP | .webp |
Supported for general image uploads. |
Images uploaded in a conversation are sent directly to the AI model for visual analysis. Images uploaded to a knowledge base are not visually analyzed -- only any embedded text metadata is indexed.
File size limits
| Limit | Value |
|---|---|
| Maximum file size | 100 MB per file |
| Recommended file size | Under 25 MB for fastest processing |
| PDF page limit | No hard limit, but PDFs over 100 pages may take significantly longer to index. Consider splitting very large PDFs into smaller sections. |
Large files are processed in the background after upload. You will see a progress indicator while indexing completes. Documents are not searchable until indexing finishes.
How many documents can I upload?
There is no fixed limit on the number of documents per knowledge base. However, keep these practical considerations in mind:
- Retrieval quality decreases with unfocused collections. A knowledge base with 500 unrelated documents will return less relevant results than several focused knowledge bases with 50 documents each. See How should I structure knowledge bases? for guidance.
- Indexing time scales with volume. Uploading hundreds of documents at once may take several minutes to fully index. Upload in batches if you need some documents available immediately.
- Storage quotas may apply. Your administrator may configure storage limits for your organization. If you receive an error when uploading, check with your administrator about available storage.
Tips for automotive aftermarket documents
- Parts catalog PDFs: Split large multi-section catalogs into one PDF per product line or category. This improves both indexing speed and retrieval accuracy.
- Fitment data: CSV or Excel format works best for fitment tables. The AI can cross-reference rows more accurately than when parsing PDF tables.
- Technical service bulletins: Upload as individual PDFs rather than combined volumes. This lets the AI retrieve the specific bulletin relevant to a question.
- Scanned documents: If you have scanned paper manuals or spec sheets, run them through OCR software first. Without a text layer, SecureAI cannot search the content.
What happens if my file is not supported?
If you try to upload an unsupported file type, SecureAI will display an error message and the file will not be added. Convert the file to a supported format (PDF or plain text work for most cases) and try again.
Related articles
- How should I structure knowledge bases? -- organizing documents for better retrieval
- Can assistants use multiple knowledge bases? -- attaching multiple knowledge bases to conversations
- How does RAG work in SecureAI? -- how uploaded documents are searched and used in responses
What is the difference between an assistant and a model?
A model is a raw AI engine -- it reads your messages and generates responses. An assistant is a pre-configured package that wraps a model together with a system prompt, knowledge bases, and tools so it behaves in a specific way out of the box.
Models
A model (such as GPT-4o, Claude, or Gemini) provides general-purpose intelligence. When you select a model directly in the model selector, you get its default behavior with no additional instructions, no attached documents, and no tool integrations. You are starting from a blank slate every conversation.
Models differ in capability, speed, and cost:
| Trait | What it means |
|---|---|
| Capability | Some models are better at reasoning, writing, or code. Others are faster but less detailed. |
| Speed | Smaller models respond quickly. Larger models take longer but produce more thorough answers. |
| Cost | Each message uses tokens. Larger models cost more tokens per response. |
Assistants
An assistant bundles a model with extra configuration so users don't have to set it up themselves:
- System prompt -- instructions that shape the assistant's tone, focus, and behavior (e.g., "You are a parts specialist for the automotive aftermarket").
- Knowledge bases -- documents the assistant can search automatically using RAG, such as parts catalogs, service procedures, or warranty policies.
- Tools -- external API integrations the assistant can call, like inventory lookups or VIN decoders.
When you select an assistant instead of a plain model, all of that configuration is applied instantly.
When to use each
| Scenario | Choose |
|---|---|
| Quick, general-purpose question with no specialized context | A model |
| Conversation that needs your organization's documents or live data | An assistant with the right knowledge bases and tools |
| Repeatable workflow (e.g., parts lookup, estimate drafting) | An assistant configured for that job |
If your administrator has created assistants for your team, those will appear alongside regular models in the model selector. Select one to start a conversation with its full configuration applied.
A practical example
Imagine your organization has a Parts Lookup Assistant:
- Model: A fast model optimized for quick answers.
- System prompt: "You are a parts specialist. Always include part numbers, fitment, and pricing when available."
- Knowledge bases: Your aftermarket parts catalogs and pricing sheets.
- Tools: A live inventory lookup that checks current stock levels.
If you ask "Do we have brake pads for a 2022 Camry?" using this assistant, it searches your parts catalogs, calls the inventory tool, and responds with part numbers, prices, and stock status -- all without you configuring anything.
If you asked the same question using the raw model, it would only have its general training to draw on and could not access your catalogs or inventory system.
Related articles
- What are workspaces, models, tools, and knowledge bases? -- overview of all core concepts
- How to Choose the Right AI Model -- guidance on selecting models
- Can assistants call APIs and use tools? -- tool capabilities and setup
- How to Create an Assistant -- assistant configuration guide (admins)
integrations
Can SecureAI connect to Slack, Microsoft 365, or Google Drive?
Yes. SecureAI supports built-in integrations with all three platforms. Each one is configured by an administrator in the SecureAI admin panel -- no custom development is required.
Slack
SecureAI's Slack integration enables two-way communication. Once connected, team members can mention @SecureAI in a Slack channel to ask questions directly, without opening the web interface. SecureAI responds in a thread with the answer. You can also DM the bot for private queries.
To connect: ask your admin to go to Settings > Integrations > Slack and install the SecureAI Slack app.
Good to know: Slack conversations are separate from web conversations -- context does not carry over between the two. File uploads through Slack are limited to text and PDF.
Microsoft 365
SecureAI integrates with SharePoint, OneDrive, Teams, and Outlook. Your admin can connect one or more of these services so that documents stored in SharePoint or OneDrive are searchable in SecureAI conversations. If Teams is enabled, you can query SecureAI from a Teams channel using @SecureAI, similar to the Slack integration.
To connect: ask your admin to go to Settings > Integrations > Microsoft 365 and sign in with a Microsoft 365 admin account.
Good to know: Requires a Microsoft 365 Business or Enterprise plan. SharePoint site-level permissions are respected -- SecureAI only indexes content the authenticated account can access.
Google Drive
SecureAI connects to your organization's Google Drive so that documents, spreadsheets, and PDFs stored in Drive can be referenced in conversations. This is especially useful for parts catalogs and spec sheets your team already maintains in Drive.
To connect: ask your admin to go to Settings > Integrations > Google Drive and authenticate with a Google Workspace account.
Good to know: SecureAI can read Drive files but cannot modify them. Large files (over 50 MB) may be partially indexed.
Learn More
For detailed setup instructions, supported use cases, and a comparison of all available integrations, see the Integration Overview article.
Do integrations require admin approval?
Yes. All integrations in SecureAI must be configured and enabled by an administrator. Standard users cannot connect, modify, or disconnect integrations on their own.
Why admin approval is required
Integrations connect SecureAI to external systems that contain sensitive data -- customer records, internal documents, communications, and proprietary databases. Requiring admin control ensures that:
- Only authorized data sources are connected to SecureAI.
- Credentials (API keys, OAuth tokens) are managed centrally and never exposed to end users.
- Access scope is deliberately chosen -- admins decide exactly which folders, channels, or data objects SecureAI can reach.
- Audit trails capture who enabled each integration and when.
How to request an integration
If you need an integration that is not currently enabled:
- Contact your SecureAI administrator.
- Let them know which platform you need connected (e.g., Google Drive, Slack, HubSpot).
- The admin enables the integration in Settings > Integrations in the admin panel.
Once enabled, the integration is available to all users in your organization -- no per-user setup is needed.
Can integrations be read-only?
Yes. Most built-in integrations are read-only by default, meaning SecureAI can pull data in but cannot create, modify, or delete anything in the connected platform.
| Integration | Access type | What SecureAI can do |
|---|---|---|
| Google Drive | Read-only | Index and search documents. Cannot edit, delete, or upload files to Drive. |
| HubSpot | Read-only | Read contacts, companies, deals, and tickets. Cannot create or update HubSpot records. |
| Microsoft 365 (SharePoint, OneDrive, Outlook) | Read-only | Index documents and read emails. Cannot modify files or send emails. |
| Slack | Two-way | Read messages in connected channels and post responses. Cannot delete messages or manage channels. |
| Microsoft Teams | Two-way | Read messages in connected channels and post responses. Cannot manage channels or users. |
| Custom API | Configurable | Scope is set by the admin when generating the API key: read, write, or admin. |
For Slack and Teams, the two-way access is limited to posting responses when a user mentions @SecureAI. The bot cannot initiate conversations or take actions outside its configured channels.
Admin controls for integrations
Administrators manage integrations through several controls:
- Enable or disable each integration independently in the admin panel.
- Select data scope -- choose which Drive folders, SharePoint sites, Slack channels, or HubSpot objects SecureAI can access.
- Set sync intervals -- control how frequently SecureAI pulls data from connected platforms.
- Revoke access at any time by disconnecting the integration or rotating credentials.
- Restrict chat features -- admins can also disable related features like web search, file uploads, and code execution under Settings > Interface.
For details on feature restrictions, see Can admins restrict models and integrations?.
Related articles
- Integration Overview: What Connects to SecureAI -- full list of available integrations with setup instructions
- Can SecureAI connect to Slack, Microsoft 365, or Google Drive? -- quick overview of built-in connectors
- Can admins restrict models and integrations? -- admin controls for models, tools, and features
- Building Custom Tool Integrations -- creating custom tools with configurable API access
using-secureai
Can SecureAI run with private models?
Yes. SecureAI supports private, self-hosted models alongside cloud-hosted providers. You can connect models running on your own infrastructure using Ollama, vLLM, llama.cpp, LocalAI, or any OpenAI-compatible server, so that no data leaves your network.
What counts as a "private model"?
A private model is any LLM that runs on infrastructure you control rather than calling an external provider's API. Common setups include:
- Ollama running open-weight models like Llama, Mistral, or CodeLlama on a local server or GPU cluster.
- vLLM or llama.cpp serving fine-tuned or specialized models behind an OpenAI-compatible endpoint.
- Azure OpenAI deployed in your own Azure tenant, where data stays within your controlled cloud environment.
How do I connect a private model?
An administrator configures private models in Admin Panel > Settings > Connections. The setup requires only the model server's endpoint URL — no API key is needed if the server is on the same network as SecureAI.
For step-by-step instructions, see Adding Custom Model Providers.
Can I use private and cloud models at the same time?
Yes. SecureAI aggregates models from all configured providers into a single model selector. Users can choose between private models (for sensitive data) and cloud models (for tasks where speed or capability matters) on a per-conversation basis.
Why use private models?
| Benefit | Details |
|---|---|
| Data residency | Prompts and responses never leave your infrastructure |
| Compliance | Meet regulatory requirements that prohibit sending data to third-party APIs |
| Cost control | No per-token charges — only your infrastructure costs |
| Custom fine-tuning | Run models fine-tuned on your organization's proprietary data |
Are there any limitations?
Private models depend on your hardware. Response speed and quality vary based on the model size and the GPU/CPU resources available. Smaller open-weight models may not match the capability of large cloud models for complex reasoning tasks. Monitor performance in Admin Panel > Dashboard > Usage Statistics.
Can we use OpenAI, Anthropic, or Azure OpenAI?
Yes. SecureAI supports multiple AI model providers. You can connect OpenAI, Anthropic, Azure OpenAI, and local models -- all accessible through a single unified interface.
Supported providers
| Provider | Connection method | Example models |
|---|---|---|
| OpenAI | API key | GPT-4o, GPT-4 Turbo, o1, o3 |
| Anthropic | API key | Claude Opus, Claude Sonnet, Claude Haiku |
| Azure OpenAI | Endpoint URL + API key | GPT-4o, GPT-4 (deployed in your Azure tenant) |
| Local models | Ollama, vLLM, or any OpenAI-compatible server | Llama, Mistral, CodeLlama, Gemma |
Administrators can enable any combination of these providers. Users see all available models in a single model selector dropdown.
How are providers configured?
An administrator adds providers in Admin Panel > Settings > Connections:
- OpenAI -- Enter your OpenAI API key. All models available on your OpenAI account appear automatically.
- Anthropic -- Enter your Anthropic API key. Claude models appear in the model selector.
- Azure OpenAI -- Enter your Azure endpoint URL and API key. Only models deployed in your Azure tenant are listed.
- Local models -- Enter the endpoint URL of your Ollama, vLLM, or compatible server. No API key is needed for servers on the same network.
For step-by-step instructions, see Adding Custom Model Providers.
Can I use multiple providers at the same time?
Yes. SecureAI aggregates models from all configured providers into one model selector. You can switch between providers on a per-conversation basis. For example, use Claude for analysis tasks and a local Llama model for conversations involving sensitive internal data.
Which provider should I choose?
It depends on your priorities:
- OpenAI or Anthropic -- Best model quality and speed. Data is sent to the provider's API.
- Azure OpenAI -- Enterprise-grade cloud models with data residency in your Azure tenant. Useful for compliance requirements.
- Local models -- No data leaves your network. Best for strict data privacy requirements, but performance depends on your hardware.
Your administrator may restrict which providers and models are available to you. See Can admins restrict models and integrations? for details.
Do I need my own API keys?
No. Administrators configure provider API keys centrally. Individual users do not need their own keys -- they simply select a model from the dropdown and start chatting.
What AI models are supported?
SecureAI supports a wide range of AI models from multiple providers. Your administrator controls which models are available in your organization's instance.
Cloud-hosted model providers
SecureAI includes built-in support for the following cloud-hosted providers:
| Provider | Example models | Strengths |
|---|---|---|
| OpenAI | GPT-4o, GPT-4, GPT-4o-mini | General-purpose reasoning, code generation, creative writing |
| Anthropic | Claude 4 Sonnet, Claude 4 Opus, Claude 3.5 Haiku | Long-context analysis, nuanced reasoning, document comprehension |
| Gemini 2.5 Pro, Gemini 2.5 Flash | Multimodal tasks, large context windows, fast responses |
Model availability depends on your organization's subscription and the API keys your administrator has configured. You will only see models that your administrator has enabled.
Self-hosted and local models
SecureAI also supports self-hosted model providers, giving your organization full control over data residency and model selection:
- Ollama -- run open-source models locally. Popular choices include Llama 3, Mistral, and Phi-3.
- vLLM -- high-performance inference server for hosting large models on your own GPU infrastructure.
- Any OpenAI-compatible API -- SecureAI can connect to any endpoint that implements the OpenAI API format, including custom fine-tuned models and specialized inference servers.
With self-hosted models, prompts never leave your infrastructure. This is the preferred option for organizations with strict data residency requirements.
How to see which models are available to you
- Open a new chat in SecureAI.
- Click the model selector dropdown at the top of the chat area.
- The list shows all models your administrator has enabled for your account.
If a model you need is not listed, ask your administrator to enable it. Administrators manage model availability from Admin Panel > Settings > Models.
Choosing the right model
Different models are suited for different tasks:
| Task | Recommended approach |
|---|---|
| Quick questions and simple tasks | Use a smaller, faster model (e.g., GPT-4o-mini, Claude 3.5 Haiku, Gemini 2.5 Flash) for lower latency and cost. |
| Complex analysis and reasoning | Use a larger model (e.g., GPT-4o, Claude 4 Opus, Gemini 2.5 Pro) for tasks requiring deeper reasoning or multi-step problem solving. |
| Working with long documents | Choose a model with a large context window. Claude and Gemini models support context windows up to 200K+ tokens. |
| Sensitive or regulated data | Use a self-hosted model (via Ollama or vLLM) to keep all data within your infrastructure. |
Your organization may have guidelines on which models to use for specific types of work. Check with your administrator if you are unsure.
Can my administrator restrict model access?
Yes. Administrators can set each model's visibility to "All users" or "Admins only." Models set to "Admins only" are hidden from standard users entirely. For details, see Can admins restrict models and integrations?.
Related articles
- Can admins restrict models and integrations? -- model visibility and access controls
- How is SecureAI different from ChatGPT? -- multi-model access as a key differentiator
- Adding Custom Model Providers -- connecting additional model providers
- How is data encrypted in SecureAI? -- encryption for data sent to model providers
What is a token and how is usage measured?
A token is the basic unit AI models use to process text. Understanding tokens helps you estimate costs, choose the right model for a task, and stay within your organization's usage limits.
What is a token?
AI models do not read text word by word. Instead, they break text into smaller pieces called tokens. A token can be a whole word, part of a word, a punctuation mark, or a space.
As a rough guide:
- 1 token is approximately 4 characters or 0.75 words in English.
- A short sentence like "How do I reset my password?" is about 8 tokens.
- A full page of text (roughly 500 words) is about 650--700 tokens.
- A 10-page document is roughly 6,500--7,000 tokens.
The exact number of tokens depends on the specific model's tokenizer. Different providers (OpenAI, Anthropic, Google) use slightly different tokenization methods, so the same text may produce slightly different token counts across models.
How is usage measured?
Every time you send a message in SecureAI, usage is measured in two parts:
| Component | What it counts |
|---|---|
| Input tokens | Your message, any system prompt, conversation history sent for context, and any documents or knowledge base content retrieved via RAG |
| Output tokens | The AI model's response |
Total tokens per message = input tokens + output tokens.
A few things that affect your token count:
- Conversation history -- as a conversation grows longer, each new message includes prior messages as context, increasing input tokens. Starting a new chat resets this.
- Knowledge base (RAG) retrieval -- when SecureAI pulls relevant documents to answer your question, those retrieved passages count as input tokens.
- Attachments and uploads -- files you attach to a message (PDFs, text files, images) are converted to tokens and included in the input.
- System prompts -- your organization's system prompt is included with every message as input tokens. Longer system prompts increase per-message costs.
How many tokens does a typical message use?
Token usage varies widely depending on the task:
| Scenario | Approximate tokens |
|---|---|
| Simple question and short answer | 200--500 total |
| Question with a few paragraphs of context | 1,000--2,000 total |
| Analyzing an uploaded document (10 pages) | 7,000--10,000 total |
| Long conversation (20+ back-and-forth messages) | 10,000--30,000+ total |
These are estimates. Actual usage depends on the model, the length of your prompts and responses, and how much context is included.
Why do costs vary by model?
Different models charge different rates per token. Generally:
- Smaller, faster models (e.g., GPT-4o-mini, Claude 3.5 Haiku, Gemini 2.5 Flash) cost less per token and are suited for routine tasks.
- Larger, more capable models (e.g., GPT-4o, Claude 4 Opus, Gemini 2.5 Pro) cost more per token but handle complex reasoning and analysis better.
- Self-hosted models (via Ollama or vLLM) have no per-token API cost -- you pay only for the infrastructure to run them.
Your administrator may set usage limits per user or per model to manage costs. If you hit a limit, you will see a message indicating that your usage quota has been reached.
Tips for managing token usage
- Start new chats for unrelated questions instead of continuing a long conversation. This keeps input token counts low.
- Use smaller models for simple tasks like summarization, formatting, or quick lookups.
- Be specific in your prompts. Clear, focused questions produce shorter, more relevant responses.
- Limit attachments to the relevant pages or sections rather than uploading entire large documents when possible.
Where can I see my usage?
Usage visibility depends on your role:
- Users -- check your usage in Settings > Account > Usage. You can see token counts per conversation and overall totals for the current billing period.
- Administrators -- view organization-wide usage in Admin Panel > Usage. This includes per-user breakdowns, per-model costs, and trend data.
For billing details, see How is SecureAI billed?.
Related articles
- How is SecureAI billed? -- pricing tiers and billing cycles
- What AI models are supported? -- available models and their strengths
- How does RAG work in SecureAI? -- how knowledge base retrieval affects token usage
- What are workspaces, models, tools, and knowledge bases? -- core concepts overview
administration
Can admins restrict models and integrations?
Yes. Administrators have full control over which models are available, which integrations are enabled, and who can access shared documents. Here is a summary of each control area.
Restricting model access
Admins can control which models appear in the model selector for standard users:
- Navigate to Admin Panel > Settings > Models.
- Set each model's Visibility to one of:
- All users -- the model appears for everyone.
- Admins only -- the model is hidden from standard users.
Models set to "Admins only" do not appear in the model selector for non-admin users. This is useful for limiting access to expensive models or models still being evaluated.
For details on adding and configuring model providers, see Adding Custom Model Providers.
Disabling integrations and chat features
Admins can toggle specific features on or off for all non-admin users:
- Navigate to Admin Panel > Settings > Interface.
- Toggle any of the following:
- Web search -- allow or block users from enabling web search in conversations.
- Image generation -- allow or block image generation requests.
- Code execution -- allow or block sandboxed code execution.
- File uploads in chat -- allow or block file attachments in messages.
Disabled features are hidden from the interface entirely -- users do not see them at all.
Tool integrations are managed separately under Workspace > Tools. Only admins can create, edit, or delete tools and configure tool integrations (valves). Standard users can only use tools that have been assigned to their assistants.
For details on building and managing tools, see Building Custom Tool Integrations.
Controlling document access
Admins control who can upload documents to the shared knowledge base:
- Navigate to Admin Panel > Settings > Knowledge Base.
- Under Shared uploads, choose:
- Allow all users -- any user can upload to shared collections.
- Admins only -- only admins can add to shared collections. Users can still upload to their personal workspace.
Admins can also delete any user's uploaded documents and manage knowledge base collections. Standard users can only delete their own uploads.
For the full permission breakdown, see Managing User Roles and Permissions.
Quick reference
| Control | Where to configure | Options |
|---|---|---|
| Model visibility | Admin Panel > Settings > Models | All users, Admins only |
| Chat features | Admin Panel > Settings > Interface | Toggle per feature |
| Tool management | Workspace > Tools | Admin-only by default |
| Shared KB uploads | Admin Panel > Settings > Knowledge Base | All users, Admins only |
| Document deletion | Admin Panel > Knowledge Base | Admins can delete any; users delete own |
Can admins see user chats?
Yes. Administrators can view all conversations within your organization. This access exists for compliance, auditing, and user support purposes.
What admins can see
Administrators have access to:
- Full conversation history -- every message in every conversation for all users in the organization, including prompts and AI responses.
- Conversation metadata -- titles, timestamps, model selection, and session identifiers.
- Uploaded documents -- files uploaded to knowledge bases or attached to conversations.
Regular users cannot see other users' conversations. Only accounts with the Admin role have cross-user visibility.
How admins access conversations
Administrators view conversations through the Admin Panel:
- Go to Admin Panel > Users.
- Select the user whose conversations you want to review.
- Click Conversations to see their full chat history.
Conversation metadata is also visible in the analytics dashboard under Admin Panel > Analytics.
Exporting conversations
Administrators can export conversation data for compliance or record-keeping:
- Single user export -- from the Admin Panel, select a user and choose Export Conversations. Available formats: Markdown, PDF, Plain Text.
- Bulk export -- use the Admin Panel or the API to export conversation data across multiple users. This is useful for compliance audits, GDPR data subject access requests, or internal reviews.
- Audit log export -- export activity logs that show conversation events (creation, deletion, export) alongside other user actions.
Individual users can export their own conversations from the chat interface. See Can I search and export past chats? for details.
Admin access is logged
Every time an administrator views or exports another user's conversations, the action is recorded in the audit trail. These audit entries include:
- Which administrator accessed the data
- Which user's conversations were viewed
- When the access occurred
- What action was taken (view, export, delete)
This ensures accountability and supports compliance requirements like SOC 2 and GDPR. See How to Audit User Activity for details on reviewing audit logs.
Data retention and deletion
Administrators control how long conversation data is kept through configurable retention policies. When a retention period expires, conversations are permanently deleted. Administrators can also manually delete specific conversations or request full data deletion for a user account.
See Configuring Data Retention Policies for setup instructions.
Related articles
- How SecureAI Handles Your Data -- full data handling and privacy overview
- How to Audit User Activity -- reviewing audit logs and exporting activity data
- Managing User Roles and Permissions -- role types and permission matrix
- Configuring Data Retention Policies -- retention period setup
- Compliance Certifications -- SOC 2, GDPR, HIPAA -- compliance framework details
Does SecureAI support Google, Microsoft, Okta, or Auth0 login?
Yes. SecureAI supports single sign-on (SSO) with all major identity providers, including Google Workspace, Microsoft Azure AD (Entra ID), Okta, and Auth0. Two protocols are supported:
| Protocol | Compatible providers |
|---|---|
| SAML 2.0 | Okta, Azure AD, Auth0, OneLogin, PingFederate, and any SAML 2.0-compliant IdP |
| OIDC (OpenID Connect) | Google Workspace, Okta, Azure AD, Auth0, Keycloak, and any OIDC-compliant IdP |
How it works
When SSO is configured, users sign in through your organization's identity provider instead of managing a separate SecureAI password. Your existing access policies, MFA requirements, and session controls all apply automatically.
Setting it up
An administrator configures SSO in the SecureAI admin panel. The process involves registering SecureAI as an application in your identity provider and entering the connection details in SecureAI.
- For OIDC providers (Google, Okta, Azure AD, Auth0, Keycloak): see How to Configure OIDC SSO.
- For SAML providers (Okta, Azure AD, Auth0, OneLogin): see How to Configure SAML SSO.
Most providers support both protocols. Choose OIDC if your provider supports it — it is simpler to configure.
Local accounts
If your organization does not use SSO, users can sign in with an email and password. Local accounts and SSO can coexist — for example, an admin might keep a local account as a fallback while all other users sign in through SSO.
Related
- Can we enforce MFA? — covers MFA enforcement for both SSO and local accounts.
- I can't log in — troubleshooting login issues including SSO problems.
What user roles exist in SecureAI?
SecureAI has three built-in roles: User, Admin, and Pending. Every account is assigned exactly one role, which controls what that person can see and do in the platform.
The three roles
User
The standard role for everyone who uses SecureAI day-to-day. Users can:
- Start and continue conversations with any available AI model.
- Upload documents to their personal workspace and search the shared knowledge base.
- Use assigned tools and assistants, and create personal assistants.
- Share conversations with other users.
- Manage their own profile, password, and API keys.
Users cannot access the Admin Panel or see other users' conversations.
Admin
The role for IT staff and platform managers who need to configure and manage the SecureAI instance. Admins have all the same capabilities as Users, plus:
- Access to the Admin Panel for managing users, models, security settings, and integrations.
- Ability to invite and remove users, change roles, and approve pending accounts.
- Ability to configure model providers, content filtering, SSO, data retention, and IP allowlisting.
- Visibility into conversation metadata and usage analytics (but not conversation content by default -- see Can admins see user chats?).
- Ability to export audit logs and manage organization-wide settings.
Most organizations need only 2-5 admins.
Pending
A temporary role for users who registered through the sign-up page rather than being invited by an admin. Pending users can log in and update their profile, but they cannot start conversations, search the knowledge base, or use tools until an admin approves them.
To approve a pending user, go to Admin Panel > Users, filter by Role: Pending, select the user, change their role to User (or Admin), and save.
How do I manage user permissions?
Admins assign and change roles from the Admin Panel:
- Navigate to Admin Panel > Users.
- Click the user's name.
- Under Role, select the new role.
- Click Save.
Role changes take effect immediately -- no logout required. If you need to change roles for many users at once, use Bulk Actions > Change Role after selecting multiple users.
Beyond the three roles, admins can fine-tune what users can do through feature-level restrictions:
- Model access -- restrict specific models to admins only (useful for expensive models or models being evaluated).
- Shared knowledge base uploads -- limit who can add documents to shared collections.
- Assistant publishing -- control whether users can publish assistants to the organization directory.
- Chat features -- toggle web search, image generation, code execution, and file uploads for non-admin users.
For the full permission matrix and detailed configuration instructions, see Managing User Roles and Permissions.
Related articles
- Managing User Roles and Permissions -- full permission matrix, feature restrictions, and best practices
- Can admins see user chats? -- admin conversation visibility and audit logging
- How do I cancel a user's access? -- deactivating or removing user accounts
- Can I add more users? -- adding seats and billing
security-compliance
Can we enforce MFA?
Yes. How you enforce multi-factor authentication depends on your authentication method.
SSO users (recommended approach)
If your organization uses SAML or OIDC single sign-on, enforce MFA at your identity provider (Okta, Azure AD, Auth0, etc.). This is the recommended approach because:
- Your identity provider's MFA policies apply to all applications, not just SecureAI.
- You get centralized control over MFA methods, enrollment, and recovery.
- SecureAI respects the authentication assurance your identity provider establishes during the SSO handshake.
To avoid duplicate MFA prompts, disable SecureAI's built-in MFA for SSO users:
- Navigate to Admin Panel > Settings > Authentication.
- Enable Disable Local MFA for SSO.
This ensures users are only prompted for MFA once, at the identity provider level.
For SSO configuration details, see How to Configure SAML SSO or How to Configure OIDC SSO.
Local account users
For organizations using local (email/password) accounts, SecureAI supports time-based one-time passwords (TOTP) as a second factor:
- Navigate to Admin Panel > Settings > Authentication.
- Enable Require MFA for Local Accounts.
Once enabled, users without MFA configured are prompted to set it up at their next login. Users can enroll using any TOTP-compatible authenticator app (Google Authenticator, Authy, 1Password, etc.).
What MFA methods are supported?
| Authentication method | MFA enforcement | Supported MFA types |
|---|---|---|
| SAML SSO | At your identity provider | Whatever your IdP supports (push, TOTP, FIDO2, etc.) |
| OIDC SSO | At your identity provider | Whatever your IdP supports |
| Local accounts | In SecureAI admin settings | TOTP (time-based one-time passwords) |
Compliance considerations
Enforcing MFA is a common requirement for SOC 2, HIPAA, and other compliance frameworks. If your organization uses SSO, your identity provider's MFA enforcement satisfies this requirement for SecureAI access. For details on SecureAI's compliance posture, see Compliance and Certifications.
Does SecureAI support SOC 2, GDPR, or HIPAA?
Yes. SecureAI supports compliance with SOC 2, GDPR, and HIPAA. The specifics depend on which framework applies to your organization.
SOC 2 Type II
SecureAI's infrastructure runs on Google Cloud Platform (GCP), which maintains its own SOC 2 Type II certification. SecureAI maintains additional application-level controls covering access control (RBAC, SSO, MFA), encryption (AES-256 at rest, TLS 1.2+ in transit), audit logging, incident response, and change management.
To obtain SecureAI's SOC 2 Type II audit report, contact your account representative. The report is shared under NDA.
If your organization is undergoing its own SOC 2 audit and uses SecureAI as a subservice, reference SecureAI's report in your Complementary Subservice Organization Controls (CSOCs) section.
GDPR
SecureAI supports GDPR compliance through:
- Data Processing Agreements (DPAs) -- available on request from your account representative.
- Data subject rights -- administrators can export, rectify, and delete user data through the admin panel or API to fulfill access, erasure, and portability requests.
- EU data residency -- data can be hosted in the europe-west1 (Belgium) region so that all stored data stays within the EU. This includes conversations, documents, user accounts, and audit logs.
- Sub-processor transparency -- the full sub-processor list is included in the DPA.
HIPAA
SecureAI supports HIPAA compliance for organizations that handle protected health information (PHI):
- Business Associate Agreements (BAAs) -- available on request. Contact your account representative to confirm applicability.
- Technical safeguards -- unique user identification, role-based access, AES-256 encryption at rest with CMEK support, comprehensive audit logging, and TLS 1.2+ for all transmission.
- Administrative responsibilities remain with your organization, including user access management, PHI handling training, and configuring retention policies that meet HIPAA minimums (typically 6 years for administrative records).
Note: Most automotive aftermarket organizations do not handle PHI through SecureAI. If you are unsure whether HIPAA applies to your use case, consult your compliance or legal team.
Can data stay in our region?
Yes. SecureAI supports data residency in multiple regions:
| Region | GCP Location | Availability |
|---|---|---|
| United States | us-central1 (Iowa) | Default for all organizations |
| European Union | europe-west1 (Belgium) | Available on request |
| Additional regions | Contact account representative | Enterprise agreements |
Data residency applies to all stored data -- conversations, uploaded documents, user accounts, audit logs, and backups. To change regions after initial deployment, contact your account representative; migration requires a planned maintenance window.
Note that AI model provider interactions may involve data transfer outside your selected region. These transfers and their safeguards are documented in the DPA. For full control, configure a local model provider (Ollama or vLLM) so prompts never leave your infrastructure.
How to prove compliance to your auditor
- Request SecureAI's SOC 2 Type II report from your account representative (NDA required).
- Obtain your executed DPA (for GDPR) or BAA (for HIPAA).
- Export your security configuration from Admin Panel > Settings > Export Configuration.
- Export audit logs covering the audit period from Admin Panel > Audit Logs > Export.
Related articles
- Compliance Certifications -- SOC 2, GDPR, HIPAA -- full compliance framework details
- How SecureAI Handles Your Data -- data flow, encryption, and retention
- Configuring Data Retention Policies -- retention period setup
- Can we enforce MFA? -- multi-factor authentication options
- Setting Up IP Allowlisting for Enterprise Access -- network-level access control
How do we delete our data?
SecureAI gives administrators full control over data deletion. What you can delete and how depends on the type of data and your role.
What can be deleted
| Data type | Who can delete it | How |
|---|---|---|
| Your own conversations | You | Profile settings or conversation list -- select conversations and delete |
| Any user's conversations | Administrators | Admin Panel > Conversations > select user > delete |
| Uploaded documents | Administrators | Admin Panel > Knowledge Bases > select document > delete |
| User accounts | Administrators | Admin Panel > Users > deactivate, then request full deletion |
| All organization data | Organization owner | Contact your account representative per your service agreement |
Deletion is permanent and not recoverable. Once deleted, conversations, documents, and their associated metadata (including vector embeddings for documents) are permanently removed.
Deleting conversations
As a user: Open your conversation list, select the conversations you want to remove, and click Delete. This removes the conversation and all its messages from SecureAI.
As an administrator: Navigate to Admin Panel > Conversations. You can filter by user, date range, or keyword. Select conversations and delete them individually or in bulk.
Deleting uploaded documents
Administrators can remove documents from knowledge bases through Admin Panel > Knowledge Bases. When a document is deleted:
- The file is removed from Cloud Storage.
- Its vector embeddings are deleted from the search index.
- Future AI responses will no longer reference the document's content.
Deleting user data
When a user leaves your organization:
- Deactivate the account -- Admin Panel > Users > select user > Deactivate. The user can no longer log in, but their data remains accessible for audit purposes.
- Request full deletion -- After deactivation, select Delete All User Data to permanently remove the user's conversations, uploads, and account information.
If you need to fulfill a GDPR erasure request (right to be forgotten), the full deletion option satisfies this requirement.
Automatic deletion via retention policies
Instead of deleting data manually, administrators can configure retention policies that automatically delete data after a specified period:
- Navigate to Admin Panel > Settings > Data Retention.
- Set retention periods for conversation history (e.g., 30, 90, or 365 days).
- Data older than the retention period is permanently deleted on a rolling basis.
For setup details, see Configuring Data Retention Policies.
Organization-level data deletion
When a service agreement ends, all data associated with your organization -- conversations, documents, user accounts, and audit logs -- is permanently deleted within the timeframe specified in your agreement (typically 30 days after termination). Contact your account representative to initiate this process.
How do we audit activity?
SecureAI logs every security-relevant action. Administrators can review these logs to track who did what and when.
What is logged
| Event category | Examples |
|---|---|
| Authentication | Logins, logouts, failed login attempts, SSO events |
| User management | Account creation, deactivation, role changes |
| Data access | Document uploads, document deletions, conversation exports |
| Configuration | SSO changes, retention policy changes, API token creation and revocation |
| Administrative | Admin data access, bulk operations, system setting changes |
Viewing and exporting audit logs
- Navigate to Admin Panel > Audit Logs.
- Filter by date range, user, or event type.
- Click Export to download logs in standard formats for integration with your SIEM or compliance tools.
Audit logs are retained independently of conversation data. The retention period is defined in your service agreement and is typically longer than conversation retention.
For step-by-step instructions, see How to Audit User Activity.
What about AI model providers?
When you delete data from SecureAI, there is nothing to delete at the model provider. AI model providers (OpenAI, Anthropic, Azure OpenAI) do not retain your prompts or responses beyond the API request lifecycle. This is contractually enforced.
If your organization uses local models (Ollama or vLLM), prompts never leave your infrastructure in the first place.
Related articles
- How SecureAI Handles Your Data -- full data flow, encryption, and retention details
- Configuring Data Retention Policies -- retention period setup
- How to Audit User Activity -- step-by-step audit log guide
- Compliance Certifications -- SOC 2, GDPR, HIPAA -- compliance framework details
- Does SecureAI support SOC 2, GDPR, or HIPAA? -- compliance FAQ
How is data encrypted in SecureAI?
SecureAI encrypts all data both in transit and at rest. No data is stored or transmitted in plaintext.
Encryption in transit
All network communication uses TLS 1.2 or higher. This applies to:
- Browser to SecureAI -- all user traffic is encrypted via HTTPS. HTTP connections are automatically redirected to HTTPS.
- SecureAI to AI model providers -- API calls to upstream model providers (OpenAI, Anthropic, Google, etc.) use TLS-encrypted connections.
- Internal service communication -- traffic between SecureAI's internal services within Google Cloud Platform uses GCP's default encryption in transit.
TLS certificates are managed automatically and rotated before expiration. There is nothing you need to configure for encryption in transit.
Encryption at rest
All stored data is encrypted at rest using AES-256, the industry standard for symmetric encryption. This covers:
- Conversations and chat history -- all messages between users and AI models.
- Uploaded documents -- files uploaded for use with RAG (retrieval-augmented generation) or as chat attachments.
- User accounts and profiles -- usernames, email addresses, roles, and preferences.
- Audit logs -- all recorded user and admin activity.
- Backups -- database and file backups are encrypted with the same standard.
Default encryption
By default, SecureAI uses Google Cloud Platform's built-in encryption at rest. GCP automatically encrypts all data before it is written to disk using Google-managed encryption keys. No configuration is required.
Customer-managed encryption keys (CMEK)
For organizations that require control over their own encryption keys, SecureAI supports Customer-Managed Encryption Keys (CMEK) through Google Cloud KMS.
With CMEK enabled:
- You create and manage your encryption keys in Google Cloud KMS.
- SecureAI uses your keys to encrypt and decrypt data.
- You can rotate, disable, or revoke keys at any time.
- Revoking a key makes the associated data permanently inaccessible.
To enable CMEK, contact your account representative. CMEK is available on Enterprise plans.
Where is data stored?
SecureAI runs on Google Cloud Platform. Your data is stored in the GCP region assigned to your organization:
| Region | GCP Location | Availability |
|---|---|---|
| United States | us-central1 (Iowa) | Default for all organizations |
| European Union | europe-west1 (Belgium) | Available on request |
| Additional regions | Contact account representative | Enterprise agreements |
All data -- conversations, documents, user accounts, audit logs, and backups -- stays within your assigned region. To change regions after deployment, contact your account representative; migration requires a planned maintenance window.
For details on regional data handling and cross-border transfers, see your Data Processing Agreement (DPA).
What about data sent to AI model providers?
When a user sends a message, the prompt is transmitted to the configured AI model provider over a TLS-encrypted connection. Each provider has its own data handling policies:
- Cloud-hosted providers (OpenAI, Anthropic, Google) -- SecureAI's enterprise agreements with these providers ensure that your prompts are not used for model training. Data retention by providers is governed by SecureAI's enterprise API agreements, not consumer terms.
- Self-hosted models (Ollama, vLLM) -- if your organization runs a local model provider, prompts never leave your infrastructure. This gives you full control over data residency and eliminates third-party data exposure.
To configure a local model provider, see Adding Custom Model Providers.
How to verify your encryption configuration
- Go to Admin Panel > Settings > Security to view your current encryption and data residency settings.
- Export your security configuration from Admin Panel > Settings > Export Configuration for compliance documentation.
- Request SecureAI's SOC 2 Type II report from your account representative for independent verification of encryption controls.
Related articles
- Does SecureAI support SOC 2, GDPR, or HIPAA? -- compliance framework details
- How SecureAI Handles Your Data -- data flow and retention policies
- Configuring Data Retention Policies -- retention period setup
- Adding Custom Model Providers -- self-hosted model configuration
Is my data used to train AI models?
No. Your prompts, conversations, and uploaded documents are never used to train AI models. This applies to both SecureAI's platform and the upstream AI model providers it connects to.
SecureAI does not train on your data
SecureAI is a platform that routes your requests to AI model providers. SecureAI does not build or train its own large language models. Your data is used only to generate responses to your queries and, if configured, to power your organization's RAG (retrieval-augmented generation) knowledge bases.
SecureAI stores your conversations and uploaded documents solely for the features you use -- chat history, search, audit logging, and knowledge base retrieval. This data is never aggregated, anonymized, or otherwise repurposed for model development.
AI model providers do not train on your data
SecureAI connects to model providers (OpenAI, Anthropic, Google, Azure OpenAI) through enterprise API agreements, not consumer accounts. Under these agreements:
- Prompts and responses are not used for training. Enterprise API terms explicitly prohibit using customer inputs and outputs to train, improve, or fine-tune models.
- Data is not retained beyond the API request. Providers process your prompt, return a response, and discard the data. There is no persistent storage of your queries on the provider side.
- Zero-data-retention (ZDR) options are available with select providers for organizations that require contractual guarantees of no data logging at the provider level.
These protections apply automatically to all SecureAI users. No configuration is required.
What if we use local models?
If your organization runs local model providers (such as Ollama or vLLM), your prompts never leave your infrastructure. There is no third-party data exposure of any kind. Local models give you complete control over data residency and eliminate any concern about external training.
To set up a local model provider, see Adding Custom Model Providers.
Does SecureAI store my prompts?
Yes, SecureAI stores your conversations so that you can access your chat history, and so administrators can review activity through audit logs. This storage is:
- Encrypted at rest using AES-256 (see How is data encrypted in SecureAI?).
- Retained according to your organization's policies -- administrators can configure automatic deletion after a set period (see Configuring Data Retention Policies).
- Deletable on demand -- users can delete their own conversations, and administrators can delete any user's data (see How do we delete our data?).
- Confined to your assigned region -- data stays within the GCP region assigned to your organization.
Stored prompts are never shared with other organizations, used for analytics, or made available to SecureAI employees except when required for technical support with your explicit authorization.
How to verify these protections
- Review SecureAI's Data Processing Agreement (DPA), which contractually binds these commitments. Request a copy from your account representative.
- Request SecureAI's SOC 2 Type II report for independent verification of data handling controls.
- Review the enterprise API agreements with each model provider by contacting your account representative.
- Check Admin Panel > Settings > Security to see your current data handling and model provider configuration.
Related articles
- How is data encrypted in SecureAI? -- encryption in transit and at rest
- How do we delete our data? -- data deletion and audit logging
- Does SecureAI support SOC 2, GDPR, or HIPAA? -- compliance framework details
- How SecureAI Handles Your Data -- full data flow and retention policies
- Configuring Data Retention Policies -- automatic data retention setup
- Adding Custom Model Providers -- self-hosted model configuration
What data does SecureAI store?
SecureAI stores only the data necessary to provide the service -- your conversations, uploaded documents, account information, and audit logs. SecureAI does not store payment card numbers, does not retain data from AI model providers, and does not collect data beyond what you provide through normal use.
Data that SecureAI stores
Conversations and chat history
Every message you send and every AI response is stored so you can return to past conversations. This includes:
- User prompts -- the messages you type or paste into the chat interface.
- AI responses -- the model's replies, including any generated text, code, or structured output.
- Conversation metadata -- timestamps, which model was used, token counts, and conversation titles.
Conversations are stored in SecureAI's database within your assigned GCP region. Administrators can configure retention policies to automatically delete conversations after a defined period.
Uploaded documents
Files you upload to knowledge bases for RAG (retrieval-augmented generation) are stored along with their vector embeddings:
- Original files -- PDFs, Word documents, text files, and other supported formats are stored in Cloud Storage.
- Vector embeddings -- numerical representations of document content used for semantic search. These are stored in SecureAI's search index.
- Document metadata -- file names, upload dates, file sizes, and which knowledge base a document belongs to.
User accounts and profiles
SecureAI stores the information needed to identify and authenticate users:
- Identity information -- name, email address, and profile picture (if provided).
- Role and permissions -- whether the user is an admin, a standard user, or has custom role assignments.
- Preferences -- display settings, default model selection, and notification preferences.
- Authentication records -- hashed passwords (for local accounts) or SSO provider identifiers. Passwords are never stored in plaintext.
Audit logs
SecureAI records security-relevant events for compliance and accountability:
- Authentication events -- logins, logouts, failed login attempts, MFA events.
- Administrative actions -- user management, configuration changes, data access by admins.
- Data lifecycle events -- document uploads, document deletions, conversation exports.
Audit logs are retained independently of other data and typically have a longer retention period defined in your service agreement.
System configuration
Administrative settings are stored so your SecureAI instance maintains its configuration:
- SSO and identity provider settings.
- Content filter rules and safety configurations.
- Model provider connections and rate limits.
- Data retention policy settings.
Data that SecureAI does NOT store
Payment card numbers and billing details
SecureAI does not process or store credit card numbers, bank account details, or other payment instruments. Billing is handled through invoicing and your organization's procurement process -- not through a self-service payment form.
AI model provider data
When you send a message, the prompt is transmitted to the configured AI model provider (OpenAI, Anthropic, Google, etc.) for processing. SecureAI stores your prompt and the response, but the model provider does not retain your data beyond the API request lifecycle. Under SecureAI's enterprise API agreements, providers do not use your prompts for model training.
If your organization uses self-hosted models (Ollama, vLLM), prompts never leave your infrastructure.
Browser or device telemetry
SecureAI does not install tracking pixels, fingerprint your browser, or collect device-level telemetry. The application does not use third-party analytics services that track individual user behavior across sites.
Data from other applications
SecureAI does not access or ingest data from your email, calendar, file storage, or other business applications unless you explicitly connect an integration and an administrator approves it. Integrations only access the specific data sources you configure.
Conversation content from other users
Standard users can only see their own conversations. Administrators can access other users' conversations through the Admin Panel, but this access is logged in the audit trail. There is no cross-user data sharing unless an administrator explicitly enables a shared knowledge base.
How long is data retained?
Data retention depends on your organization's configuration and service agreement:
| Data type | Default retention | Configurable? |
|---|---|---|
| Conversations | Indefinite (until deleted) | Yes -- Admin Panel > Settings > Data Retention |
| Uploaded documents | Indefinite (until deleted) | Yes -- administrators can delete at any time |
| User accounts | Until deactivated and deleted | Yes -- administrators manage lifecycle |
| Audit logs | Per service agreement (typically 1-2 years) | Contact account representative |
| Backups | 30 days rolling | Per service agreement |
After a service agreement ends, all organization data is permanently deleted within the timeframe specified in the agreement (typically 30 days).
How to review what is stored
- Your own data -- View your conversations and uploads in the SecureAI interface. You can delete your own conversations at any time.
- Organization data -- Administrators can review stored data through the Admin Panel: conversations, documents, user accounts, and audit logs.
- Data inventory -- Request a data inventory from your account representative for compliance documentation (GDPR Article 30 records of processing).
Related articles
- How is data encrypted in SecureAI? -- encryption at rest and in transit
- How do we delete our data? -- data deletion and retention policies
- Does SecureAI support SOC 2, GDPR, or HIPAA? -- compliance framework details
- How do we manage AI safety? -- content filtering and safety controls
What does SecureAI mean by secure AI chat?
"Secure AI chat" means that every layer of the system -- from how your data is transmitted, to how it is stored, to who can access it -- is designed to protect your organization's information. SecureAI treats security as a foundational requirement, not an add-on.
Data never leaves your control
SecureAI ensures your conversations and documents stay under your organization's control:
- Encryption in transit -- all traffic between your browser, SecureAI, and AI model providers uses TLS 1.2 or higher. No data is transmitted in plaintext.
- Encryption at rest -- all stored data (conversations, uploaded documents, user accounts, audit logs) is encrypted using AES-256.
- No training on your data -- SecureAI's enterprise API agreements with model providers (OpenAI, Anthropic, Google) explicitly prohibit using your prompts or responses for model training.
- Self-hosted model option -- organizations can run models locally via Ollama or vLLM, keeping all data on their own infrastructure with zero third-party exposure.
For encryption details, see How is data encrypted in SecureAI?.
Access is controlled and auditable
SecureAI provides enterprise-grade access controls so the right people see the right data:
- Single sign-on (SSO) -- authenticate users through your existing identity provider (Google, Microsoft, Okta, Auth0) instead of managing separate credentials.
- Multi-factor authentication (MFA) -- require a second factor for all users or specific roles.
- Role-based access control (RBAC) -- assign users to roles (admin, user, viewer) that determine what they can access and configure.
- Conversation privacy -- user conversations are private by default. Admins can only access them through a controlled process that is recorded in the audit log.
For SSO setup, see Does SecureAI support Google, Microsoft, Okta, or Auth0 login?.
AI behavior is governed
Secure AI chat is not just about protecting data -- it also means controlling what the AI can say and do:
- Content filtering -- evaluate prompts and responses against configurable safety rules before they reach users. Block harmful content, PII exposure, or industry-specific terms.
- Prompt injection protection -- detect and block attempts to override system instructions or bypass safety controls.
- System prompt guardrails -- enforce organization-wide instructions that restrict the AI to your business domain and require appropriate disclaimers.
- Rate limiting -- prevent individual users from consuming excessive resources or generating high volumes of unreviewed content.
For safety configuration, see How do we manage AI safety?.
Everything is logged
Every security-relevant action in SecureAI is recorded in an immutable audit trail:
- User logins and authentication events
- Conversation access (including admin overrides)
- Content filter matches and prompt injection detections
- Admin configuration changes
- Data export and deletion requests
Audit logs can be exported as CSV for compliance reporting (SOC 2, GDPR, HIPAA). See How do we delete our data? for data lifecycle details.
Compliance frameworks
SecureAI's security controls are designed to support common compliance requirements:
- SOC 2 Type II -- independently audited controls for security, availability, and confidentiality.
- GDPR -- data residency options, data export, right-to-erasure support, and Data Processing Agreements.
- HIPAA -- available under Business Associate Agreements for healthcare organizations.
For full compliance details, see Does SecureAI support SOC 2, GDPR, or HIPAA?.
Summary
When SecureAI says "secure AI chat," it means:
| Layer | What it covers |
|---|---|
| Data protection | End-to-end encryption, no model training on your data, optional self-hosted models |
| Access control | SSO, MFA, RBAC, private conversations with audited admin access |
| AI governance | Content filtering, prompt injection protection, system prompt guardrails |
| Auditability | Immutable audit logs for all security events, exportable for compliance |
| Compliance | SOC 2, GDPR, HIPAA support with independent verification |
Related articles
- How is data encrypted in SecureAI? -- encryption standards and key management
- Does SecureAI support SOC 2, GDPR, or HIPAA? -- compliance framework details
- How do we manage AI safety? -- content filtering and safety configuration
- Can admins see user chats? -- conversation privacy and admin access controls
- How SecureAI Handles Your Data -- data flow and retention policies
getting-started
How do I access SecureAI on mobile?
SecureAI works in any modern mobile browser. There is no separate app to install.
Getting started on mobile
- Open your mobile browser (Safari, Chrome, Firefox, or Edge).
- Navigate to your organization's SecureAI URL (e.g.,
yourcompany.secureai.example.com). - Log in with your usual credentials (email/password or SSO).
That's it — you have the same access to conversations, assistants, and knowledge bases as you do on desktop.
Add SecureAI to your home screen
For a more app-like experience, you can add SecureAI to your device's home screen:
iPhone / iPad (Safari)
- Open SecureAI in Safari.
- Tap the Share button (the square with an upward arrow).
- Scroll down and tap Add to Home Screen.
- Tap Add.
Android (Chrome)
- Open SecureAI in Chrome.
- Tap the three-dot menu in the top right.
- Tap Add to Home Screen (or Install app).
- Tap Add.
Once added, SecureAI appears as an icon on your home screen and opens in a full-screen window without the browser toolbar.
Mobile interface differences
The mobile interface is the same as desktop with a few layout adjustments:
- The sidebar is hidden by default. Tap the menu icon (three horizontal lines) in the top left to open it.
- Conversations, assistants, and settings are all accessible from the sidebar.
- File uploads work through your device's standard file picker or camera.
Troubleshooting
| Issue | Solution |
|---|---|
| Page does not load | Verify you are using the correct URL. Check with your administrator if unsure. |
| Cannot log in | See I can't log in for troubleshooting steps. |
| Blocked on cellular data | Your organization may use IP allowlisting. Contact your administrator to add your network or use your corporate VPN. |
| Slow performance | Close unused browser tabs. SecureAI works best on recent browser versions — update your browser if possible. |
How do I create my SecureAI account?
You do not create a SecureAI account yourself. An administrator at your organization provisions your access. The process depends on how your organization has configured authentication.
SSO (Single Sign-On) users
If your organization uses SAML or OIDC single sign-on, your account is created automatically the first time you sign in:
- Navigate to your organization's SecureAI URL (e.g.,
yourcompany.secureai.example.com). - Click Sign in with SSO (or your identity provider's name).
- Authenticate through your identity provider (Okta, Azure AD, Google Workspace, etc.).
- SecureAI creates your account on first login using the details from your identity provider.
No separate registration step is needed. If you see an error during your first sign-in, confirm with your administrator that your identity provider is configured and that you are in an authorized group.
Local account users
If your organization uses local (email/password) accounts, your administrator creates your account:
- Your administrator adds you in the Admin Panel > Settings > Users section.
- You receive an email invitation with a link to set your password.
- Click the link, create a password, and sign in.
If you did not receive an invitation email, check your spam or junk folder. If it is not there, ask your administrator to resend the invitation.
What if I don't know my organization's SecureAI URL?
Contact your IT department or the person who told you about SecureAI. Each organization has its own SecureAI instance with a unique URL.
What if I need an account but my organization doesn't have SecureAI yet?
SecureAI is sold to organizations, not individuals. If your organization is interested in SecureAI, contact your account representative or visit the SecureAI website to request a demo.
How do I reset my password?
You can reset your SecureAI password from the login page or from within your account settings.
Reset from the login page
If you cannot log in because you have forgotten your password:
- Go to your organization's SecureAI login page.
- Click the Forgot Password link below the password field.
- Enter the email address associated with your account.
- Check your email for a password reset link. If it does not arrive within a few minutes, check your spam or junk folder.
- Click the link in the email and enter your new password.
The reset link expires after 24 hours. If it has expired, repeat the steps above to request a new one.
Reset from account settings
If you are already logged in and want to change your password:
- Click your profile icon in the bottom-left corner of the sidebar.
- Select Settings.
- Under Account, click Change Password.
- Enter your current password, then enter and confirm your new password.
- Click Save.
Password requirements
SecureAI passwords must meet your organization's security policy. Common requirements include:
- Minimum 8 characters
- At least one uppercase letter, one lowercase letter, and one number
- No reuse of your last 5 passwords
Your administrator may enforce stricter requirements. If your new password is rejected, check with your administrator for your organization's specific password policy.
SSO users
If your organization uses Single Sign-On (SSO) through Google, Microsoft, Okta, or Auth0, you do not have a separate SecureAI password. Your login is managed by your identity provider. To reset your credentials, follow your organization's SSO password reset process instead.
Still having trouble?
If you do not receive the reset email or continue to have issues:
- Confirm you are using the correct email address for your account.
- Ask your administrator to verify your account is active in Settings > Users in the admin panel.
- Contact support with your email address, organization name, and a description of the issue.
Related articles
- I can't log in. What should I do? -- step-by-step login troubleshooting
- Does SecureAI support Google, Microsoft, Okta, or Auth0 login? -- SSO provider details
- Can we enforce MFA? -- multi-factor authentication for SecureAI
How is SecureAI different from ChatGPT?
SecureAI and ChatGPT are both AI chat tools, but they are built for different audiences. ChatGPT is a consumer product from OpenAI. SecureAI is an enterprise platform designed for organizations that need control over data, users, and compliance.
Key differences
| Area | SecureAI | ChatGPT (Plus / Team / Enterprise) |
|---|---|---|
| Data privacy | Your data stays within your organization's environment. Conversations are never used to train AI models. | Consumer and Plus plans may use data for model training unless opted out. Enterprise plans offer stronger data controls. |
| Deployment | Hosted in your organization's own cloud environment with dedicated infrastructure. | Shared multi-tenant infrastructure managed by OpenAI. |
| User management | Full admin panel with SSO (SAML, OIDC), role-based access, and seat management. | Team and Enterprise plans offer workspace-level management. Consumer plans have no admin controls. |
| Model access | Access to multiple AI models from different providers, configurable by your administrator. | Access to OpenAI models only (GPT-4o, GPT-4, etc.). |
| Knowledge bases | Upload and connect internal documents so assistants can answer questions using your organization's data. | Custom GPTs can reference uploaded files, but without enterprise-grade document management. |
| Integrations | Connect to Slack, Microsoft 365, Google Drive, and internal APIs with admin-controlled permissions. | Limited integrations through ChatGPT plugins and GPT Actions. |
| Compliance | Built for SOC 2, GDPR, and HIPAA requirements. See Does SecureAI support SOC 2, GDPR, or HIPAA? | Enterprise plan offers SOC 2 compliance. Consumer plans have limited compliance support. |
| Audit and visibility | Administrators can monitor usage, manage conversations, and enforce policies. | Enterprise plan offers an admin console. Consumer and Plus plans have no audit capabilities. |
When to use SecureAI
SecureAI is the right choice when your organization needs:
- Data isolation -- conversations and documents must not leave your environment.
- Admin controls -- IT needs to manage who can access AI, which models are available, and what integrations are enabled.
- Internal knowledge -- employees need AI that can answer questions using your organization's own documents and data.
- Compliance requirements -- your industry or customers require SOC 2, GDPR, or HIPAA controls around AI usage.
Can I use both?
Yes. Some organizations use ChatGPT for personal productivity and SecureAI for work that involves company data. Your administrator may set policies about when to use each tool. When in doubt, use SecureAI for anything involving internal information.
Is SecureAI just OpenWebUI?
No. SecureAI is built on top of OpenWebUI but adds a substantial layer of enterprise features designed for organizations that need security, compliance, and administrative control over their AI deployment.
What is OpenWebUI?
OpenWebUI is an open-source web interface for interacting with large language models. It provides a chat-based UI, conversation history, model selection, and basic knowledge-base (RAG) functionality. It is a solid foundation for individual use or small teams experimenting with AI.
What SecureAI adds
SecureAI extends OpenWebUI with capabilities that organizations in regulated industries — like the automotive aftermarket — require before they can deploy AI to their teams.
Enterprise authentication and access control
- Single sign-on via SAML and OIDC (Okta, Azure AD, Google Workspace, Auth0)
- Multi-factor authentication enforcement across the organization
- Role-based access control with granular permissions for users, teams, and administrators
- IP allowlisting to restrict access to approved networks
OpenWebUI provides basic local accounts. SecureAI connects to your existing identity provider so users log in with the same credentials they use for everything else.
Content filtering and AI safety
- Prompt-side and response-side content filters with adjustable sensitivity
- Prompt injection detection to prevent users from overriding system instructions
- Custom keyword and regex rules for industry-specific terms (e.g., blocking the model from guessing part numbers)
- Organization-wide system prompts that enforce baseline behavior across all models
OpenWebUI does not include content filtering. SecureAI lets administrators control what the AI can say before it reaches end users.
Compliance and audit
- Full audit logging of conversations, admin actions, content filter events, and login activity
- Audit log export (CSV) for SOC 2, GDPR, and HIPAA compliance reporting
- Data residency controls for organizations with geographic data requirements
- Data deletion workflows for responding to user data requests
OpenWebUI stores conversations but does not provide the audit trail or data governance features that compliance frameworks require.
Administration and governance
- Centralized admin panel for managing users, models, integrations, and safety settings
- Per-model configuration including rate limits, token budgets, and safety overrides
- Admin visibility into user conversations (with audit logging of admin access)
- Integration approval workflows so administrators control which external tools and connections are available
Managed infrastructure
- Hosted deployment on dedicated infrastructure — no self-hosting burden
- Automatic updates with zero-downtime rollouts
- Support and SLA with a dedicated account team
When does the distinction matter?
If you are evaluating SecureAI, the key question is whether your organization needs the controls listed above. For a single user or a small team with no compliance requirements, open-source OpenWebUI may be sufficient. For organizations that need to manage who can access AI, what the AI can say, and how activity is audited, SecureAI provides the layer that makes that possible.
Related articles
- How do we manage AI safety? -- content filtering, prompt injection protection, and safety controls
- Does SecureAI support SOC 2, GDPR, or HIPAA? -- compliance framework support
- Does SecureAI support Google, Microsoft, Okta, or Auth0 login? -- SSO configuration
- Can admins restrict models and integrations? -- model and integration governance
What are workspaces, models, tools, and knowledge bases?
SecureAI is built around five core concepts. Understanding how they fit together helps you get the most out of the platform.
Workspaces
A workspace is a shared environment that your administrator sets up for a team or department. It groups together specific models, knowledge bases, and default settings so everyone on the team has access to the same resources.
For example, a parts counter team might have a workspace with models and catalogs configured for parts lookup, while an estimating team has a separate workspace with collision repair procedures.
If your organization uses multiple workspaces, you can switch between them from the sidebar. If you only see one workspace (or no workspace selector), your administrator has configured a single workspace for your organization.
Models
A model is the AI engine that reads your messages and generates responses. SecureAI supports multiple models from different providers -- OpenAI, Anthropic, Google, and others -- each with different strengths.
| Model trait | What it means |
|---|---|
| Capability | Some models are better at reasoning, writing, or code. Others are faster but less detailed. |
| Speed | Smaller models respond quickly. Larger models take longer but produce more thorough answers. |
| Cost | Each message uses tokens. Larger models cost more tokens per response. |
Your administrator controls which models are available in each workspace. You select the model for your conversation from the model selector at the top of the chat area. You can also set a default model in Settings > Interface Preferences so you don't have to choose every time.
For help choosing a model, see How to Choose the Right AI Model.
Knowledge bases
A knowledge base is a collection of documents you upload to SecureAI. When a knowledge base is attached to a conversation or assistant, SecureAI searches those documents and uses the most relevant passages to answer your questions. This is called RAG (Retrieval-Augmented Generation).
Common examples in the automotive aftermarket:
- Parts catalogs -- upload your aftermarket parts PDFs so the AI can look up part numbers, fitment, and pricing.
- Service procedures -- upload OEM or shop-specific repair procedures for quick reference during diagnostics.
- Warranty policies -- upload warranty documentation so the AI can answer coverage questions accurately.
You can attach one or more knowledge bases to any conversation by clicking the + icon in the message input area. Administrators can also pre-attach knowledge bases to assistants so they are always available.
For details on creating and managing knowledge bases, see How to Create and Manage Knowledge Bases.
Tools
A tool is a custom function that lets an assistant call an external API or service during a conversation. Tools give the AI access to live data that isn't in its training or your knowledge bases.
For example, an administrator could create a tool that queries your parts inventory system. When you ask "Do we have part number 12345 in stock?", the assistant calls the tool, gets the current stock level, and includes it in its answer -- all without you leaving the chat.
Tools are created and managed by administrators. As a user, you don't need to do anything special -- assistants with tools assigned to them use those tools automatically based on your questions.
For more on what tools can do, see Can assistants call APIs and use tools?.
Assistants
An assistant is a pre-configured AI setup that combines a model, a system prompt, knowledge bases, and tools into a single package. Think of it as a specialized version of the AI tailored for a specific job.
For example, your organization might have:
- A Parts Lookup Assistant that uses a fast model, has your parts catalogs attached as knowledge bases, and has inventory lookup tools enabled.
- A Service Advisor Assistant that uses a more capable model with repair procedure knowledge bases and a shop management system integration.
- A General Assistant with a broad-purpose model and no specialized knowledge bases, for ad-hoc questions.
Assistants appear in the model selector alongside regular models. Select one to start a conversation with that pre-configured setup.
Administrators create assistants in Workspace > Models. For setup instructions, see How to Create an Assistant.
How they fit together
Workspace
+-- Models (which AI engines are available)
+-- Knowledge Bases (which documents the AI can search)
+-- Assistants (pre-configured combos of model + knowledge + tools)
+-- Model (one AI engine)
+-- Knowledge Bases (one or more document collections)
+-- Tools (optional external API integrations)
+-- System Prompt (instructions that shape behavior)
A workspace provides the environment. Models provide the intelligence. Knowledge bases provide your organization's data. Tools provide access to live external systems. Assistants bundle these together into ready-to-use configurations.
Related articles
- SecureAI Interface Tour -- walkthrough of the full interface
- How to Choose the Right AI Model -- model selection guidance
- How to Create and Manage Knowledge Bases -- creating and populating knowledge bases
- Can assistants call APIs and use tools? -- tool capabilities and setup
- How to Create an Assistant -- assistant configuration guide
- How does RAG work in SecureAI? -- technical details on document search
What is SecureAI?
SecureAI is a hosted AI chat platform built for organizations that need enterprise-grade security, compliance, and administrative control. It gives your team access to leading AI models -- like OpenAI, Anthropic, and Google -- through a single interface, with guardrails that protect your data and govern what the AI can do.
SecureAI is designed specifically for the automotive aftermarket industry, though its capabilities apply to any organization that needs to deploy AI responsibly.
What you can do with SecureAI
SecureAI provides a web-based chat interface where your team can:
- Ask questions and get AI-generated answers using models like GPT-4, Claude, and Gemini -- all from one place, without needing separate subscriptions to each provider.
- Upload documents and build knowledge bases so the AI can answer questions grounded in your organization's own data (parts catalogs, service manuals, policies, training materials).
- Create custom assistants tailored to specific workflows -- for example, an assistant that helps service advisors look up part compatibility, or one that drafts customer communications.
- Use integrations and tools that connect the AI to external systems, enabling it to look up live data or trigger actions beyond simple chat.
For details on core concepts, see What are workspaces, models, tools, and knowledge bases?.
How SecureAI protects your organization
Unlike consumer AI tools, SecureAI is built around the principle that your organization's data stays under your control:
- No model training on your data -- enterprise API agreements with model providers explicitly prohibit using your prompts or responses for training.
- End-to-end encryption -- all data is encrypted in transit (TLS 1.2+) and at rest (AES-256).
- Single sign-on and MFA -- users authenticate through your existing identity provider (Google, Microsoft, Okta, Auth0) with optional multi-factor authentication.
- Role-based access control -- administrators control who can access which models, tools, and knowledge bases.
- Content filtering and AI safety -- configurable filters evaluate prompts and responses before they reach users, blocking harmful content, PII exposure, or industry-specific terms.
- Audit logging -- every conversation, admin action, and security event is recorded in an immutable audit trail that can be exported for compliance reporting.
For a deeper look at security, see What does SecureAI mean by secure AI chat?.
How SecureAI is built
SecureAI is built on top of OpenWebUI, an open-source AI chat interface, and extends it with enterprise features:
| Layer | What SecureAI adds |
|---|---|
| Authentication | SSO (SAML/OIDC), MFA enforcement, IP allowlisting |
| Authorization | Role-based access control for models, tools, and knowledge bases |
| AI governance | Content filtering, prompt injection protection, organization-wide system prompts |
| Compliance | Audit logging, data export, data deletion workflows, SOC 2 / GDPR / HIPAA support |
| Infrastructure | Hosted deployment, automatic updates, dedicated support |
For details on the relationship with OpenWebUI, see Is SecureAI just OpenWebUI?.
Who SecureAI is for
SecureAI is designed for organizations that want to give their teams access to AI but need to maintain control over security, data handling, and AI behavior. Typical users include:
- Parts distributors and retailers using AI to help staff answer product questions, cross-reference part numbers, and draft customer communications.
- Service shops and dealer networks using knowledge bases loaded with service manuals, TSBs, and diagnostic procedures.
- Aftermarket manufacturers using AI to support internal teams with product specifications, warranty policies, and training content.
- Any regulated organization that needs compliance-ready AI deployment with audit trails and data governance.
Getting started
- Create your account -- see How do I create my SecureAI account?.
- Explore the interface -- learn about workspaces, models, tools, and knowledge bases.
- Set up your team -- configure SSO, add users, and assign roles through the admin panel.
- Build knowledge bases -- upload your organization's documents so the AI can answer questions using your data. See How should I structure knowledge bases?.
Related articles
- Is SecureAI just OpenWebUI? -- how SecureAI extends the open-source foundation
- What AI models are supported? -- available models and providers
- How is SecureAI different from ChatGPT? -- comparison with consumer AI tools
- What does SecureAI mean by secure AI chat? -- security architecture overview
- How is SecureAI billed? -- pricing and billing details
best-practices
How do we manage AI safety?
SecureAI provides a layered set of safety controls that let administrators manage what AI models can say, detect misuse, and enforce organizational policies -- without requiring deep AI expertise.
Content filtering
SecureAI evaluates both user prompts and model responses against your configured rules before anything is shown to end users. Administrators configure these in Admin Panel > Settings > Content & Safety.
The filtering pipeline runs in two stages:
- Prompt-side filters check user input before it reaches the model.
- Response-side filters check model output before it reaches the user.
Built-in content categories include harmful content, hate speech, PII exposure, financial advice, and legal advice. Each category has adjustable sensitivity thresholds (off, low, medium, high). You can also create custom keyword and regex rules for industry-specific terms -- for example, blocking the model from guessing part numbers or generating competitor pricing.
For full configuration details, see Content Filtering and Safety Settings.
Prompt injection protection
Prompt injection is when a user tries to override the system prompt or bypass safety instructions. SecureAI detects common injection patterns including direct overrides, role reassignment attempts, encoded bypasses, and delimiter injection.
Detection sensitivity can be set to low, medium, or high in Admin Panel > Settings > Content & Safety > Prompt Protection. Detected attempts are blocked and logged in the audit trail.
System prompt guardrails
Administrators can set organization-wide system prompts that define baseline behavior for all models. These guardrails persist across conversations and cannot be overridden by end users. Use system prompts to:
- Restrict the model to your domain (e.g., automotive aftermarket topics only)
- Require disclaimers on technical advice (e.g., warranty or safety-critical information)
- Enforce a consistent tone and response format
Per-model safety overrides let you apply different rules to different models -- for example, stricter filtering on a general-purpose model while relaxing rules on a model fine-tuned for your internal documentation.
Rate limiting
Rate limiting prevents individual users from consuming excessive resources or generating high volumes of unreviewed content. Options include:
- Requests per minute/hour -- caps the number of API calls per user
- Daily token budget -- limits total token consumption per user per day
Configure rate limits in Admin Panel > Settings > Rate Limiting.
Audit logging and review
All safety-related events are recorded in the audit log:
- Content filter matches (blocked and flagged)
- Prompt injection detection events
- Admin configuration changes to safety settings
- Admin access to user conversations
Export audit logs as CSV from Admin Panel > Audit Logs > Export for compliance reporting (SOC 2, GDPR, HIPAA).
Reducing hallucinations
AI models can generate plausible-sounding but incorrect information. To reduce hallucination risk:
- Use knowledge bases -- ground model responses in your organization's verified documents (parts catalogs, service manuals, technical bulletins).
- Set temperature low -- lower temperature values produce more deterministic, less creative responses.
- Instruct via system prompts -- tell the model to say "I don't know" rather than guess when it lacks information.
For detailed strategies, see How to Prevent AI Hallucinations.
Getting started
If you are new to AI safety configuration:
- Review the default filtering categories -- SecureAI ships with automotive-aftermarket-appropriate defaults enabled.
- Add custom rules for any organization-specific terms that should be blocked or flagged.
- Enable prompt injection protection at medium sensitivity.
- Set a system prompt that restricts the model to your business domain.
- Review the audit log weekly for the first month to tune sensitivity levels.
Related articles
- Content Filtering and Safety Settings -- full admin guide for configuring filters and safety policies
- How to Prevent AI Hallucinations -- strategies for grounding model responses
- How to Audit User Activity -- audit log review and compliance reporting
- Prompting Best Practices -- writing effective prompts for reliable results
- Does SecureAI support SOC 2, GDPR, or HIPAA? -- compliance framework support
How should I structure knowledge bases?
How you organize documents into knowledge bases directly affects the quality of AI responses. A well-structured knowledge base helps SecureAI retrieve the right information; a poorly structured one returns noise or misses relevant content entirely.
One topic per knowledge base
Group documents by subject area rather than dumping everything into a single knowledge base. When a knowledge base covers too many unrelated topics, search results become diluted -- a question about brake torque specs might pull in irrelevant chunks from HR policies or marketing materials.
Good examples for an automotive aftermarket organization:
| Knowledge base | What goes in it |
|---|---|
| Service Procedures | OEM service manuals, technical service bulletins, repair procedures |
| Parts Catalogs | Parts listings, fitment guides, cross-reference tables |
| Warranty Policies | Warranty terms, claim procedures, coverage matrices |
| Training Materials | Onboarding guides, certification study materials, how-to videos transcripts |
| Product Specs | Spec sheets, material safety data sheets, installation instructions |
This lets users (or assistant configurations) attach only the knowledge bases relevant to their question, which improves retrieval accuracy.
Keep documents focused and well-structured
The quality of individual documents matters as much as how you group them.
Use clear headings. SecureAI splits documents into chunks, and headings help the chunker create coherent sections. A document with no headings gets split at arbitrary points, producing chunks that mix unrelated information.
One topic per document. A 50-page PDF covering brake systems, electrical diagnostics, and transmission service will produce chunks that blend topics. Split it into separate documents -- one per system or procedure.
Remove noise before uploading. Strip cover pages, tables of contents, indexes, legal boilerplate, and repeated headers/footers. These create junk chunks that waste retrieval slots.
Use text-based formats when possible. PDFs with selectable text, Word documents, and Markdown files parse cleanly. Scanned PDFs without OCR, image-heavy documents, and complex multi-column layouts may not extract well.
Name documents clearly
Document names appear in source citations. When users see a citation like doc_final_v3_REVISED(2).pdf, they cannot judge whether the source is trustworthy. Use descriptive names:
2024-camry-front-brake-service-procedure.pdf-- clear and specificwarranty-claim-process-north-america-2025.pdf-- includes scope and datebrake-pad-cross-reference-aftermarket-to-oem.xlsx-- describes the content
Keep knowledge bases current
Outdated documents produce outdated answers. Establish a review cycle:
- Set a refresh schedule. Review each knowledge base quarterly or whenever source materials are updated (new model year, revised TSB, updated policy).
- Replace rather than duplicate. When a document is updated, delete the old version and upload the new one. Two versions of the same document create conflicting chunks that confuse retrieval.
- Check for staleness. If users report incorrect answers from a knowledge base, check whether the source documents are current.
Right-size your knowledge bases
| Problem | Symptom | Fix |
|---|---|---|
| Too many documents in one KB | Slow uploads, irrelevant results, mixed topics in responses | Split into topic-specific knowledge bases |
| Too few documents | Thin answers, frequent "I don't have information on that" | Consolidate related thin KBs or add more source material |
| Documents too long | Chunks blend multiple topics | Split into focused documents by topic or section |
| Documents too short | Chunks lack context | Combine related short documents or add supporting context |
Use assistants to scope knowledge base access
Rather than attaching all knowledge bases to every conversation, configure assistants with specific knowledge base assignments:
- A Parts Lookup Assistant gets the parts catalogs and cross-reference tables.
- A Service Advisor Assistant gets service procedures, warranty policies, and known issues.
- A Training Assistant gets onboarding and certification materials.
This scoping improves answer quality and prevents the model from pulling in irrelevant information. See Can assistants use multiple knowledge bases? for configuration details.
Checklist for a new knowledge base
- Define the topic scope -- what questions should this knowledge base answer?
- Gather source documents and remove noise (cover pages, TOCs, boilerplate).
- Split large multi-topic documents into focused single-topic files.
- Name files descriptively.
- Upload and test with representative questions.
- Review source citations in responses -- are the right chunks being retrieved?
- Adjust by removing low-quality documents or adding missing coverage.
Related articles
- How does RAG work in SecureAI? -- how document search and retrieval works under the hood
- Can assistants use multiple knowledge bases? -- attaching and managing multiple knowledge bases per assistant
- How to Create and Manage Knowledge Bases -- step-by-step guide for uploading documents and configuring knowledge bases
- How do we manage AI safety? -- safety controls and content filtering
What makes a good assistant?
A good assistant in SecureAI is one that consistently gives accurate, relevant answers for a specific job. The difference between a helpful assistant and a frustrating one comes down to how well it is configured -- its system prompt, knowledge base selection, model choice, and scope.
Give it a clear role
The most important thing you can do is write a focused system prompt that tells the assistant what it is, who it serves, and how it should behave.
Weak system prompt:
You are a helpful assistant.
Strong system prompt:
You are a parts counter advisor for an automotive aftermarket distributor. You help store employees look up part numbers, check fitment, and find alternatives when a part is out of stock. Always include the part number and application year/make/model in your answers. If you are unsure about fitment, say so rather than guessing.
A strong system prompt does three things:
- Defines the domain -- the assistant knows what kind of questions to expect.
- Sets behavioral rules -- it knows when to qualify answers or decline to guess.
- Specifies output format -- users get consistently structured responses.
Attach the right knowledge bases
An assistant is only as good as the information it can access. Attach knowledge bases that match the assistant's role -- and nothing more.
| Assistant role | Attach | Do not attach |
|---|---|---|
| Parts counter advisor | Parts catalogs, cross-reference tables, fitment guides | HR policies, marketing materials |
| Service writer | Service procedures, TSBs, warranty terms | Parts pricing, sales training |
| Training coach | Onboarding guides, certification materials | Customer-facing product specs |
Attaching irrelevant knowledge bases dilutes search results. The assistant retrieves chunks from all attached knowledge bases, so unrelated content competes with the information your users actually need. See How should I structure knowledge bases? for organizing documents effectively.
Choose the right model
Different models have different strengths. Match the model to the task:
- Faster, lighter models work well for straightforward lookups, FAQs, and structured data queries where speed matters more than nuance.
- Larger, more capable models are better for complex reasoning, multi-step analysis, and tasks that require synthesizing information across multiple documents.
If your assistant handles simple part number lookups, a fast model keeps response times low. If it needs to compare warranty coverage across multiple policy documents and reason about edge cases, a more capable model will produce better results.
Keep the scope narrow
An assistant that tries to do everything does nothing well. Build multiple focused assistants rather than one that covers every topic.
Instead of:
- One "Company Assistant" that answers parts questions, HR questions, warranty questions, and IT support questions
Build:
- A Parts Lookup Assistant with parts catalogs and fitment data
- A Warranty Advisor with warranty policies and claim procedures
- A New Hire Coach with onboarding and training materials
Focused assistants produce better answers because the model has less irrelevant context to sort through, and users know which assistant to pick for their question.
Write instructions the model can follow
System prompts work best when they are specific and actionable. Avoid vague instructions like "be professional" or "be thorough" -- the model interprets these differently than you might expect.
Effective instructions include:
- Output format: "Always respond with a bulleted list of matching parts, each including part number, price, and fitment notes."
- Guardrails: "If the user asks about a vehicle year/make/model not covered in the knowledge base, say you don't have data for that application rather than guessing."
- Tone guidance: "Use short, direct sentences. Avoid jargon that a new counter employee wouldn't know."
- Scope limits: "Only answer questions about brake and suspension components. For other product categories, direct the user to the appropriate assistant."
Test with real questions
Before rolling out an assistant to your team, test it with the actual questions your users ask. Good test questions include:
- Common lookups: "What brake pads fit a 2022 Toyota Camry?"
- Edge cases: "Is this part compatible with both the LE and SE trim?"
- Out-of-scope requests: "What's the company PTO policy?" (should the assistant decline?)
- Ambiguous queries: "I need brakes for a Camry" (does it ask for the year?)
Check that source citations point to the right documents. If the assistant pulls from the wrong knowledge base or cites irrelevant chunks, adjust the knowledge base assignments or refine the system prompt.
Review and iterate
A good assistant is not a set-and-forget configuration. Review it regularly:
- Monitor user feedback. If users report wrong answers, check whether the knowledge base is current and the system prompt covers the scenario.
- Update knowledge bases. New model years, revised TSBs, and updated catalogs mean the assistant's data needs refreshing. See How should I structure knowledge bases? for maintenance guidance.
- Refine the system prompt. As you see patterns in how users interact with the assistant, add instructions that address common failure modes.
Checklist for a new assistant
- Define the role -- what questions should this assistant answer?
- Write a specific system prompt with domain, behavior rules, and output format.
- Attach only the knowledge bases relevant to the role.
- Choose a model that matches the complexity of the task.
- Test with real user questions, including edge cases and out-of-scope requests.
- Review source citations to confirm the right documents are being retrieved.
- Deploy to a small group first, gather feedback, then roll out more broadly.
Related articles
- How should I structure knowledge bases? -- organizing documents for better retrieval
- Can assistants use multiple knowledge bases? -- attaching and managing knowledge bases per assistant
- Can assistants call APIs and use tools? -- extending assistants with live data and external services
- What are workspaces, models, tools, and knowledge bases? -- overview of SecureAI building blocks
enterprise-deployment
What infrastructure is required for SecureAI?
SecureAI is a fully managed service hosted on Google Cloud Platform. For most organizations, no on-premises infrastructure is required -- you access SecureAI through a web browser, and your SecureAI team handles hosting, scaling, and maintenance.
This FAQ covers the infrastructure requirements for both the standard hosted deployment and optional self-hosted or hybrid configurations.
Standard hosted deployment (recommended)
With the standard deployment, SecureAI runs entirely on managed infrastructure. Your organization needs:
- Modern web browser -- Chrome, Firefox, Edge, or Safari (latest two major versions). No desktop client or browser plugin is required.
- Internet connectivity -- users need HTTPS access (port 443) to your SecureAI instance URL. If your organization uses a web proxy or firewall, allow outbound traffic to
*.secureai.app(or your custom domain). - Identity provider (optional but recommended) -- if you use SSO, your IdP (Okta, Azure AD, Google Workspace, Auth0) must be reachable from SecureAI. See Does SecureAI support Google, Microsoft, Okta, or Auth0 login? for setup details.
There is no server, database, or container infrastructure to provision on your side.
Network and firewall requirements
Organizations with restrictive network policies should allow:
| Direction | Destination | Port | Purpose |
|---|---|---|---|
| Outbound | Your SecureAI instance URL | 443 (HTTPS) | User access to the application |
| Outbound | Your identity provider | 443 (HTTPS) | SSO authentication (if configured) |
| Inbound (optional) | Your SCIM endpoint | 443 (HTTPS) | Automated user provisioning (if configured) |
If you use IP allowlisting, configure your corporate egress IPs in the SecureAI admin panel to restrict access to approved networks.
Self-hosted model providers (optional)
Some organizations choose to run their own AI model providers (such as Ollama or vLLM) to keep prompts entirely within their network. If you use self-hosted models, you need:
- GPU-equipped server -- most LLMs require NVIDIA GPUs with sufficient VRAM. Requirements depend on the model:
- 7B parameter models: 1x NVIDIA A10 or equivalent (24 GB VRAM)
- 13B--30B parameter models: 1x NVIDIA A100 (40--80 GB VRAM)
- 70B+ parameter models: multiple A100s or H100s
- Model hosting software -- Ollama, vLLM, or a compatible OpenAI-format API server.
- Network path -- SecureAI must be able to reach your model server's API endpoint over HTTPS. If your model server is behind a VPN or private network, you will need a secure tunnel or VPN peering with your SecureAI instance.
For setup instructions, see Adding Custom Model Providers.
On-premises deployment (Enterprise plan)
For organizations that require full control over the deployment environment, SecureAI offers an on-premises option on Enterprise plans. On-premises deployment requires:
Compute
- Kubernetes cluster (v1.27+) -- EKS, GKE, AKS, or self-managed. Minimum 3 nodes.
- Per-node minimum: 4 vCPUs, 16 GB RAM, 100 GB SSD.
- Scaling: additional nodes for larger user bases. Your account team will provide sizing guidance based on expected concurrent users.
Storage
- PostgreSQL 15+ -- managed (RDS, Cloud SQL, Azure Database) or self-hosted. Stores conversations, user data, audit logs, and configuration.
- Object storage -- S3-compatible storage (AWS S3, GCS, MinIO) for uploaded documents and knowledge-base files.
- Persistent volumes -- for Kubernetes pod storage (standard StorageClass with ReadWriteOnce support).
Networking
- Load balancer -- Layer 7 (ALB, Ingress controller, or equivalent) with TLS termination.
- DNS -- a domain or subdomain pointed at the load balancer.
- TLS certificate -- for HTTPS. Bring your own certificate or use cert-manager with Let's Encrypt.
- Outbound internet (optional) -- required only if using cloud-hosted AI model providers (OpenAI, Anthropic, Google). Not needed if using exclusively self-hosted models.
GPU infrastructure (if running models locally)
If your on-premises deployment includes self-hosted model inference, provision GPU nodes in your Kubernetes cluster. See the "Self-hosted model providers" section above for GPU sizing.
Hybrid deployment
Some organizations use a hybrid approach:
- SecureAI application runs on managed infrastructure (standard hosted deployment).
- AI model inference runs on-premises or in a private cloud, keeping prompts within the corporate network.
This gives you managed infrastructure for the application layer while maintaining data residency for model interactions. Contact your account team to configure VPN peering or private connectivity between SecureAI and your model servers.
How to deploy internally
- Choose your deployment model -- hosted (default), on-premises, or hybrid. Most organizations start with hosted.
- Configure SSO -- connect your identity provider so users can log in with existing credentials. See How to Configure OIDC SSO or How to Configure SAML SSO.
- Set up network access -- update firewalls and proxy rules per the table above.
- Add model providers -- connect to cloud AI providers or your self-hosted models. See Adding Custom Model Providers.
- Provision users -- add users manually, via CSV import, or through SCIM provisioning. See User Management.
- Configure security policies -- set up content filtering, data retention, and access controls. See Content Filtering and Safety Settings and Configuring Data Retention Policies.
For on-premises deployments, your account team provides a deployment guide, Helm charts, and configuration templates specific to your infrastructure.
Related articles
- Is SecureAI just OpenWebUI? -- what SecureAI adds on top of OpenWebUI
- How is data encrypted in SecureAI? -- encryption and data residency
- Adding Custom Model Providers -- connecting cloud and self-hosted models
- How to Configure OIDC SSO -- single sign-on setup
- Setting Up IP Allowlisting -- restricting access to approved networks
release-notes
Where is the SecureAI roadmap?
SecureAI does not publish a fixed public roadmap. Instead, we share what is coming through release notes and product announcements so that you always have accurate, up-to-date information about new features and changes.
How to stay informed about upcoming features
Release notes
Every SecureAI update is documented in our release notes. Each entry describes what changed, why it matters, and how to use the new capability. You can find release notes in the Release Notes section of this support site.
In-app announcements
When a significant feature launches, SecureAI displays an in-app notification the next time you log in. These announcements link to the relevant release note or help article for full details.
Administrator notifications
Organization administrators receive email notifications for changes that affect security settings, billing, compliance, or available models. If you are not an admin and want advance notice, ask your administrator to share relevant updates with your team.
Why there is no public roadmap
Publishing a fixed roadmap creates expectations that are difficult to meet when priorities shift based on customer feedback, security requirements, or upstream AI provider changes. By sharing features only when they are ready, we ensure that every announcement reflects something you can use today -- not a promise about tomorrow.
How to request features
If there is a feature you would like to see in SecureAI:
- Submit a feature request through the support site or by contacting your account representative.
- Talk to your administrator -- admins can submit requests on behalf of their organization and have visibility into which requests are being evaluated.
- Check existing FAQs and articles -- the feature you want may already exist. Search this support site before submitting a request.
Feature requests are reviewed by the product team and prioritized based on customer demand, technical feasibility, and alignment with SecureAI's security-first approach.
Related articles
- What is SecureAI? -- overview of the platform
- How is SecureAI different from ChatGPT? -- what sets SecureAI apart
- What AI models are supported? -- currently available models
troubleshooting
Why can't I access my workspace?
If you see an error when trying to open a workspace, or your session seems to have expired, there are several common causes and fixes.
Your session expired
SecureAI sessions expire after a period of inactivity. When this happens you are returned to the login screen or see a "Session expired" message.
Fix: Log in again. Your conversations and workspace data are not lost -- they are stored on the server and will reappear once you authenticate.
Sessions may also expire if your administrator has configured a shorter session timeout for your organization. If you find yourself logged out frequently, ask your administrator whether the timeout can be extended.
You were removed from the workspace
Workspace administrators can add and remove members at any time. If you previously had access to a workspace but no longer see it in your workspace list, your administrator may have removed you.
Fix: Contact your workspace administrator and ask them to re-add you. Administrators manage workspace membership from the Admin Panel under Workspace > Members.
The workspace was deleted or renamed
If the workspace no longer exists, you will see an error when trying to access it via a saved link or bookmark.
Fix: Check the workspace list in the sidebar for the current set of available workspaces. If the workspace you need is missing, contact your administrator to confirm whether it was deleted or renamed.
Your account permissions changed
Your administrator may have changed your role from a level that had workspace access to one that does not. For example, moving from a standard user role to a restricted role may limit which workspaces you can see.
Fix: Ask your administrator to verify your role and permissions in Settings > Users.
Browser or cache issues
Stale browser data can sometimes cause workspace access errors, especially after a SecureAI update.
Fix: Try these steps in order:
- Hard-refresh the page (
Ctrl+Shift+Ron Windows/Linux,Cmd+Shift+Ron Mac). - Clear your browser cache and cookies for the SecureAI domain.
- Try a private/incognito window.
- Try a different browser.
Still stuck?
If none of the above resolves the issue, contact support with:
- Your email address and organization name
- The name of the workspace you are trying to access
- The exact error message or behavior you see (a screenshot helps)
- Whether the issue started suddenly or after a specific change
Related articles
- I can't log in. What should I do? -- general login troubleshooting
- What are workspaces, models, tools, and knowledge bases? -- overview of workspaces
- How do I create my SecureAI account? -- account setup
Why can't I upload a document?
If SecureAI rejects your file upload or the upload fails silently, work through these common causes in order.
1. Unsupported file type
SecureAI accepts a specific set of file formats: PDF, TXT, MD, DOCX, XLSX, PPTX, and CSV for documents, and JPEG, PNG, GIF, and WebP for images. If your file has a different extension, convert it to a supported format before uploading. See What file types and sizes are supported for upload? for the full list.
Common mistakes:
- Uploading
.doc(legacy Word) instead of.docx - Uploading
.xlsinstead of.xlsx - Uploading
.ziparchives -- extract the files first and upload them individually
2. File is too large
The maximum file size is 100 MB per file. If your file exceeds this limit, SecureAI will display an error and reject the upload. For large documents:
- Split multi-section PDFs into smaller files by chapter or product line
- Compress images before uploading if they are unusually large
- Export spreadsheets without embedded images to reduce file size
Files under 25 MB process fastest. See What file types and sizes are supported for upload? for recommended sizes.
3. Storage quota exceeded
Your organization may have a storage limit configured by your administrator. If you see a storage-related error, ask your administrator to check the current usage in the Admin Panel. They can free up space by removing outdated documents or increase the quota.
4. Browser or network issue
Upload failures can be caused by your browser or network connection:
- Slow or unstable connection: Large files may time out on slow connections. Try a wired connection or a more stable network.
- Browser cache: Clear your browser cache and cookies, then try again.
- Browser extensions: Ad blockers or privacy extensions can interfere with uploads. Try disabling extensions or using a private/incognito window.
- Outdated browser: Use the latest version of Chrome, Firefox, Edge, or Safari.
5. File is corrupted or empty
SecureAI cannot process files that are corrupted or contain no extractable content. Try opening the file locally to confirm it is not damaged. If a PDF opens but appears blank, it may be an image-only scan without an OCR text layer -- run it through OCR software and re-upload.
6. Permissions issue
You may not have permission to upload files in your current context:
- Knowledge bases: Only users with edit access to a knowledge base can upload documents to it. Ask the knowledge base owner or an administrator to grant you access.
- Workspace restrictions: Your workspace administrator may have restricted file uploads. Check with your administrator.
7. Upload stuck or processing
If your file appears to upload but never finishes processing:
- Wait a few minutes -- large files are indexed in the background and may take time.
- Refresh the page. If the file still shows as processing after several minutes, delete it and re-upload.
- If the problem persists, try uploading a smaller test file to confirm uploads are working in general.
Still not working?
If none of the above resolves your issue, contact support with:
- The file name and format you are trying to upload
- The exact error message (or a screenshot)
- The knowledge base or conversation where you are uploading
- Your browser and operating system
Related articles
- What file types and sizes are supported for upload? -- supported formats and size limits
- How should I structure knowledge bases? -- organizing documents for better retrieval
- How does RAG work in SecureAI? -- how uploaded documents are searched and used in responses
Why is indexing slow?
When you upload documents to a knowledge base, SecureAI parses, chunks, and creates embeddings for each file. This process -- called indexing -- runs in the background and must finish before documents become searchable. Several factors can make indexing take longer than expected.
Common causes of slow indexing
Large or complex files
| Factor | Impact |
|---|---|
| File size | Files over 25 MB take significantly longer to process. A 100 MB PDF may take several minutes to index compared to seconds for a small text file. |
| Page count | PDFs with more than 100 pages require proportionally more time for text extraction, chunking, and embedding. |
| Complex formatting | Documents with dense tables, nested columns, embedded images, or mixed layouts require more processing to extract clean text. |
| Scanned PDFs | If a scanned PDF includes an OCR text layer, the extracted text may be noisy, leading to more chunks and slower embedding. If it lacks an OCR layer entirely, indexing may appear to complete but the content will not be searchable. |
Bulk uploads
Uploading many documents at once queues them all for processing. Each document is indexed sequentially within a knowledge base. If you upload 200 files at once, the last file in the queue will not start indexing until the previous 199 have finished.
Embedding model load
Embedding generation is the most compute-intensive step. During periods of high usage across your organization -- for example, when multiple users are uploading documents simultaneously -- the embedding service may take longer to process each request.
How to check indexing status
After uploading documents to a knowledge base, you can monitor progress:
- Open the knowledge base in the SecureAI interface.
- Look for a progress indicator next to each document. Documents still being indexed will show a processing status.
- Once indexing completes, the document status changes to indicate it is ready and searchable.
Documents that are still indexing will not appear in search results. If a user asks a question and a relevant document was recently uploaded, the answer may be incomplete until indexing finishes.
How to speed up indexing
Split large files before uploading
Rather than uploading a single 500-page catalog, split it into smaller files by section or product line. Each smaller file indexes faster, and the first files become searchable while later ones are still processing.
Upload in batches
Instead of uploading your entire document library at once, upload in batches of 10-20 files. Wait for each batch to finish indexing before starting the next. This also makes it easier to spot problem files that stall the queue.
Optimize documents before upload
- Remove unnecessary pages. Strip cover pages, tables of contents, indexes, and legal boilerplate. These create low-value chunks that slow indexing and dilute search results.
- Use text-based formats. PDFs with selectable text, Word documents (.docx), Markdown, and CSV files parse faster than scanned images or complex PowerPoint files.
- Simplify formatting. Documents with clean headings and simple layouts extract faster than those with multi-column layouts, text boxes, or heavy graphical elements.
Avoid re-uploading during peak hours
If your organization has many concurrent users, schedule large uploads during off-peak hours when the embedding service has more capacity.
When indexing appears stuck
If a document has been in a processing state for an unusually long time (more than 15-20 minutes for a typical document):
- Check the file format. Unsupported or corrupted files may stall without a clear error. Try re-uploading the file or converting it to a different supported format.
- Check the file size. Files close to the 100 MB limit take the longest. Consider splitting the file into smaller parts.
- Remove and re-upload. Delete the stuck document from the knowledge base and upload it again. A transient processing error may have caused the stall.
- Contact your administrator. If multiple documents are stuck or the problem persists after re-uploading, there may be a system-level issue that requires administrator attention.
Related articles
- What file types and sizes are supported for upload? -- supported formats and size limits
- How does RAG work in SecureAI? -- how document search and retrieval works under the hood
- How should I structure knowledge bases? -- organizing documents for better retrieval
Why is the AI model not responding?
When you send a message in SecureAI and the model does not respond -- or the response hangs, times out, or returns an error -- there are several possible causes. Work through these checks in order.
1. Check the model status indicator
Look at the model selector in your chat interface. If the model shows a warning icon or "unavailable" label, the model's upstream provider may be experiencing an outage. SecureAI connects to external AI providers (such as OpenAI, Anthropic, or Google), and their availability is outside SecureAI's control.
What to do: Try selecting a different model from the model dropdown. If one provider is down, another model from a different provider may still be available.
2. Verify your message is not too long
Each model has a maximum context length measured in tokens. If your message -- combined with the conversation history and any attached knowledge base content -- exceeds this limit, the model may fail silently or return an error.
What to do:
- Start a new chat to clear the conversation history.
- Shorten your message or break it into smaller parts.
- If you are using an assistant with knowledge bases attached, the retrieved context adds to the token count. See What is a token and how is usage measured? for details.
3. Check your network connection
SecureAI requires an active internet connection to reach the AI model providers. If your network is down or a firewall is blocking outbound requests, the model cannot respond.
What to do:
- Confirm you can load other websites.
- If you are on a corporate network, check whether your IT team restricts access to AI provider endpoints.
- Try refreshing the page or switching to a different network.
4. Review your usage limits
Your organization's plan includes token usage limits. If your team has exhausted its allocation for the billing period, model requests may be throttled or blocked.
What to do: Ask your administrator to check usage in the Admin Panel. See How is SecureAI billed? for information on plan limits and overages.
5. Check for assistant configuration issues
If you are chatting through an assistant rather than a direct model chat, the assistant's configuration may be causing the problem:
- Invalid or removed model: The model assigned to the assistant may have been removed from your organization's available models by an administrator.
- Broken tool or function call: If the assistant uses tools (API integrations, function calling), a misconfigured tool can cause the response to fail. See Can assistants call APIs and use tools? for setup guidance.
- Overly long system prompt: A very long system prompt consumes tokens from the context window, leaving less room for the conversation. This can trigger context-length errors on shorter-context models.
What to do: Try chatting with the same model directly (without the assistant) to isolate whether the issue is with the model or the assistant configuration.
6. Clear your browser state
Occasionally, stale browser state can interfere with the chat interface.
What to do:
- Hard-refresh the page (Ctrl+Shift+R or Cmd+Shift+R).
- Clear your browser cache and cookies for the SecureAI domain.
- Try a private/incognito window.
- Try a different browser.
7. Contact support
If none of the above steps resolve the issue, contact support with:
- The model you were trying to use
- The approximate time the issue started
- Any error messages displayed (exact text or a screenshot)
- Whether the issue affects all models or only a specific one
- Whether other users in your organization are experiencing the same problem
Related articles
- What AI models are supported? -- available models and providers
- What is a token and how is usage measured? -- understanding token limits
- How is SecureAI billed? -- plan limits and usage tracking
- Can assistants call APIs and use tools? -- troubleshooting tool configurations