
How do we manage AI safety?

SecureAI provides a layered set of safety controls that let administrators manage what AI models can say, detect misuse, and enforce organizational policies -- without requiring deep AI expertise.

Content filtering

SecureAI evaluates both user prompts and model responses against your configured rules before anything is shown to end users. Administrators configure these in Admin Panel > Settings > Content & Safety.

The filtering pipeline runs in two stages:

  1. Prompt-side filters check user input before it reaches the model.
  2. Response-side filters check model output before it reaches the user.

Built-in content categories include harmful content, hate speech, PII exposure, financial advice, and legal advice. Each category has adjustable sensitivity thresholds (off, low, medium, high). You can also create custom keyword and regex rules for industry-specific terms -- for example, blocking the model from guessing part numbers or generating competitor pricing.

For full configuration details, see Content Filtering and Safety Settings.
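To make the custom keyword and regex rules concrete, here is a minimal sketch of how such a rule set might behave. This is an illustration only, not SecureAI's actual rule engine or API: the rule list, pattern strings, and `apply_custom_rules` helper are all hypothetical.

```python
import re

# Hypothetical custom rules: each pairs a compiled pattern with an action.
# The patterns below are examples, not SecureAI defaults.
CUSTOM_RULES = [
    # Block discussion of competitor pricing.
    (re.compile(r"competitor\s+pricing", re.IGNORECASE), "block"),
    # Flag anything that looks like a guessed part number, e.g. "PN-12345".
    (re.compile(r"\bPN-\d{4,}\b"), "flag"),
]

def apply_custom_rules(text: str) -> list[tuple[str, str]]:
    """Return (matched_text, action) for every rule that fires on the text."""
    hits = []
    for pattern, action in CUSTOM_RULES:
        for match in pattern.finditer(text):
            hits.append((match.group(0), action))
    return hits
```

The same kind of rule would run on both sides of the pipeline: once against the user prompt and once against the model response.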

Prompt injection protection

Prompt injection is an attack in which a user tries to override the system prompt or bypass safety instructions. SecureAI detects common injection patterns, including direct overrides, role reassignment attempts, encoded bypasses, and delimiter injection.

Detection sensitivity can be set to low, medium, or high in Admin Panel > Settings > Content & Safety > Prompt Protection. Detected attempts are blocked and logged in the audit trail.
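The sensitivity levels can be pictured as progressively larger sets of patterns being checked. The sketch below is a simplified illustration of that idea, not SecureAI's actual detector; the pattern strings and the `detect_injection` helper are assumptions for the example.

```python
import re

# Illustrative patterns grouped by the minimum sensitivity at which
# they are checked. Real detection is more sophisticated than regexes.
INJECTION_PATTERNS = {
    "low": [
        r"ignore (all )?(previous|prior) instructions",  # direct override
    ],
    "medium": [
        r"you are now",                                  # role reassignment
        r"pretend (to be|you are)",
    ],
    "high": [
        r"base64:",                                      # encoded bypass hint
        r"<\s*/?\s*system\s*>",                          # delimiter injection
    ],
}
LEVELS = ["low", "medium", "high"]

def detect_injection(prompt: str, sensitivity: str = "medium") -> bool:
    """Check the prompt against every pattern up to the chosen sensitivity."""
    active = LEVELS[: LEVELS.index(sensitivity) + 1]
    return any(
        re.search(p, prompt, re.IGNORECASE)
        for level in active
        for p in INJECTION_PATTERNS[level]
    )
```

Raising the sensitivity widens the net, which catches more attacks but also risks more false positives on legitimate prompts.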

System prompt guardrails

Administrators can set organization-wide system prompts that define baseline behavior for all models. These guardrails persist across conversations and cannot be overridden by end users. Use system prompts to:

  • Restrict the model to your domain (e.g., automotive aftermarket topics only)
  • Require disclaimers on technical advice (e.g., warranty or safety-critical information)
  • Enforce a consistent tone and response format

Per-model safety overrides let you apply different rules to different models -- for example, stricter filtering on a general-purpose model while relaxing rules on a model fine-tuned for your internal documentation.

Rate limiting

Rate limiting prevents individual users from consuming excessive resources or generating high volumes of unreviewed content. Options include:

  • Requests per minute/hour -- caps the number of API calls per user
  • Daily token budget -- limits total token consumption per user per day

Configure rate limits in Admin Panel > Settings > Rate Limiting.

Audit logging and review

All safety-related events are recorded in the audit log:

  • Content filter matches (blocked and flagged)
  • Prompt injection detection events
  • Admin configuration changes to safety settings
  • Admin access to user conversations

Export audit logs as CSV from Admin Panel > Audit Logs > Export for compliance reporting (SOC 2, GDPR, HIPAA).
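A simple first pass over an exported audit log is to count events by type, which shows where your filters are firing most often. The sketch below assumes a CSV shape with `timestamp`, `event_type`, `user`, and `detail` columns; check the header of your actual export, as these column names are assumptions.

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical export excerpt; your real CSV columns may differ.
SAMPLE_EXPORT = """timestamp,event_type,user,detail
2025-01-06T09:14:00Z,content_filter_block,alice,PII exposure
2025-01-06T10:02:00Z,prompt_injection,bob,direct override
2025-01-06T11:30:00Z,content_filter_block,carol,financial advice
"""

def count_events(csv_text: str) -> Counter:
    """Tally audit events by event type."""
    reader = csv.DictReader(StringIO(csv_text))
    return Counter(row["event_type"] for row in reader)
```

Reviewing these tallies weekly is an easy way to spot a category whose sensitivity needs tuning.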

Reducing hallucinations

AI models can generate plausible-sounding but incorrect information. To reduce hallucination risk:

  • Use knowledge bases -- ground model responses in your organization's verified documents (parts catalogs, service manuals, technical bulletins).
  • Set temperature low -- lower temperature values produce more deterministic, less creative responses.
  • Instruct via system prompts -- tell the model to say "I don't know" rather than guess when it lacks information.

For detailed strategies, see How to Prevent AI Hallucinations.
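The three levers above often come together in a single request configuration. The sketch below uses parameter names common to chat-completion-style APIs; the exact field names, and the model name shown, are assumptions and may differ in SecureAI.

```python
# Illustrative request settings combining the hallucination-reduction levers:
# low temperature plus a system prompt that instructs the model not to guess.
grounded_request = {
    "model": "internal-docs-model",  # hypothetical model name
    "temperature": 0.2,              # low = more deterministic, less creative
    "messages": [
        {
            "role": "system",
            "content": (
                "Answer only from the attached knowledge base. "
                "If the information is not present, reply exactly: "
                '"I don\'t know." Do not guess.'
            ),
        },
        {
            "role": "user",
            "content": "What is the torque spec for the front caliper bolts?",
        },
    ],
}
```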

Getting started

If you are new to AI safety configuration:

  1. Review the default filtering categories -- SecureAI ships with automotive-aftermarket-appropriate defaults enabled.
  2. Add custom rules for any organization-specific terms that should be blocked or flagged.
  3. Enable prompt injection protection at medium sensitivity.
  4. Set a system prompt that restricts the model to your business domain.
  5. Review the audit log weekly for the first month to tune sensitivity levels.


How should I structure knowledge bases?

How you organize documents into knowledge bases directly affects the quality of AI responses. A well-structured knowledge base helps SecureAI retrieve the right information; a poorly structured one returns noise or misses relevant content entirely.

One topic per knowledge base

Group documents by subject area rather than dumping everything into a single knowledge base. When a knowledge base covers too many unrelated topics, search results become diluted -- a question about brake torque specs might pull in irrelevant chunks from HR policies or marketing materials.

Good examples for an automotive aftermarket organization:

Knowledge base | What goes in it
Service Procedures | OEM service manuals, technical service bulletins, repair procedures
Parts Catalogs | Parts listings, fitment guides, cross-reference tables
Warranty Policies | Warranty terms, claim procedures, coverage matrices
Training Materials | Onboarding guides, certification study materials, how-to video transcripts
Product Specs | Spec sheets, material safety data sheets, installation instructions

This lets users (or assistant configurations) attach only the knowledge bases relevant to their question, which improves retrieval accuracy.

Keep documents focused and well-structured

The quality of individual documents matters as much as how you group them.

Use clear headings. SecureAI splits documents into chunks, and headings help the chunker create coherent sections. A document with no headings gets split at arbitrary points, producing chunks that mix unrelated information.

One topic per document. A 50-page PDF covering brake systems, electrical diagnostics, and transmission service will produce chunks that blend topics. Split it into separate documents -- one per system or procedure.

Remove noise before uploading. Strip cover pages, tables of contents, indexes, legal boilerplate, and repeated headers/footers. These create junk chunks that waste retrieval slots.

Use text-based formats when possible. PDFs with selectable text, Word documents, and Markdown files parse cleanly. Scanned PDFs without OCR, image-heavy documents, and complex multi-column layouts may not extract well.
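The advice about headings follows from how chunking works: section boundaries give the chunker natural split points. The sketch below is a deliberately simplified illustration of heading-aware chunking, not SecureAI's actual chunker, and it assumes Markdown-style "#" headings.

```python
# Simplified heading-aware chunking: split at "#" heading lines so each
# chunk stays on one topic. A document with no headings would come back
# as a single blob and get split at arbitrary points instead.
def chunk_by_headings(text: str) -> list[str]:
    chunks, current = [], []
    for line in text.splitlines():
        if line.startswith("#") and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return chunks
```

Run against a two-section document, this produces one chunk per section, which is exactly the coherence that clear headings buy you at retrieval time.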

Name documents clearly

Document names appear in source citations. When users see a citation like doc_final_v3_REVISED(2).pdf, they cannot judge whether the source is trustworthy. Use descriptive names:

  • 2024-camry-front-brake-service-procedure.pdf -- clear and specific
  • warranty-claim-process-north-america-2025.pdf -- includes scope and date
  • brake-pad-cross-reference-aftermarket-to-oem.xlsx -- describes the content

Keep knowledge bases current

Outdated documents produce outdated answers. Establish a review cycle:

  1. Set a refresh schedule. Review each knowledge base quarterly or whenever source materials are updated (new model year, revised TSB, updated policy).
  2. Replace rather than duplicate. When a document is updated, delete the old version and upload the new one. Two versions of the same document create conflicting chunks that confuse retrieval.
  3. Check for staleness. If users report incorrect answers from a knowledge base, check whether the source documents are current.

Right-size your knowledge bases

Problem | Symptom | Fix
Too many documents in one KB | Slow uploads, irrelevant results, mixed topics in responses | Split into topic-specific knowledge bases
Too few documents | Thin answers, frequent "I don't have information on that" | Consolidate related thin KBs or add more source material
Documents too long | Chunks blend multiple topics | Split into focused documents by topic or section
Documents too short | Chunks lack context | Combine related short documents or add supporting context

Use assistants to scope knowledge base access

Rather than attaching all knowledge bases to every conversation, configure assistants with specific knowledge base assignments:

  • A Parts Lookup Assistant gets the parts catalogs and cross-reference tables.
  • A Service Advisor Assistant gets service procedures, warranty policies, and known issues.
  • A Training Assistant gets onboarding and certification materials.

This scoping improves answer quality and prevents the model from pulling in irrelevant information. See Can assistants use multiple knowledge bases? for configuration details.

Checklist for a new knowledge base

  1. Define the topic scope -- what questions should this knowledge base answer?
  2. Gather source documents and remove noise (cover pages, TOCs, boilerplate).
  3. Split large multi-topic documents into focused single-topic files.
  4. Name files descriptively.
  5. Upload and test with representative questions.
  6. Review source citations in responses -- are the right chunks being retrieved?
  7. Adjust by removing low-quality documents or adding missing coverage.


What makes a good assistant?

A good assistant in SecureAI is one that consistently gives accurate, relevant answers for a specific job. The difference between a helpful assistant and a frustrating one comes down to how well it is configured -- its system prompt, knowledge base selection, model choice, and scope.

Give it a clear role

The most important thing you can do is write a focused system prompt that tells the assistant what it is, who it serves, and how it should behave.

Weak system prompt:

You are a helpful assistant.

Strong system prompt:

You are a parts counter advisor for an automotive aftermarket distributor. You help store employees look up part numbers, check fitment, and find alternatives when a part is out of stock. Always include the part number and application year/make/model in your answers. If you are unsure about fitment, say so rather than guessing.

A strong system prompt does three things:

  1. Defines the domain -- the assistant knows what kind of questions to expect.
  2. Sets behavioral rules -- it knows when to qualify answers or decline to guess.
  3. Specifies output format -- users get consistently structured responses.

Attach the right knowledge bases

An assistant is only as good as the information it can access. Attach knowledge bases that match the assistant's role -- and nothing more.

Assistant role | Attach | Do not attach
Parts counter advisor | Parts catalogs, cross-reference tables, fitment guides | HR policies, marketing materials
Service writer | Service procedures, TSBs, warranty terms | Parts pricing, sales training
Training coach | Onboarding guides, certification materials | Customer-facing product specs

Attaching irrelevant knowledge bases dilutes search results. The assistant retrieves chunks from all attached knowledge bases, so unrelated content competes with the information your users actually need. See How should I structure knowledge bases? for organizing documents effectively.

Choose the right model

Different models have different strengths. Match the model to the task:

  • Faster, lighter models work well for straightforward lookups, FAQs, and structured data queries where speed matters more than nuance.
  • Larger, more capable models are better for complex reasoning, multi-step analysis, and tasks that require synthesizing information across multiple documents.

If your assistant handles simple part number lookups, a fast model keeps response times low. If it needs to compare warranty coverage across multiple policy documents and reason about edge cases, a more capable model will produce better results.

Keep the scope narrow

An assistant that tries to do everything does nothing well. Build multiple focused assistants rather than one that covers every topic.

Instead of:

  • One "Company Assistant" that answers parts questions, HR questions, warranty questions, and IT support questions

Build:

  • A Parts Lookup Assistant with parts catalogs and fitment data
  • A Warranty Advisor with warranty policies and claim procedures
  • A New Hire Coach with onboarding and training materials

Focused assistants produce better answers because the model has less irrelevant context to sort through, and users know which assistant to pick for their question.

Write instructions the model can follow

System prompts work best when they are specific and actionable. Avoid vague instructions like "be professional" or "be thorough" -- the model interprets these differently than you might expect.

Effective instructions include:

  • Output format: "Always respond with a bulleted list of matching parts, each including part number, price, and fitment notes."
  • Guardrails: "If the user asks about a vehicle year/make/model not covered in the knowledge base, say you don't have data for that application rather than guessing."
  • Tone guidance: "Use short, direct sentences. Avoid jargon that a new counter employee wouldn't know."
  • Scope limits: "Only answer questions about brake and suspension components. For other product categories, direct the user to the appropriate assistant."

Test with real questions

Before rolling out an assistant to your team, test it with the actual questions your users ask. Good test questions include:

  • Common lookups: "What brake pads fit a 2022 Toyota Camry?"
  • Edge cases: "Is this part compatible with both the LE and SE trim?"
  • Out-of-scope requests: "What's the company PTO policy?" (should the assistant decline?)
  • Ambiguous queries: "I need brakes for a Camry" (does it ask for the year?)

Check that source citations point to the right documents. If the assistant pulls from the wrong knowledge base or cites irrelevant chunks, adjust the knowledge base assignments or refine the system prompt.

Review and iterate

A good assistant is not a set-and-forget configuration. Review it regularly:

  1. Monitor user feedback. If users report wrong answers, check whether the knowledge base is current and the system prompt covers the scenario.
  2. Update knowledge bases. New model years, revised TSBs, and updated catalogs mean the assistant's data needs refreshing. See How should I structure knowledge bases? for maintenance guidance.
  3. Refine the system prompt. As you see patterns in how users interact with the assistant, add instructions that address common failure modes.

Checklist for a new assistant

  1. Define the role -- what questions should this assistant answer?
  2. Write a specific system prompt with domain, behavior rules, and output format.
  3. Attach only the knowledge bases relevant to the role.
  4. Choose a model that matches the complexity of the task.
  5. Test with real user questions, including edge cases and out-of-scope requests.
  6. Review source citations to confirm the right documents are being retrieved.
  7. Deploy to a small group first, gather feedback, then roll out more broadly.
