SecureAI gives you access to multiple AI models through your organization's OpenWebUI instance. Each model has different strengths — speed, accuracy, cost, and specialization. This guide helps you pick the right model for the task at hand so you get the best results without wasting time or budget.
Why Model Selection Matters
Not every question needs the most powerful model. A quick part number lookup does not need the same horsepower as a complex diagnostic analysis across multiple vehicle systems. Choosing the right model means:
- Faster answers when you use a lighter model for simple queries.
- Better accuracy when you use a more capable model for complex reasoning.
- Lower cost for your organization when you match model capability to task complexity.
Available Models at a Glance
Your administrator configures which models are available. The exact list depends on your organization's setup, but SecureAI deployments typically include models in these tiers:
| Tier | Typical Models | Speed | Reasoning | Best For |
|---|---|---|---|---|
| Fast | Smaller / distilled models | Very fast (1-3s) | Good for straightforward tasks | Part number lookups, simple Q&A, quick translations |
| Balanced | Mid-size models | Moderate (3-10s) | Strong general reasoning | Cross-referencing parts, interpreting compatibility data, summarizing documents |
| Advanced | Large frontier models | Slower (10-30s) | Deep analysis and multi-step reasoning | Complex diagnostics, multi-vehicle comparisons, analyzing uploaded PDFs with detailed fitment logic |
Your organization may label these differently in the model selector. Check with your admin if you are unsure which models are available to you.
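If you script against your deployment, the tier table above can be mirrored as a small lookup structure. A minimal sketch in Python; the tier names and latency ranges come from the table, not from any OpenWebUI API, and the helper name is hypothetical:

```python
# Hypothetical helper mirroring the tier table above; tier names and
# latency ranges are taken from the table, not from an API.
TIERS = {
    "fast": {"latency_s": (1, 3), "best_for": "part number lookups, simple Q&A, quick translations"},
    "balanced": {"latency_s": (3, 10), "best_for": "cross-referencing parts, compatibility checks, summaries"},
    "advanced": {"latency_s": (10, 30), "best_for": "complex diagnostics, multi-vehicle comparisons, PDF analysis"},
}

def describe_tier(name: str) -> str:
    """One-line description of a tier; raises KeyError for unknown names."""
    tier = TIERS[name.lower()]
    lo, hi = tier["latency_s"]
    return f"{name}: roughly {lo}-{hi}s per answer; best for {tier['best_for']}"

print(describe_tier("fast"))
```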
How to Switch Models
In the OpenWebUI chat interface:
- Look for the model selector dropdown at the top of the conversation or in the message input area.
- Click it to see your available models.
- Select the model you want before sending your message.
You can switch models mid-conversation. The new model will use the conversation history as context, but its responses will reflect its own capabilities.
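If your deployment also exposes OpenWebUI's OpenAI-compatible API, the same choice the dropdown makes is expressed per request through the `model` field. A sketch under assumptions: the base URL and model IDs below are placeholders, and the endpoint path may differ in your setup, so check with your admin before relying on it:

```python
import json

# Placeholder values -- substitute your deployment's URL, key, and model IDs.
BASE_URL = "https://securai.example.com"          # hypothetical
ENDPOINT = f"{BASE_URL}/api/chat/completions"     # OpenAI-compatible path; verify with your admin

def build_request(model_id: str, prompt: str) -> dict:
    """Assemble the JSON body; the model field is what the UI dropdown sets."""
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_request("fast-model", "What is the OEM oil filter for a 2023 Toyota Camry 2.5L?")
# A real call would look like:
# requests.post(ENDPOINT, headers={"Authorization": "Bearer <API_KEY>"}, json=body)
print(json.dumps(body, indent=2))
```

Switching models mid-conversation corresponds to changing `model` on the next request while resending the same `messages` history.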
Choosing the Right Model by Task
Parts Lookup (Single Part, Known Vehicle)
Recommended tier: Fast
A straightforward lookup — you know the vehicle, you know the part, you just need the number.
What is the OEM oil filter for a 2023 Toyota Camry 2.5L?
The fast model handles catalog lookups well because the answer is a direct data retrieval, not a reasoning chain.
Cross-Referencing and Compatibility Checks
Recommended tier: Balanced
When you need the model to compare across brands or verify fitment across applications, the balanced tier gives you the reasoning depth to catch edge cases.
VIN: 1HGCV1F34PA123456
I need front brake pads. Show me the OEM part and cross-references
from Akebono, Bosch, and Wagner. Flag any fitment differences.
Complex Diagnostics and Multi-Step Analysis
Recommended tier: Advanced
Use the advanced model when the question requires chaining multiple pieces of information, analyzing uploaded documents, or reasoning about symptoms across vehicle systems.
I have a 2021 Ford F-150 3.5L EcoBoost with intermittent misfires on
cylinders 1 and 4 after cold starts. I've already replaced the spark
plugs and coil packs. What else should I check, and what parts would
I need for each possibility?
Uploading and Analyzing Documents
Recommended tier: Balanced or Advanced (depending on document complexity)
- Simple documents (parts lists, single-page invoices): Balanced is sufficient.
- Complex documents (multi-page catalogs, technical service bulletins, warranty claims with cross-references): Use Advanced.
[Upload: 15-page parts catalog PDF]
Find all brake components in this catalog that fit a 2022 RAM 1500
5.7L Hemi 4WD. List them with page numbers.
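The balanced-vs-advanced rule of thumb above can be written as a tiny routing helper. The page-count threshold here is an illustrative assumption, not a product setting:

```python
def tier_for_document(pages: int, has_cross_references: bool = False) -> str:
    """Pick a tier by rough document complexity.

    Illustrative thresholds only: short parts lists and single-page
    invoices go to the balanced tier; multi-page catalogs, TSBs, and
    anything with cross-references (e.g. warranty claims) go to advanced.
    """
    if pages > 5 or has_cross_references:
        return "advanced"
    return "balanced"
```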
Summarizing Conversation History
Recommended tier: Fast or Balanced
If you want to summarize what you discussed in a long conversation (useful before handing off to a colleague), even the fast model does this well.
Summarize this conversation. List every part number mentioned,
the vehicle it was for, and whether we confirmed fitment.
Comparing Model Responses
OpenWebUI's model comparison feature lets you send the same prompt to multiple models side by side. This is useful when:
- You want to verify a critical part recommendation by checking it against a second model.
- You are evaluating which model tier gives acceptable results for a recurring task.
- You want to see speed and quality tradeoffs firsthand.
To use comparison mode:
- Select multiple models from the model selector (if your admin has enabled this feature).
- Type your prompt and send it.
- Review the responses side by side. Each response is labeled with its model name.
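Outside the UI, the same side-by-side check can be scripted: fan one prompt out to several models and label each answer with its model name. The `ask` callable below is a stand-in for whatever client your deployment uses; the stub is purely illustrative:

```python
from typing import Callable

def compare_models(models: list[str], prompt: str,
                   ask: Callable[[str, str], str]) -> dict[str, str]:
    """Send one prompt to several models; return answers keyed by model name."""
    return {model: ask(model, prompt) for model in models}

# Stub client for illustration; a real one would call your deployment's API.
def fake_ask(model: str, prompt: str) -> str:
    return f"[{model}] answer to: {prompt}"

results = compare_models(["fast-model", "advanced-model"],
                         "Cross-reference these brake pads.", fake_ask)
for model, answer in results.items():
    print(model, "->", answer)
```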
Tips for Getting the Most Out of Model Selection
- Start fast, escalate if needed. Try the fast model first. If the answer seems incomplete or the reasoning is weak, resend the same prompt to a balanced or advanced model.
- Stick with one model per conversation thread for consistency. Switching models mid-thread works but can sometimes produce slightly inconsistent follow-up responses because each model interprets conversation context differently.
- Check with your admin about cost. Some organizations set per-user or per-team budgets. Using advanced models for every query may hit limits faster.
- Use comparison mode for high-stakes parts orders. If you are ordering expensive or safety-critical parts, the few extra seconds to compare models is worth it.
- Tell the model what you need. If you need a fast, short answer, say so: "Give me just the part number, no explanation." This helps even advanced models respond quickly.
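When you script against the API, the "start fast, escalate if needed" tip can be automated: try tiers cheapest-first and stop at the first answer that passes a completeness check. The length-based check below is deliberately naive and just an assumption for the sketch; both callables are stand-ins for your own client and acceptance logic:

```python
from typing import Callable

TIER_ORDER = ["fast", "balanced", "advanced"]  # cheapest first

def ask_with_escalation(prompt: str, ask: Callable[[str, str], str],
                        looks_complete: Callable[[str], bool]) -> tuple[str, str]:
    """Try tiers cheapest-first; return (tier, answer) for the first acceptable answer."""
    answer = ""
    for tier in TIER_ORDER:
        answer = ask(tier, prompt)
        if looks_complete(answer):
            return tier, answer
    # Nothing passed the check: keep the most capable tier's answer.
    return TIER_ORDER[-1], answer

# Illustrative stubs: the fast tier "fails" the naive length check.
def stub_ask(tier: str, prompt: str) -> str:
    return "short" if tier == "fast" else f"{tier}: detailed fitment answer for {prompt}"

tier, answer = ask_with_escalation("brake pads for a 2021 F-150", stub_ask,
                                   looks_complete=lambda a: len(a) > 20)
print(tier, "->", answer)
```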
Common Questions
"Which model is the most accurate?"
Generally, larger advanced models produce the most accurate and detailed responses. But for simple factual lookups (part numbers, specifications), the accuracy difference between tiers is small. The gap widens for complex reasoning tasks.
"Does switching models lose my conversation context?"
No. OpenWebUI preserves the conversation history when you switch models. The new model reads the full thread. However, each model may interpret that history slightly differently.
"Can I set a default model so I do not have to choose every time?"
Yes. Go to your profile settings in OpenWebUI and set your preferred default model. You can still override it per conversation. Your admin may also set an organization-wide default.
"Why is the advanced model so much slower?"
Larger models have more parameters, so generating each token of a response takes more computation. The tradeoff is deeper, higher-quality reasoning at the cost of latency. For time-sensitive counter work, use the fast tier and reserve the advanced tier for research and diagnostics.
Related Topics
- Connecting an external LLM provider (admin)
- Configuring model parameters (admin)
- Setting a default model for your organization (admin)
- Starting a new conversation
- Multi-model conversations