SecureAI runs on OpenWebUI, which supports connecting to multiple LLM providers simultaneously. This guide walks you through adding OpenAI, Anthropic, Azure OpenAI, and local model providers so your team has access to the right models for their work.
Prerequisites
Before you begin, ensure you have:
- Admin access to your SecureAI instance.
- API credentials for the provider(s) you want to add (API key, endpoint URL, etc.).
- An understanding of your organization's model usage policy — which providers are approved, budget limits, and any data residency requirements.
Supported Provider Types
SecureAI supports four categories of model providers:
| Provider Type | Examples | Connection Method | Data Leaves Your Network? |
|---|---|---|---|
| OpenAI-compatible | OpenAI, Groq, Together AI, Fireworks | API key + endpoint | Yes (sent to provider) |
| Anthropic | Claude models via Anthropic API | API key + endpoint | Yes (sent to Anthropic) |
| Azure OpenAI | GPT models via Azure | API key + Azure endpoint + deployment name | Yes (sent to your Azure tenant) |
| Local / self-hosted | Ollama, vLLM, llama.cpp, LocalAI | Endpoint URL (no API key needed if on same network) | No (stays on your infrastructure) |
Data residency note: For organizations with strict data policies, local models or Azure OpenAI (deployed in your own tenant) keep all data within your controlled infrastructure.
Adding an OpenAI-Compatible Provider
This covers OpenAI itself and any provider that uses the OpenAI API format (Groq, Together AI, Fireworks, Mistral, etc.).
Step 1: Get Your API Key
- Go to your provider's dashboard (e.g., platform.openai.com for OpenAI).
- Navigate to API Keys and generate a new key.
- Copy the key immediately — most providers only show it once.
Security: Store API keys securely. Never share them in chat, email, or commit them to version control.
Step 2: Configure the Provider in SecureAI
- Log in to SecureAI as an administrator.
- Navigate to Admin Panel > Settings > Connections.
- Under OpenAI API, click Add Connection (or edit the existing one).
- Fill in the following fields:
| Field | Value | Notes |
|---|---|---|
| API Base URL | https://api.openai.com/v1 | Change for non-OpenAI providers (see table below) |
| API Key | Your API key | Stored encrypted in SecureAI's database |
- Click Save.
Common OpenAI-Compatible Endpoints
| Provider | API Base URL |
|---|---|
| OpenAI | https://api.openai.com/v1 |
| Groq | https://api.groq.com/openai/v1 |
| Together AI | https://api.together.xyz/v1 |
| Fireworks | https://api.fireworks.ai/inference/v1 |
| Mistral | https://api.mistral.ai/v1 |
Step 3: Verify the Connection
- After saving, go to Admin Panel > Settings > Connections.
- Click Verify next to your new connection.
- SecureAI will attempt to list available models from the provider.
- If successful, you will see the available models listed. If not, check the troubleshooting section below.
Adding Anthropic as a Provider
Anthropic's Claude models use a different API format than OpenAI.
Step 1: Get Your API Key
- Go to console.anthropic.com.
- Navigate to API Keys and create a new key.
- Copy the key.
Step 2: Configure in SecureAI
- Navigate to Admin Panel > Settings > Connections.
- Under Anthropic API, click Add Connection.
- Fill in:
| Field | Value |
|---|---|
| API Base URL | https://api.anthropic.com |
| API Key | Your Anthropic API key |
- Click Save.
Step 3: Verify
Click Verify next to the Anthropic connection. You should see available Claude models (e.g., Claude Sonnet, Claude Opus, Claude Haiku).
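To check an Anthropic key from the command line, you can call Anthropic's models endpoint directly. A minimal sketch, assuming your key is in the ANTHROPIC_API_KEY environment variable:

```shell
# Minimal sketch: confirm an Anthropic API key by listing models.
# ANTHROPIC_API_KEY is a placeholder for your own key.
BASE_URL="https://api.anthropic.com"

# Anthropic uses an x-api-key header plus a required anthropic-version header:
curl -sf "$BASE_URL/v1/models" \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" || echo "connection failed"
```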
Adding Azure OpenAI
Azure OpenAI requires a few additional configuration details because models are deployed to your own Azure tenant.
Step 1: Gather Your Azure Credentials
From the Azure Portal, you need:
| Credential | Where to Find It |
|---|---|
| Endpoint URL | Azure Portal > your OpenAI resource > Keys and Endpoint (e.g., https://your-resource.openai.azure.com/) |
| API Key | Same page — Key 1 or Key 2 |
| Deployment Name | Azure Portal > your OpenAI resource > Model Deployments |
| API Version | Use the latest stable version (e.g., 2024-06-01) |
Step 2: Configure in SecureAI
- Navigate to Admin Panel > Settings > Connections.
- Under OpenAI API, click Add Connection.
- Fill in:
| Field | Value |
|---|---|
| API Base URL | https://your-resource.openai.azure.com/openai/deployments/your-deployment-name/ |
| API Key | Your Azure OpenAI key |
Important: The API Base URL for Azure includes your resource name and deployment name. Replace your-resource and your-deployment-name with your actual values.
- Click Save.
Step 3: Verify
Click Verify to confirm SecureAI can reach your Azure deployment. If using private endpoints or VNet restrictions, ensure your SecureAI instance has network access to the Azure resource.
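If Verify fails, it can help to test the Azure deployment URL by hand. The sketch below assembles the standard Azure OpenAI request URL; the resource name, deployment name, API version, and AZURE_OPENAI_KEY variable are all placeholders for your own values:

```shell
# Minimal sketch of the Azure OpenAI URL format; resource, deployment,
# and API version are placeholders for your own values.
RESOURCE="your-resource"
DEPLOYMENT="your-deployment-name"
API_VERSION="2024-06-01"

BASE_URL="https://${RESOURCE}.openai.azure.com/openai/deployments/${DEPLOYMENT}"

# Azure authenticates with an api-key header rather than a Bearer token:
curl -sf "$BASE_URL/chat/completions?api-version=$API_VERSION" \
  -H "api-key: $AZURE_OPENAI_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"ping"}]}' || echo "connection failed"
```

A 404 from this request usually means the deployment name in the URL does not match an active deployment in the Azure Portal.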
Adding Local or Self-Hosted Models
Local models run on your own infrastructure — no API key needed, and no data leaves your network. This is the preferred option for organizations with strict data residency requirements.
Option A: Ollama
Ollama is the most common way to run local models with OpenWebUI.
- Install Ollama on a server accessible from your SecureAI instance.
- Pull the models you want:

```shell
ollama pull llama3.1
ollama pull mistral
ollama pull codellama
```

- Ensure Ollama is running and accessible (default: http://localhost:11434).
- In SecureAI, navigate to Admin Panel > Settings > Connections.
- Under Ollama API, configure:
| Field | Value |
|---|---|
| API Base URL | http://your-ollama-host:11434 |
- Click Save and Verify.
Network note: If Ollama runs on a different host than SecureAI, ensure the firewall allows traffic on port 11434 and that Ollama is configured to accept external connections (OLLAMA_HOST=0.0.0.0).
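The remote-host setup above can be sketched as follows; the hostname is a placeholder for your own Ollama server:

```shell
# On the Ollama host: listen on all interfaces before starting the server
# (OLLAMA_HOST defaults to localhost-only; default port is 11434):
#   OLLAMA_HOST=0.0.0.0 ollama serve
OLLAMA_URL="http://your-ollama-host:11434"

# From the SecureAI host, /api/tags lists the models Ollama has pulled:
curl -sf "$OLLAMA_URL/api/tags" || echo "Ollama unreachable"
```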
Option B: vLLM or Other OpenAI-Compatible Servers
If you are running vLLM, llama.cpp server, or LocalAI:
- Start your model server and note the endpoint (e.g., http://your-server:8000/v1).
- In SecureAI, navigate to Admin Panel > Settings > Connections.
- Under OpenAI API, add a connection with:
| Field | Value |
|---|---|
| API Base URL | http://your-server:8000/v1 |
| API Key | Leave blank or enter a placeholder if required |
- Click Save and Verify.
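As a sketch of this setup with vLLM's OpenAI-compatible server (the model name and hostname below are placeholders):

```shell
# On the model host, start vLLM's OpenAI-compatible server:
#   python -m vllm.entrypoints.openai.api_server \
#     --model meta-llama/Llama-3.1-8B-Instruct --port 8000
SERVER_URL="http://your-server:8000/v1"

# vLLM exposes the same /models endpoint SecureAI queries on Verify:
curl -sf "$SERVER_URL/models" || echo "server unreachable"
```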
Managing Multiple Providers
You can connect multiple providers simultaneously. SecureAI aggregates models from all configured providers into a single model selector for users.
Setting Model Visibility
Not every model needs to be visible to every user. After adding providers:
- Navigate to Admin Panel > Settings > Models.
- You will see all available models from all connected providers.
- Use the Enabled toggle to show or hide models from the user-facing model selector.
- Optionally, set a Display Name to give models user-friendly names (e.g., "Fast Model" instead of "gpt-4o-mini").
Setting a Default Model
- Navigate to Admin Panel > Settings > Models.
- Select a model and click Set as Default.
- This model will be pre-selected for new conversations across your organization.
Choose a model that balances cost and capability for typical use cases. See the Model Comparison and Selection Guide for guidance.
Cost Considerations
| Provider Type | Billing Model |
|---|---|
| OpenAI / Anthropic | Per-token (input + output) |
| Azure OpenAI | Per-token or provisioned throughput |
| Local models | Infrastructure cost only (no per-token charges) |
Monitor usage in Admin Panel > Dashboard > Usage Statistics to track per-provider costs.
API Key Rotation
API keys should be rotated periodically for security.
- Generate a new key in your provider's dashboard.
- In SecureAI, navigate to Admin Panel > Settings > Connections.
- Update the API key field with the new key.
- Click Save and Verify to confirm the new key works.
- Revoke the old key in your provider's dashboard.
Tip: Schedule key rotation quarterly or according to your organization's security policy. Rotating keys does not interrupt active user sessions — the new key takes effect on the next API call.
Troubleshooting
"Connection failed" when verifying a provider
| Possible Cause | Resolution |
|---|---|
| Incorrect API key | Double-check the key in your provider's dashboard. Regenerate if necessary. |
| Wrong API Base URL | Verify the URL matches the provider's documentation. Check for trailing slashes. |
| Network restrictions | Ensure your SecureAI server can reach the provider's endpoint (check firewall, proxy, DNS). |
| Provider outage | Check the provider's status page (e.g., status.openai.com). |
Models not appearing after adding a provider
- Verify the connection shows a green status in Connections.
- Refresh the page.
- Check that models are not disabled in Admin Panel > Settings > Models.
- For Azure OpenAI, confirm the deployment name in the API Base URL matches an active deployment.
"Unauthorized" or "Invalid API key" errors
- Confirm the API key has not been revoked or expired.
- For Azure, ensure you are using the correct key (Key 1 or Key 2) and that the resource is active.
- For Anthropic, verify your account has an active billing plan.
Local model connection times out
- Verify the model server is running (for Ollama: curl http://your-host:11434/api/tags).
- Check that the SecureAI server can reach the model server on the specified port.
- If using Docker, ensure containers are on the same network or the port is exposed.
Slow responses from a provider
- Check the provider's status page for degraded performance.
- For local models, verify the host has sufficient GPU/CPU resources. Monitor utilization during requests.
- Consider switching users to a faster provider for time-sensitive tasks.