
Adding Custom Model Providers


SecureAI runs on OpenWebUI, which supports connecting to multiple LLM providers simultaneously. This guide walks you through adding OpenAI, Anthropic, Azure OpenAI, and local model providers so your team has access to the right models for their work.

Prerequisites

Before you begin, ensure you have:

  1. Administrator access to your SecureAI instance.
  2. An API key for each cloud provider you plan to add (or an endpoint URL for local models).
  3. Network access from the SecureAI server to each provider's endpoint.

Supported Provider Types

SecureAI supports four categories of model providers:

| Provider Type | Examples | Connection Method | Data Leaves Your Network? |
|---|---|---|---|
| OpenAI-compatible | OpenAI, Groq, Together AI, Fireworks | API key + endpoint | Yes (sent to provider) |
| Anthropic | Claude models via Anthropic API | API key + endpoint | Yes (sent to Anthropic) |
| Azure OpenAI | GPT models via Azure | API key + Azure endpoint + deployment name | Yes (sent to your Azure tenant) |
| Local / self-hosted | Ollama, vLLM, llama.cpp, LocalAI | Endpoint URL (no API key needed if on same network) | No (stays on your infrastructure) |

Data residency note: For organizations with strict data policies, local models or Azure OpenAI (deployed in your own tenant) keep all data within your controlled infrastructure.

Adding an OpenAI-Compatible Provider

This covers OpenAI itself and any provider that uses the OpenAI API format (Groq, Together AI, Fireworks, Mistral, etc.).

Step 1: Get Your API Key

  1. Go to your provider's dashboard (e.g., platform.openai.com for OpenAI).
  2. Navigate to API Keys and generate a new key.
  3. Copy the key immediately — most providers only show it once.

Security: Store API keys securely. Never share them in chat, email, or commit them to version control.
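
One low-tech way to follow this advice, sketched below: keep the key in a file only your user can read and load it into an environment variable, rather than pasting the literal key into scripts or config files. The `~/.secureai/openai.key` path is an arbitrary example, not a SecureAI convention.

```shell
#!/bin/sh
# Store the key in a user-only file (path is an example, not a SecureAI
# convention), then load it into the environment instead of hard-coding it.
mkdir -p "$HOME/.secureai"
printf '%s\n' "sk-example-not-a-real-key" > "$HOME/.secureai/openai.key"
chmod 600 "$HOME/.secureai/openai.key"   # readable/writable by this user only

# Typically done in ~/.profile so scripts inherit the variable:
OPENAI_API_KEY="$(cat "$HOME/.secureai/openai.key")"
export OPENAI_API_KEY
echo "loaded key of length ${#OPENAI_API_KEY}"
```

Scripts can then reference `$OPENAI_API_KEY` without the key ever appearing in version control or shell history.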

Step 2: Configure the Provider in SecureAI

  1. Log in to SecureAI as an administrator.
  2. Navigate to Admin Panel > Settings > Connections.
  3. Under OpenAI API, click Add Connection (or edit the existing one).
  4. Fill in the following fields:

| Field | Value | Notes |
|---|---|---|
| API Base URL | https://api.openai.com/v1 | Change for non-OpenAI providers (see table below) |
| API Key | Your API key | Stored encrypted in SecureAI's database |

  5. Click Save.

Common OpenAI-Compatible Endpoints

| Provider | API Base URL |
|---|---|
| OpenAI | https://api.openai.com/v1 |
| Groq | https://api.groq.com/openai/v1 |
| Together AI | https://api.together.xyz/v1 |
| Fireworks | https://api.fireworks.ai/inference/v1 |
| Mistral | https://api.mistral.ai/v1 |

Step 3: Verify the Connection

  1. After saving, go to Admin Panel > Settings > Connections.
  2. Click Verify next to your new connection.
  3. SecureAI will attempt to list available models from the provider.
  4. If successful, you will see the available models listed. If not, check the troubleshooting section below.
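
The verification works by requesting the provider's model list: OpenAI-compatible providers expose a `GET <base-url>/models` endpoint. If Verify fails, you can reproduce the check by hand with curl. A small sketch of the URL construction (note the trailing-slash handling, a common cause of failed verifications):

```shell
#!/bin/sh
# Build the model-listing URL that a verification request hits.
# Trims one trailing slash so ".../v1/" does not become ".../v1//models".
verify_url() {
  base="${1%/}"
  printf '%s/models' "$base"
}

# Manual check (requires a real key; not run here):
#   curl -s -H "Authorization: Bearer $PROVIDER_API_KEY" "$(verify_url https://api.openai.com/v1)"
verify_url "https://api.groq.com/openai/v1/"
echo
```

A 200 response containing a JSON `data` array of models means the endpoint and key are both valid.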

Adding Anthropic as a Provider

Anthropic's Claude models use a different API format than OpenAI.

Step 1: Get Your API Key

  1. Go to console.anthropic.com.
  2. Navigate to API Keys and create a new key.
  3. Copy the key.

Step 2: Configure in SecureAI

  1. Navigate to Admin Panel > Settings > Connections.
  2. Under Anthropic API, click Add Connection.
  3. Fill in:

| Field | Value |
|---|---|
| API Base URL | https://api.anthropic.com |
| API Key | Your Anthropic API key |

  4. Click Save.

Step 3: Verify

Click Verify next to the Anthropic connection. You should see available Claude models (e.g., Claude Sonnet, Claude Opus, Claude Haiku).
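
If verification fails, you can test the key outside SecureAI. Anthropic's API authenticates with an `x-api-key` header plus a required `anthropic-version` header, unlike the `Authorization: Bearer` scheme used by OpenAI-compatible providers (the version date below is an example). A sketch that assembles the check without sending it:

```shell
#!/bin/sh
# Compose (but do not send) a model-listing request against Anthropic's API.
# Anthropic uses x-api-key rather than the Authorization: Bearer scheme.
API_KEY="${ANTHROPIC_API_KEY:-sk-ant-placeholder}"   # placeholder fallback
CHECK="curl -s https://api.anthropic.com/v1/models \
  -H \"x-api-key: ${API_KEY}\" \
  -H \"anthropic-version: 2023-06-01\""
echo "$CHECK"
```

Run the printed command with a real key in `ANTHROPIC_API_KEY`; a JSON list of Claude models confirms the key works.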

Adding Azure OpenAI

Azure OpenAI requires a few additional configuration details because models are deployed to your own Azure tenant.

Step 1: Gather Your Azure Credentials

From the Azure Portal, you need:

| Credential | Where to Find It |
|---|---|
| Endpoint URL | Azure Portal > your OpenAI resource > Keys and Endpoint (e.g., https://your-resource.openai.azure.com/) |
| API Key | Same page — Key 1 or Key 2 |
| Deployment Name | Azure Portal > your OpenAI resource > Model Deployments |
| API Version | Use the latest stable version (e.g., 2024-06-01) |

Step 2: Configure in SecureAI

  1. Navigate to Admin Panel > Settings > Connections.
  2. Under OpenAI API, click Add Connection.
  3. Fill in:

| Field | Value |
|---|---|
| API Base URL | https://your-resource.openai.azure.com/openai/deployments/your-deployment-name/ |
| API Key | Your Azure OpenAI key |

Important: The API Base URL for Azure includes your resource name and deployment name. Replace your-resource and your-deployment-name with your actual values.

  4. Click Save.

Step 3: Verify

Click Verify to confirm SecureAI can reach your Azure deployment. If using private endpoints or VNet restrictions, ensure your SecureAI instance has network access to the Azure resource.
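
The Azure base URL is easy to get wrong because it is assembled from two separate values. A sketch of the assembly ("contoso-ai" and "gpt-4o-prod" are placeholder names); remember that requests to Azure also carry the api-version value gathered in Step 1:

```shell
#!/bin/sh
# Assemble the API Base URL for an Azure OpenAI deployment.
# "contoso-ai" and "gpt-4o-prod" are placeholder names -- use your own.
azure_base_url() {
  resource="$1"
  deployment="$2"
  printf 'https://%s.openai.azure.com/openai/deployments/%s/' \
    "$resource" "$deployment"
}

azure_base_url "contoso-ai" "gpt-4o-prod"
echo
```

Comparing the function's output against the value pasted into SecureAI is a quick way to catch a typo in the resource or deployment name.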

Adding Local or Self-Hosted Models

Local models run on your own infrastructure — no API key needed, and no data leaves your network. This is the preferred option for organizations with strict data residency requirements.

Option A: Ollama

Ollama is the most common way to run local models with OpenWebUI.

  1. Install Ollama on a server accessible from your SecureAI instance.
  2. Pull the models you want:
    ollama pull llama3.1
    ollama pull mistral
    ollama pull codellama
    
  3. Ensure Ollama is running and accessible (default: http://localhost:11434).
  4. In SecureAI, navigate to Admin Panel > Settings > Connections.
  5. Under Ollama API, configure:

| Field | Value |
|---|---|
| API Base URL | http://your-ollama-host:11434 |

  6. Click Save and Verify.

Network note: If Ollama runs on a different host than SecureAI, ensure the firewall allows traffic on port 11434 and Ollama is configured to accept external connections (OLLAMA_HOST=0.0.0.0).
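
A quick way to test reachability from the SecureAI host is to request Ollama's model-listing endpoint, `/api/tags`. A sketch (the hostname is the same placeholder used above; point `OLLAMA_URL` at your real address):

```shell
#!/bin/sh
# Probe a (placeholder) Ollama host; /api/tags lists the pulled models.
HOST="${OLLAMA_URL:-http://your-ollama-host:11434}"
if curl -sf --max-time 5 "$HOST/api/tags" > /dev/null 2>&1; then
  STATUS="reachable"
else
  # Common causes: firewall blocking 11434, or Ollama bound to localhost
  # only (fix with OLLAMA_HOST=0.0.0.0 on the Ollama server).
  STATUS="unreachable"
fi
echo "Ollama $STATUS at $HOST"
```

If the probe succeeds but Verify in SecureAI still fails, the problem is between the SecureAI server and the Ollama host, not with Ollama itself — run the probe from the SecureAI machine.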

Option B: vLLM or Other OpenAI-Compatible Servers

If you are running vLLM, llama.cpp server, or LocalAI:

  1. Start your model server and note the endpoint (e.g., http://your-server:8000/v1).
  2. In SecureAI, navigate to Admin Panel > Settings > Connections.
  3. Under OpenAI API, add a connection with:

| Field | Value |
|---|---|
| API Base URL | http://your-server:8000/v1 |
| API Key | Leave blank or enter a placeholder if required |

  4. Click Save and Verify.
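
For reference, typical launch commands for these servers are sketched below (the model names, file path, and port are illustrative placeholders, not recommendations). They are echoed rather than executed so the sketch stays self-contained:

```shell
#!/bin/sh
# Illustrative launch commands (not run here): model names, paths, and
# ports are placeholders -- match them to your hardware and models.
VLLM_CMD='vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000'
LLAMACPP_CMD='llama-server -m ./models/llama-3.1-8b-instruct.gguf --port 8000'

echo "$VLLM_CMD"      # vLLM serves an OpenAI-compatible API under /v1
echo "$LLAMACPP_CMD"  # llama.cpp's bundled server also speaks the /v1 format
```

Whatever port you choose at launch is the one that goes into the API Base URL above.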

Managing Multiple Providers

You can connect multiple providers simultaneously. SecureAI aggregates models from all configured providers into a single model selector for users.

Setting Model Visibility

Not every model needs to be visible to every user. After adding providers:

  1. Navigate to Admin Panel > Settings > Models.
  2. You will see all available models from all connected providers.
  3. Use the Enabled toggle to show or hide models from the user-facing model selector.
  4. Optionally, set a Display Name to give models user-friendly names (e.g., "Fast Model" instead of "gpt-4o-mini").

Setting a Default Model

  1. Navigate to Admin Panel > Settings > Models.
  2. Select a model and click Set as Default.
  3. This model will be pre-selected for new conversations across your organization.

Choose a model that balances cost and capability for typical use cases. See the Model Comparison and Selection Guide for guidance.

Cost Considerations

| Provider Type | Billing Model |
|---|---|
| OpenAI / Anthropic | Per-token (input + output) |
| Azure OpenAI | Per-token or provisioned throughput |
| Local models | Infrastructure cost only (no per-token charges) |

Monitor usage in Admin Panel > Dashboard > Usage Statistics to track per-provider costs.

API Key Rotation

API keys should be rotated periodically for security.

  1. Generate a new key in your provider's dashboard.
  2. In SecureAI, navigate to Admin Panel > Settings > Connections.
  3. Update the API key field with the new key.
  4. Click Save and Verify to confirm the new key works.
  5. Revoke the old key in your provider's dashboard.

Tip: Schedule key rotation quarterly or according to your organization's security policy. Rotating keys does not interrupt active user sessions — the new key takes effect on the next API call.
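
When comparing keys during a rotation (or pasting evidence into a ticket), avoid exposing the full value. A small helper sketch that masks all but the last four characters (the helper is ours for illustration, not a SecureAI feature):

```shell
#!/bin/sh
# mask_key: print a key with everything but the last 4 characters hidden,
# safe for logs and tickets during rotation.
mask_key() {
  printf '%s' "$1" | awk '{ printf "****%s\n", substr($0, length($0) - 3) }'
}

mask_key "sk-abcd1234wxyz"   # prints ****wxyz
```

The visible suffix is usually enough to tell the old and new keys apart without revealing either.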

Troubleshooting

"Connection failed" when verifying a provider

| Possible Cause | Resolution |
|---|---|
| Incorrect API key | Double-check the key in your provider's dashboard. Regenerate if necessary. |
| Wrong API Base URL | Verify the URL matches the provider's documentation. Check for trailing slashes. |
| Network restrictions | Ensure your SecureAI server can reach the provider's endpoint (check firewall, proxy, DNS). |
| Provider outage | Check the provider's status page (e.g., status.openai.com). |

Models not appearing after adding a provider

  1. Verify the connection shows a green status in Connections.
  2. Refresh the page.
  3. Check that models are not disabled in Admin Panel > Settings > Models.
  4. For Azure OpenAI, confirm the deployment name in the API Base URL matches an active deployment.

"Unauthorized" or "Invalid API key" errors

Local model connection times out

Slow responses from a provider
