## Universal Model Support

Model-agnostic through LiteLLM: use any supported provider.
### Supported Providers
```python
# OpenAI
model_config = {"provider": "openai", "model": "gpt-4o"}

# Anthropic
model_config = {"provider": "anthropic", "model": "claude-3-5-sonnet-20241022"}

# Google Gemini
model_config = {"provider": "google", "model": "gemini-2.0-flash-exp"}

# Groq (ultra-fast inference)
model_config = {"provider": "groq", "model": "llama-3.1-8b-instant"}

# DeepSeek
model_config = {"provider": "deepseek", "model": "deepseek-chat"}

# Mistral AI
model_config = {"provider": "mistral", "model": "mistral-7b-instruct"}

# Azure OpenAI
model_config = {"provider": "azure_openai", "model": "gpt-4o"}

# OpenRouter (200+ models)
model_config = {"provider": "openrouter", "model": "anthropic/claude-3.5-sonnet"}

# Ollama (local)
model_config = {"provider": "ollama", "model": "llama3.1:8b", "ollama_host": "http://localhost:11434"}
```
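For context, here is a minimal sketch of what routing looks like when calling LiteLLM directly. The `"<provider>/<model>"` prefix convention shown is standard LiteLLM behavior; how this library wires `model_config` into LiteLLM internally is an assumption, and some providers use different prefixes (e.g. `gemini/` for Google, `azure/` for Azure OpenAI), so a real mapping layer would translate those names:

```python
import litellm

# Sketch: route a config through LiteLLM's "<provider>/<model>" prefix.
# Note: a few providers need a different prefix than their config name
# (e.g. "google" -> "gemini/"), so this naive f-string is illustrative.
model_config = {"provider": "groq", "model": "llama-3.1-8b-instant"}

response = litellm.completion(
    model=f"{model_config['provider']}/{model_config['model']}",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```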
### Model Configuration Options
```python
model_config = {
    "provider": "openai",
    "model": "gpt-4o",
    "temperature": 0.7,  # sampling temperature
    "max_tokens": 2000,  # cap on generated tokens
    "top_p": 0.95,       # nucleus sampling cutoff
}
```
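Because `temperature`, `max_tokens`, and `top_p` are native `litellm.completion` keyword arguments, a config like the one above can be forwarded almost verbatim. The splitting helper below is a hypothetical sketch of that plumbing, not this library's actual internals:

```python
import litellm

model_config = {
    "provider": "openai",
    "model": "gpt-4o",
    "temperature": 0.7,
    "max_tokens": 2000,
    "top_p": 0.95,
}

# Separate routing keys from sampling parameters, then forward the rest.
# temperature, max_tokens, and top_p are real litellm.completion kwargs.
sampling = {k: v for k, v in model_config.items() if k not in ("provider", "model")}
response = litellm.completion(
    model=f"{model_config['provider']}/{model_config['model']}",
    messages=[{"role": "user", "content": "Summarize this in one line."}],
    **sampling,
)
```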
### Azure OpenAI Configuration
```python
model_config = {
    "provider": "azure_openai",  # matches the provider key listed above
    "model": "gpt-4",
    "azure_endpoint": "https://your-resource.openai.azure.com",
    "azure_api_version": "2024-02-01",
}
```
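For reference, LiteLLM's own Azure routing uses the `azure/<deployment-name>` prefix together with `api_base` and `api_version`. Assuming `azure_endpoint` and `azure_api_version` map onto those parameters, the underlying call would look like this sketch:

```python
import litellm

# Sketch of the underlying LiteLLM call (assumed mapping:
# azure_endpoint -> api_base, azure_api_version -> api_version).
response = litellm.completion(
    model="azure/gpt-4",  # "azure/<deployment-name>" routing
    api_base="https://your-resource.openai.azure.com",
    api_version="2024-02-01",
    messages=[{"role": "user", "content": "ping"}],
)
```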
### Ollama Configuration (Local Models)
```python
model_config = {
    "provider": "ollama",
    "model": "llama3.1:8b",
    "ollama_host": "http://localhost:11434",
}
```
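Likewise, the direct LiteLLM equivalent uses the `ollama/` prefix with `api_base` pointing at the local server (assuming `ollama_host` maps to `api_base`). No API key is needed for local models:

```python
import litellm

# Sketch of the underlying LiteLLM call for a local Ollama model
# (assumed mapping: ollama_host -> api_base). No API key required.
response = litellm.completion(
    model="ollama/llama3.1:8b",
    api_base="http://localhost:11434",
    messages=[{"role": "user", "content": "ping"}],
)
```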
### Environment Variables
```bash
# All providers use this single key
LLM_API_KEY=your_api_key
```
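If you ever need to hand the key to LiteLLM yourself rather than relying on the library to pick it up, `litellm.completion` accepts an explicit `api_key` argument. A minimal sketch (how this library maps `LLM_API_KEY` to provider-specific keys internally is an assumption):

```python
import os

import litellm

# Sketch: read the shared key and pass it to LiteLLM explicitly.
response = litellm.completion(
    model="openai/gpt-4o",
    api_key=os.environ["LLM_API_KEY"],
    messages=[{"role": "user", "content": "Hello!"}],
)
```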
### Provider Selection Guide
| Use Case | Recommended Provider |
|---|---|
| Complex reasoning | OpenAI (gpt-4o), Anthropic (claude-3.5) |
| Fast inference | Groq (llama-3.1-8b-instant) |
| Cost-effective | DeepSeek (deepseek-chat) |
| Privacy-sensitive | Ollama (runs locally) |
| Multi-model access | OpenRouter (200+ models) |
| Enterprise / Azure | Azure OpenAI |
Switch providers based on your needs: use cheaper models (Groq, DeepSeek) for simple tasks, powerful models (GPT-4o, Claude) for complex reasoning, and local models (Ollama) for privacy-sensitive applications, as in the sketch below.
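One simple way to encode that advice is a task-profile lookup table. The profile names and helper below are illustrative, not part of the library:

```python
# Hypothetical task-based routing table built from the guide above.
CONFIGS = {
    "reasoning": {"provider": "openai", "model": "gpt-4o"},
    "fast": {"provider": "groq", "model": "llama-3.1-8b-instant"},
    "cheap": {"provider": "deepseek", "model": "deepseek-chat"},
    "private": {
        "provider": "ollama",
        "model": "llama3.1:8b",
        "ollama_host": "http://localhost:11434",
    },
}

def config_for(task: str) -> dict:
    """Pick a model_config by task profile; fall back to the reasoning tier."""
    return CONFIGS.get(task, CONFIGS["reasoning"])
```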