AgentHeaven connects to LLM providers through LiteLLM, supporting a wide range of providers out of the box. See LLM for the core LLM abstraction and Quick Setup for provider configuration.

## Supported Providers

| Provider | Backend prefix | Chat | Embedding | Local | Notes |
|---|---|:-:|:-:|:-:|---|
| OpenRouter | `openrouter/` | ✓ | | | Default provider — routes to most major models |
| OpenAI | `openai/` | ✓ | ✓ | | GPT-5.4, text-embedding-3-small |
| Anthropic | `anthropic/` | ✓ | | | Claude Opus 4.6, Claude Sonnet 4.6 |
| Google Gemini | `gemini/` | ✓ | ✓ | | Gemini 3.0 Flash, gemini-embedding-001 |
| DeepSeek | `deepseek/` | ✓ | | | DeepSeek V3, DeepSeek Reasoner |
| xAI | `xai/` | ✓ | | | Grok 4.20 |
| Z.AI | `zai/` | ✓ | | | GLM-4.7, GLM-5 |
| Moonshot | `moonshot/` | ✓ | | | Kimi K2.5 |
| Voyage | `voyage/` | | ✓ | | voyage-4 (dedicated embedding provider) |
| Ollama | `ollama/` | ✓ | | ✓ | Any model from the Ollama library |
| LM Studio | `lm_studio/` | ✓ | | ✓ | Local GUI for running models |
| vLLM | `hosted_vllm/` | ✓ | | ✓ | High-throughput local serving |
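As a concrete illustration, a local provider from the table can be registered with the `ahvn cfg set` commands documented below. The provider name `local` and the endpoint are illustrative assumptions; `11434` is Ollama's default port.

```shell
# Hypothetical example: register a local Ollama endpoint under the
# provider name "local" (name and endpoint are illustrative).
ahvn cfg set llm.providers.local.backend ollama/
ahvn cfg set llm.providers.local.api_base "http://localhost:11434"
# Local servers typically accept any placeholder key.
ahvn cfg set llm.providers.local.api_key "none"
```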

## Adding a Custom Provider

Any provider supported by LiteLLM can be configured:
```shell
ahvn cfg set llm.providers.<name>.backend <litellm_prefix>
ahvn cfg set llm.providers.<name>.api_base "https://your-endpoint/v1"
ahvn cfg set llm.providers.<name>.api_key "..."
```
See LiteLLM Providers for the full list of supported backends and their required configuration.
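LiteLLM routes requests based on the prefix of the model string, so a configured backend prefix is, in effect, prepended to the model name before the call is dispatched. A minimal sketch of that mapping (the helper function is hypothetical, not part of AgentHeaven or LiteLLM):

```python
def litellm_model_string(backend_prefix: str, model: str) -> str:
    """Join a provider's backend prefix and a model name into the
    'prefix/model' form that LiteLLM uses for routing."""
    return backend_prefix.rstrip("/") + "/" + model

# A model served through OpenRouter keeps its own vendor path,
# nested under the openrouter/ routing prefix:
print(litellm_model_string("openrouter/", "deepseek/deepseek-chat"))
# -> openrouter/deepseek/deepseek-chat
```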

## Further Exploration

Related:

- Quick Setup — configure provider API keys and presets
- LLM — the core LLM abstraction
- Embeddings — embedding-specific provider setup