Model choices and provider choices never appear in your code — only logical preset names.
AgentHeaven routes all LLM calls through LiteLLM, supporting any provider it covers. Rather than hard-coding model names or API keys in your code, you define presets — logical roles like sys, chat, reason, embedder — and configure each preset to point at whichever provider and model you want via Configuration.
```python
from ahvn.utils.llm import LLM

llm = LLM(preset="chat")       # uses whatever model/provider you configured for "chat"
answer = llm.oracle("Hello!")  # full response as a string
```
This design means your application code never changes when you swap from GPT to Claude to a local Ollama model — you just update the config.
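As a sketch of what that configuration might look like — the exact file format and keys live in Quick Setup; the schema below is illustrative, not AgentHeaven's actual format:

```yaml
# Hypothetical preset configuration -- see Quick Setup for the real schema.
presets:
  chat:
    provider: openrouter
    model: sonnet     # alias for claude-sonnet-4-6
  reason:
    provider: deepseek
    model: dsv3       # alias for deepseek/deepseek-v3.2
  local:
    provider: ollama
    model: llama3     # illustrative name for any locally served model
```

Swapping `chat` from Claude to a local Ollama model is then a one-line config change; the `LLM(preset="chat")` call site never moves.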

Core Concepts

| Concept | What it is | Example |
| --- | --- | --- |
| Preset | Logical role | `sys`, `chat`, `reason`, `embedder`, `coder`, `translator`, `tiny`, `local` |
| Provider | Backend connection | `openrouter`, `openai`, `anthropic`, `gemini`, `deepseek`, `ollama`, `lmstudio`, `vllm` |
| Model alias | Short name → full ID | `dsv3` → `deepseek/deepseek-v3.2`, `sonnet` → `claude-sonnet-4-6` |
| Backend | LiteLLM routing prefix | `openai/`, `anthropic/`, `ollama/`, `hosted_vllm/` |
See Quick Setup for how to configure providers and presets.
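To make the four concepts concrete, here is a toy sketch in plain Python — not AgentHeaven's internal code, and the tables are illustrative — of how a preset could resolve through a model alias and a provider's backend prefix into the final LiteLLM model string:

```python
# Toy illustration of preset -> provider -> alias -> LiteLLM model string.
# All values below are examples; real ones come from your configuration.

ALIASES = {
    "dsv3": "deepseek/deepseek-v3.2",
    "sonnet": "claude-sonnet-4-6",
}

PROVIDERS = {
    "anthropic": {"backend": "anthropic/"},  # LiteLLM routing prefix
    "ollama": {"backend": "ollama/"},
}

PRESETS = {
    "chat": {"provider": "anthropic", "model": "sonnet"},
    "local": {"provider": "ollama", "model": "llama3"},
}

def resolve(preset: str) -> str:
    """Return the LiteLLM model string for a logical preset name."""
    cfg = PRESETS[preset]
    model = ALIASES.get(cfg["model"], cfg["model"])  # expand alias if one exists
    backend = PROVIDERS[cfg["provider"]]["backend"]
    return backend + model

print(resolve("chat"))   # anthropic/claude-sonnet-4-6
print(resolve("local"))  # ollama/llama3
```

Repointing a preset at another provider only edits the `PRESETS` table; everything downstream of `resolve()` is unchanged, which is the property the real configuration system gives you.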

LLM Features

Chat & Streaming

oracle(), stream(), message formats, and per-call parameters.

Sessions

Multi-turn conversations, history, slash commands.

Embeddings

embed(), embedding providers, vector dimensions.

Tool Use

Function calling, tool schemas, exec_tool_calls.

Advanced

Image generation, fine-tuning, custom backends.

Further Exploration
