Same LLM class, different method — embed() instead of oracle().
The LLM class provides an embed() method that returns a vector embedding for a given text, using the embedder preset (or any other configured preset with embedding capability).
1. Basic Usage
```python
from ahvn.utils.llm import LLM

llm = LLM(preset="embedder")
vector = llm.embed("AgentHeaven is a framework for building agentic applications.")
print(len(vector))  # e.g., 768 for embeddinggemma
print(vector[:5])   # e.g., [0.0234, -0.0451, ...]
```
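Embedding vectors are typically compared with cosine similarity. A minimal sketch in plain Python, assuming embed() returns a flat list of floats as above (the short vectors here are hypothetical stand-ins for real embedding output):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical short vectors standing in for llm.embed(...) output:
v1 = [0.1, 0.3, -0.2]
v2 = [0.1, 0.3, -0.2]
v3 = [-0.3, 0.1, 0.2]

print(cosine_similarity(v1, v2))  # identical vectors -> 1.0
print(cosine_similarity(v1, v3))  # dissimilar vectors -> lower score
```

Scores range from -1.0 to 1.0; higher means more semantically similar.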
2. CLI
```shell
ahvn embed "Some text to embed"
```
3. Provider Configuration
The embedder preset controls which model is used. By default it runs EmbeddingGemma (a lightweight 300M-parameter Google model) on Ollama locally.
To switch providers:
```shell
# OpenAI
ahvn cfg set llm.presets.embedder.provider openai
ahvn cfg set llm.presets.embedder.model text-embedding-3-small  # 1536 dims

# Voyage
ahvn cfg set llm.providers.voyage.backend voyage
ahvn cfg set llm.providers.voyage.api_key "pa-..."
ahvn cfg set llm.presets.embedder.provider voyage
ahvn cfg set llm.presets.embedder.model voyage-4  # 1024 dims

# Google
ahvn cfg set llm.presets.embedder.provider google
ahvn cfg set llm.presets.embedder.model gemini-embedding-001  # 3072 dims

# Ollama (local, default)
ahvn cfg set llm.presets.embedder.provider ollama
ahvn cfg set llm.presets.embedder.model embeddinggemma  # 768 dims
```
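Note the differing output dimensions in the comments above (768 vs. 1024 vs. 1536 vs. 3072): vectors produced by different models are not comparable, so switching the embedder preset generally means re-embedding any previously stored vectors. A hypothetical guard sketch (the check_dims helper is illustrative, not part of ahvn; its table simply mirrors the dimensions listed above):

```python
# Expected output dimension per model, taken from the comments above.
EXPECTED_DIMS = {
    "embeddinggemma": 768,
    "voyage-4": 1024,
    "text-embedding-3-small": 1536,
    "gemini-embedding-001": 3072,
}

def check_dims(model: str, stored_dim: int) -> bool:
    """Return True if vectors stored with `stored_dim` match `model`'s output size."""
    return EXPECTED_DIMS.get(model) == stored_dim

# Vectors were stored with the default local model (768 dims)...
stored_dim = 768
print(check_dims("embeddinggemma", stored_dim))          # still compatible
print(check_dims("text-embedding-3-small", stored_dim))  # mismatch: re-embed first
```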
4. Integration with Knowledge System
Embeddings are primarily used by VectorKLStore and VectorKLEngine for semantic search and retrieval:
```python
from ahvn.utils.llm import LLM
from ahvn.klstore import VectorKLStore

embedder = LLM(preset="embedder")
vecstore = VectorKLStore(
    collection="my_knowledge",
    provider="lancedb",
    uri="./data/lancedb",
    embedder=embedder,
)
```
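To illustrate the pattern a vector store builds on, here is a deliberately simplified in-memory sketch. Everything in it is hypothetical: MiniVectorStore is not ahvn's VectorKLStore, and toy_embed is a toy 3-dimensional stand-in for a real embedder like llm.embed.

```python
import math

def toy_embed(text: str) -> list[float]:
    """Hypothetical stand-in for llm.embed(); real embedders return learned vectors."""
    counts = [text.count("a"), text.count("e"), len(text)]
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]  # unit-normalized

class MiniVectorStore:
    """Toy in-memory store: embeds text on insert, ranks by similarity on search."""

    def __init__(self, embedder):
        self.embedder = embedder
        self.rows: list[tuple[str, list[float]]] = []

    def add(self, text: str) -> None:
        self.rows.append((text, self.embedder(text)))

    def search(self, query: str, k: int = 1) -> list[str]:
        q = self.embedder(query)
        # Vectors are unit-normalized, so the dot product equals cosine similarity.
        scored = sorted(
            self.rows,
            key=lambda row: -sum(x * y for x, y in zip(q, row[1])),
        )
        return [text for text, _ in scored[:k]]

store = MiniVectorStore(embedder=toy_embed)
store.add("agentheaven framework")
store.add("zzzz")
print(store.search("agent framework"))  # most similar stored text first
```

The real store follows the same shape at a much larger scale: the embedder runs once per stored document and once per query, and retrieval is a nearest-neighbor lookup in vector space.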
Further Exploration