Documentation Index
Fetch the complete documentation index at: https://ahvn.top/llms.txt
Use this file to discover all available pages before exploring further.
Gateways
The default gateway is openai. It uses the OpenAI Python SDK against the provider's OpenAI-compatible endpoint, so no Portkey, LiteLLM, or Bifrost process is required.
Use Portkey when you want gateway-level routing, observability, caching, or policy controls:
litellm: sends provider-prefixed model IDs such as deepseek/deepseek-v4-flash.
bifrost: sends provider-prefixed model IDs and uses BIFROST_BASE_URL, defaulting to http://localhost:8081/v1.
mock: stays offline and uses the built-in mock adapter.
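The provider-prefixed model ID convention used by the litellm and bifrost gateways can be sketched with a small helper (hypothetical, not part of the library):

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a provider-prefixed model ID into (provider, model).

    IDs without a provider prefix are returned with an empty provider.
    """
    provider, sep, model = model_id.partition("/")
    if not sep:
        return "", model_id
    return provider, model

# Example from the docs above:
print(split_model_id("deepseek/deepseek-v4-flash"))  # ('deepseek', 'deepseek-v4-flash')
```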
gateway="portkey" with provider="openrouter" is blocked for embeddings until Portkey Gateway support lands, and image generation through Bifrost is blocked while upstream image support is unresolved.
If a non-default gateway cannot be imported by the active environment, hb.LLM falls back to the openai gateway.
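This fallback can be sketched as an import check (a minimal illustration assuming each gateway maps to an adapter module; the actual resolution logic inside hb.LLM may differ):

```python
import importlib


def resolve_gateway(name, adapter_module, default="openai"):
    """Return `name` if its adapter module imports cleanly, else fall back to the default gateway."""
    try:
        importlib.import_module(adapter_module)
        return name
    except ImportError:
        return default


# A gateway whose adapter module is missing falls back to "openai".
print(resolve_gateway("bifrost", "definitely_not_installed_xyz_123"))  # openai
```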
Client reuse
Every resolved LLMSpec can produce deterministic hash keys:
hash_key() includes the resolved model, provider, gateway mode, request defaults, and materialized runtime values.
client_key() includes only gateway client construction fields: gateway, API key, base URL, headers, timeout, and retries.
The OpenAI-compatible adapters for openai, portkey, and bifrost keep an in-memory cache keyed by client_key(), so duplicated LLM instances and repeated calls reuse the SDK client. LiteLLM does not create an SDK client object in this layer, but the module import and runtime key path are compatible with the same resolution model.
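A sketch of how such a deterministic key and cache might work (illustrative only; the field names follow the list above, but the real LLMSpec and adapter internals may differ):

```python
import hashlib
import json

def client_key(spec):
    """Deterministic key over gateway client construction fields only."""
    fields = ("gateway", "api_key", "base_url", "headers", "timeout", "retries")
    payload = json.dumps({k: spec.get(k) for k in fields}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

_clients = {}

def get_client(spec):
    """Reuse one SDK client per unique client_key()."""
    key = client_key(spec)
    if key not in _clients:
        _clients[key] = object()  # stand-in for the real SDK client constructor
    return _clients[key]

a = get_client({"gateway": "openai", "base_url": "https://api.example/v1", "timeout": 30})
b = get_client({"gateway": "openai", "base_url": "https://api.example/v1", "timeout": 30,
                "model": "another-model"})  # model is not a client construction field
print(a is b)  # True: same client_key, so the cached client is reused
```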
Image generation
Use imagen for image generation responses:
For models such as gpt-5-image-mini and gpt-5.4-image-2, results are normalized into LLMImage objects when possible, while raw provider payloads remain available through include="raw".
URL-backed LLMImage values fetch lazily only when converted to bytes, base64, a data URL, or saved. The fetch timeout is configured by heavenbase.llm.image_url_timeout.
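Lazy fetching can be sketched like this (a simplified illustration; the class and attribute names are hypothetical stand-ins for LLMImage internals, and the timeout parameter mirrors heavenbase.llm.image_url_timeout):

```python
import base64
from urllib.request import urlopen

class LazyImage:
    """Holds a URL and fetches the bytes only on first use."""

    def __init__(self, url, timeout=10.0):
        self.url = url
        self.timeout = timeout  # analogous to heavenbase.llm.image_url_timeout
        self._bytes = None

    def to_bytes(self):
        if self._bytes is None:  # no network traffic happens before this point
            with urlopen(self.url, timeout=self.timeout) as resp:
                self._bytes = resp.read()
        return self._bytes

    def to_base64(self):
        return base64.b64encode(self.to_bytes()).decode("ascii")

    def to_data_url(self, mime="image/png"):
        return f"data:{mime};base64,{self.to_base64()}"
```

Constructing a LazyImage is free; the HTTP request fires only when a byte-producing conversion is called, which matches the lazy behavior described above.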
imagen accepts the same images= input formats as chat and stream for reference images.
Custom OpenAI-compatible providers
Use the custom preset for a provider that speaks the OpenAI API but is not in the bundled model catalog. Supply an explicit base_url and a concrete model.
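Because the custom preset bypasses the bundled catalog, nothing can be inferred for it; the requirement can be sketched as a simple validation step (illustrative only, not the library's actual code):

```python
def resolve_custom(base_url, model):
    """The custom preset has no catalog entry to fall back on, so both fields are required."""
    if not base_url:
        raise ValueError("custom preset requires an explicit base_url")
    if not model:
        raise ValueError("custom preset requires a concrete model")
    return {"provider": "custom", "base_url": base_url, "model": model}

# Hypothetical endpoint and model name, for illustration:
spec = resolve_custom("https://llm.internal.example/v1", "my-finetuned-model")
```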

