Documentation Index

Fetch the complete documentation index at: https://ahvn.top/llms.txt

Use this file to discover all available pages before exploring further.

Gateways

The default gateway is openai. It uses the OpenAI Python SDK against the provider’s OpenAI-compatible endpoint, so no Portkey, LiteLLM, or Bifrost process is required. Use Portkey when you want gateway-level routing, observability, caching, or policy controls:
export PORTKEY_API_KEY="..."
llm = hb.LLM(model="ds-flash", provider="deepseek", gateway="portkey")
Other gateways:
  • litellm: sends provider-prefixed model IDs such as deepseek/deepseek-v4-flash.
  • bifrost: sends provider-prefixed model IDs and uses BIFROST_BASE_URL, defaulting to http://localhost:8081/v1.
  • mock: stays offline and uses the built-in mock adapter.
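As a minimal sketch, any of these gateways is selected with the same gateway= argument shown for Portkey above, reusing the model and provider values from the earlier examples:
llm = hb.LLM(model="ds-flash", provider="deepseek", gateway="litellm")
# litellm sends a provider-prefixed model ID such as deepseek/deepseek-v4-flash

offline = hb.LLM(model="ds-flash", provider="deepseek", gateway="mock")
# mock stays offline and answers through the built-in mock adapter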
Temporary upstream limitations are raised explicitly: embeddings with gateway="portkey" and provider="openrouter" are blocked until Portkey Gateway support lands, and Bifrost image generation is blocked while upstream image support is unresolved. If a non-default gateway cannot be imported in the active environment, hb.LLM falls back to the openai gateway.

Client reuse

Every resolved LLMSpec can produce deterministic hash keys:
llm = hb.LLM(model="ds-flash", provider="deepseek")

spec_key = llm.spec.hash_key()
client_key = llm.spec.client_key()
  • hash_key() includes the resolved model, provider, gateway mode, request defaults, and materialized runtime values.
  • client_key() includes only the gateway client construction fields: gateway, API key, base URL, headers, timeout, and retries.
The OpenAI-compatible adapters for openai, portkey, and bifrost keep an in-memory cache keyed by client_key(), so duplicated LLM instances and repeated calls reuse the same SDK client. LiteLLM does not create an SDK client object in this layer, but its module import and runtime key path follow the same resolution model.
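A minimal sketch of what these semantics imply, assuming two instances constructed with identical settings:
a = hb.LLM(model="ds-flash", provider="deepseek")
b = hb.LLM(model="ds-flash", provider="deepseek")

# Identical construction fields yield identical keys, so both instances
# resolve to the same cached SDK client in the OpenAI-compatible adapters.
assert a.spec.client_key() == b.spec.client_key()
assert a.spec.hash_key() == b.spec.hash_key()
Changing any client construction field, such as the base URL or timeout, produces a different client_key() and therefore a separate cached client.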

Image generation

Use imagen for image generation responses:
image = hb.LLM(preset="imagen").imagen("A clean product render of a white ceramic mug")
The built-in image models are gpt-5-image-mini and gpt-5.4-image-2:
image = hb.LLM(model="gpt-image-2").imagen("A small product icon for HB")
Image responses are normalized into LLMImage objects when possible, while raw provider payloads remain available through include="raw". URL-backed LLMImage values fetch lazily only when converted to bytes, base64, a data URL, or saved. The fetch timeout is configured by heavenbase.llm.image_url_timeout. imagen accepts the same images= input formats as chat and stream for reference images:
reference = hb.LLMImage.from_any("./style-reference.png")
image = hb.LLM(preset="imagen").imagen("Apply this style to an HB mark", images=reference)

Custom OpenAI-compatible providers

Use the custom preset for a provider that speaks the OpenAI API but is not in the bundled model catalog:
llm = hb.LLM(
    preset="custom",
    base_url="http://localhost:9999/v1",
    model="third-party-model",
    api_key="optional-key",
)
The custom provider requires a runtime base_url and a concrete model.
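A hedged usage sketch once the custom endpoint resolves; a chat() method that accepts a bare prompt string is an assumption based on the chat and stream entry points referenced earlier:
reply = llm.chat("Reply with one word to confirm connectivity")  # assumption: chat() takes a prompt string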