Tool calling
Pass OpenAI-compatible tools directly to chat or stream.
tools = [
    {
        "type": "function",
        "function": {
            "name": "lookup_user",
            "description": "Look up a user by id",
            "parameters": {
                "type": "object",
                "properties": {"user_id": {"type": "string"}},
                "required": ["user_id"],
            },
        },
    }
]
calls = llm.chat("Find user 42", tools=tools, include="tool_calls")
Tool calls are collected from streaming deltas and returned in the standard OpenAI tool_calls shape.
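The accumulation described above can be sketched as follows. This is a hypothetical illustration, not HeavenBase's actual internals: it folds OpenAI-style streaming tool-call deltas (each carrying an index, an optional id/name on the first delta, and argument fragments afterwards) back into the standard tool_calls shape.

```python
def merge_tool_call_deltas(deltas):
    """Fold streaming tool-call deltas into the OpenAI tool_calls shape.

    Illustrative sketch only; the delta dicts here mirror the fields
    OpenAI-compatible providers emit in chunk.choices[].delta.tool_calls.
    """
    calls = {}
    for delta in deltas:
        idx = delta["index"]
        call = calls.setdefault(idx, {
            "id": None,
            "type": "function",
            "function": {"name": "", "arguments": ""},
        })
        if delta.get("id"):
            call["id"] = delta["id"]
        fn = delta.get("function", {})
        if fn.get("name"):
            call["function"]["name"] += fn["name"]
        if fn.get("arguments"):
            # argument JSON arrives as string fragments; concatenate in order
            call["function"]["arguments"] += fn["arguments"]
    return [calls[i] for i in sorted(calls)]

# Example: three deltas for one call, arguments split across chunks.
deltas = [
    {"index": 0, "id": "call_1", "function": {"name": "lookup_user"}},
    {"index": 0, "function": {"arguments": '{"user_id"'}},
    {"index": 0, "function": {"arguments": ': "42"}'}},
]
merged = merge_tool_call_deltas(deltas)
```

The key detail is that fragments are keyed by index, not id, since only the first delta for a call carries the id.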
Structured output
Pass OpenAI-compatible response_format arguments directly:
data = llm.chat(
    "Return JSON with keys name and score.",
    response_format={"type": "json_object"},
    include="structured",
)
Most providers can stream structured output through the normal path. When a model is known to be unreliable for streaming structured JSON, its model defaults can set structured_stream: false.
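The decision described above can be sketched as a small predicate. This is a minimal illustration under assumed names; the model_defaults dict shape and the helper itself are hypothetical, not HeavenBase's real internals.

```python
def use_streaming_for_structured(model_defaults, force_non_stream=False):
    """Stream structured output unless the model's defaults opt out
    (structured_stream: false) or the caller forces non-streaming."""
    if force_non_stream:
        return False
    # Streaming is the normal path; a model must explicitly opt out.
    return model_defaults.get("structured_stream", True)

stream_default = use_streaming_for_structured({})
stream_opted_out = use_streaming_for_structured({"structured_stream": False})
stream_forced_off = use_streaming_for_structured({}, force_non_stream=True)
```

The point of the default-True lookup is that only models known to be unreliable need an entry; everything else streams.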
You can also force non-streaming structured calls from user code:
data = llm.chat(
    "Return JSON with key ok=true.",
    response_format={"type": "json_object"},
    include="structured",
    enforce_non_stream_structured=True,
)
For stream(..., enforce_non_stream_structured=True), HeavenBase performs one non-streaming request and yields a single projected chunk.
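The "one request, one projected chunk" behavior can be sketched as a generator that wraps a blocking call. The fake_non_streaming_call stand-in and the chunk dict layout are illustrative assumptions, not the library's actual types.

```python
def one_chunk_stream(non_streaming_call):
    """Perform a single non-streaming request, then yield exactly one
    chunk projected into a streaming-style shape."""
    result = non_streaming_call()
    yield {"choices": [{"delta": {"content": result}, "finish_reason": "stop"}]}

def fake_non_streaming_call():
    # Stand-in for the real provider request.
    return '{"ok": true}'

chunks = list(one_chunk_stream(fake_non_streaming_call))
```

Callers can thus keep a uniform for-chunk-in-stream loop even when structured output forces a non-streaming request underneath.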