cubepi.providers
AssistantMessage
class
Content
attribute
ImageContent
class
Message
attribute
MessageStream
class
MessageStream(self)
Model
class
ModelCost
class
OnPayloadCallback
attribute
Optional callback for inspecting/replacing provider payloads before sending. Return a dict to replace the payload, or None to keep unchanged.
OnResponseCallback
attribute
Optional callback invoked after an HTTP response is received.
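A rough sketch of how an OnPayloadCallback might be written and applied. The exact callback signature is an assumption; the only documented contract is "return a dict to replace the payload, or None to keep it unchanged", and redact_api_metadata / apply_payload_callback are hypothetical helper names used here for illustration.

```python
from typing import Any, Optional

def redact_api_metadata(payload: dict[str, Any]) -> Optional[dict[str, Any]]:
    """Hypothetical callback: strip a 'metadata' key before sending."""
    if "metadata" in payload:
        # Return a new dict to replace the outgoing payload.
        return {k: v for k, v in payload.items() if k != "metadata"}
    return None  # keep the payload unchanged

def apply_payload_callback(payload: dict[str, Any], callback) -> dict[str, Any]:
    # Mirrors the documented contract: None means "keep unchanged".
    replacement = callback(payload)
    return replacement if replacement is not None else payload

payload = {"model": "m1", "messages": [], "metadata": {"trace": 1}}
print(apply_payload_callback(payload, redact_api_metadata))
# {'model': 'm1', 'messages': []}
```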
Provider
class
ProviderResponse
class
ProviderResponse(self, status: int, headers: dict[str, str] = dict())
HTTP response metadata exposed to on_response callbacks.
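A minimal stand-in mirroring the documented ProviderResponse signature, with a hypothetical on_response callback inspecting a rate-limit header. Only the status/headers fields are documented; the header name and callback shape are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ProviderResponse:
    # Matches the documented constructor: status plus optional headers.
    status: int
    headers: dict[str, str] = field(default_factory=dict)

def describe_rate_limit(response: ProviderResponse) -> str:
    """Hypothetical on_response callback body: read a rate-limit header."""
    remaining = response.headers.get("x-ratelimit-remaining", "?")
    return f"status={response.status} remaining={remaining}"

print(describe_rate_limit(ProviderResponse(200, {"x-ratelimit-remaining": "42"})))
# status=200 remaining=42
```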
StreamEvent
class
StreamOptions
class
Options bag for Provider.stream(), transparent to the agent loop.
TextContent
class
ThinkingBudgets
class
Token budgets for each thinking level.
ThinkingContent
class
ThinkingLevel
attribute
ToolCall
class
ToolDefinition
class
ToolResultMessage
class
Usage
class
UserMessage
class
adjust_max_tokens_for_thinking
function
adjust_max_tokens_for_thinking(base_max_tokens: int, model_max_tokens: int, reasoning_level: ThinkingLevel, custom_budgets: ThinkingBudgets | None = None) -> tuple[int, int]
Adjust max_tokens to reserve space for a thinking budget.
Given a base max_tokens (the desired output capacity), increases it to
accommodate the thinking budget while respecting the model's hard cap.
If the model cap is too small to fit both, the thinking budget is reduced
to leave at least min_output_tokens (1024) for output.
Returns
- A (max_tokens, thinking_budget) tuple.
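The documented behavior can be sketched as follows. This is a simplified illustration, not the real implementation: the actual function derives the budget from a ThinkingLevel and optional ThinkingBudgets rather than taking a raw token count, and MIN_OUTPUT_TOKENS here just names the documented 1024-token floor.

```python
MIN_OUTPUT_TOKENS = 1024  # documented minimum left for output

def adjust_max_tokens_sketch(base_max_tokens: int,
                             model_max_tokens: int,
                             thinking_budget: int) -> tuple[int, int]:
    # Grow max_tokens so output capacity plus the thinking budget fit.
    total = base_max_tokens + thinking_budget
    if total <= model_max_tokens:
        return total, thinking_budget
    # Model cap too small for both: shrink the thinking budget so at
    # least MIN_OUTPUT_TOKENS remain for output.
    reduced = min(thinking_budget, max(0, model_max_tokens - MIN_OUTPUT_TOKENS))
    return model_max_tokens, reduced

print(adjust_max_tokens_sketch(4096, 32000, 8000))   # (12096, 8000)
print(adjust_max_tokens_sketch(4096, 8000, 16000))   # (8000, 6976)
```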
FauxProvider
class
FauxProvider(self, *, tokens_per_second: float | None = None, token_size_min: int = 3, token_size_max: int = 5)
faux_assistant_message
function
faux_assistant_message(content: str | FauxContentBlock | list[FauxContentBlock], *, stop_reason: str = 'stop', error_message: str | None = None) -> AssistantMessage
faux_text
function
faux_text(text: str) -> TextContent
faux_thinking
function
faux_thinking(thinking: str) -> ThinkingContent
faux_tool_call
function
faux_tool_call(name: str, arguments: dict[str, Any], *, id: str | None = None) -> ToolCall
THINKING_LEVELS
attribute
clamp_thinking_level
function
clamp_thinking_level(model: Model, level: ThinkingLevel) -> ThinkingLevel
Clamp level to the nearest supported level for model.
If level is already supported, return it unchanged. Otherwise search upward first (higher intensity), then downward, through the ordered level list to find the closest available level.
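The upward-then-downward search can be sketched like this. Note the simplification: the real function takes a Model and consults its supported levels, whereas this sketch takes the supported list directly, and the level names/order in THINKING_LEVELS are assumptions based on the levels mentioned elsewhere in this module.

```python
THINKING_LEVELS = ["off", "low", "medium", "high", "xhigh"]  # assumed order

def clamp_thinking_level_sketch(level: str, supported: list[str]) -> str:
    if level in supported:
        return level
    i = THINKING_LEVELS.index(level)
    # Search upward (higher intensity) first...
    for candidate in THINKING_LEVELS[i + 1:]:
        if candidate in supported:
            return candidate
    # ...then downward.
    for candidate in reversed(THINKING_LEVELS[:i]):
        if candidate in supported:
            return candidate
    return level  # nothing supported; return unchanged

print(clamp_thinking_level_sketch("medium", ["off", "high"]))  # high
print(clamp_thinking_level_sketch("xhigh", ["off", "low"]))    # low
```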
get_supported_thinking_levels
function
get_supported_thinking_levels(model: Model) -> list[ThinkingLevel]
Return the thinking levels supported by model.
- Non-reasoning models only support ["off"].
- For reasoning models, levels are filtered through the model's thinking_level_map. A level mapped to None is unsupported. "xhigh" is only included when it has an explicit (non-None) mapping. All other levels are included by default when the map omits them.
models_are_equal
function
models_are_equal(a: Model | None, b: Model | None) -> bool
Return True if a and b refer to the same model.
Comparison is by id and provider. Returns False when either
argument is None.