Version: 0.3 (latest)

cubepi.providers

AssistantMessage (class)

Content (attribute)

ImageContent (class)

Message (attribute)

MessageStream (class)

MessageStream(self)

Model (class)

ModelCost (class)

OnPayloadCallback (attribute)

Optional callback for inspecting or replacing provider payloads before they are sent. Return a dict to replace the payload, or None to keep it unchanged.
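The exact callback signature is not shown in this reference; a minimal sketch, assuming the callback receives the outgoing payload as a dict, might look like this (the `redact_system_prompt` name and the `"system"` key are illustrative, not part of the library):

```python
from typing import Any, Optional

# Hypothetical OnPayloadCallback: the argument shape is an assumption.
def redact_system_prompt(payload: dict[str, Any]) -> Optional[dict[str, Any]]:
    """Replace the payload when it contains a 'system' field, else keep it."""
    if "system" in payload:
        modified = dict(payload)          # copy rather than mutate in place
        modified["system"] = "[redacted]"
        return modified                   # a returned dict replaces the payload
    return None                           # None keeps the payload unchanged
```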

OnResponseCallback (attribute)

Optional callback invoked after an HTTP response is received.

Provider (class)

ProviderResponse (class)

ProviderResponse(self, status: int, headers: dict[str, str] = dict())

HTTP response metadata exposed to on_response callbacks.

StreamEvent (class)

StreamOptions (class)

Options bag for Provider.stream(), transparent to the agent loop.

TextContent (class)

ThinkingBudgets (class)

Token budgets for each thinking level.

ThinkingContent (class)

ThinkingLevel (attribute)

ToolCall (class)

ToolDefinition (class)

ToolResultMessage (class)

Usage (class)

UserMessage (class)

adjust_max_tokens_for_thinking (function)

adjust_max_tokens_for_thinking(base_max_tokens: int, model_max_tokens: int, reasoning_level: ThinkingLevel, custom_budgets: ThinkingBudgets | None = None) -> tuple[int, int]

Adjust max_tokens to reserve space for a thinking budget.

Given a base max_tokens (the desired output capacity), this increases it to accommodate the thinking budget while respecting the model's hard cap. If the model cap is too small to fit both, the thinking budget is reduced to leave at least min_output_tokens (1024) for output.

Returns

  • A (max_tokens, thinking_budget) tuple.

FauxProvider (class)

FauxProvider(self, *, tokens_per_second: float | None = None, token_size_min: int = 3, token_size_max: int = 5)

faux_assistant_message (function)

faux_assistant_message(content: str | FauxContentBlock | list[FauxContentBlock], *, stop_reason: str = 'stop', error_message: str | None = None) -> AssistantMessage

faux_text (function)

faux_text(text: str) -> TextContent

faux_thinking (function)

faux_thinking(thinking: str) -> ThinkingContent

faux_tool_call (function)

faux_tool_call(name: str, arguments: dict[str, Any], *, id: str | None = None) -> ToolCall

THINKING_LEVELS (attribute)

clamp_thinking_level (function)

clamp_thinking_level(model: Model, level: ThinkingLevel) -> ThinkingLevel

Clamp level to the nearest supported level for model.

If level is already supported, return it unchanged. Otherwise search upward first (higher intensity), then downward, through the ordered level list to find the closest available level.
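The up-then-down search can be sketched as below. The level ordering is an assumption standing in for THINKING_LEVELS, and the sketch takes a plain list of supported levels rather than a Model:

```python
# Sketch of the documented clamping rule; ASSUMED_LEVELS is a stand-in
# for THINKING_LEVELS, ordered from "off" to highest intensity.
ASSUMED_LEVELS = ["off", "low", "medium", "high", "xhigh"]

def clamp_thinking_level_sketch(supported: list[str], level: str) -> str:
    if level in supported:
        return level  # already supported: return unchanged
    idx = ASSUMED_LEVELS.index(level)
    # Search upward (higher intensity) first...
    for candidate in ASSUMED_LEVELS[idx + 1:]:
        if candidate in supported:
            return candidate
    # ...then downward toward "off".
    for candidate in reversed(ASSUMED_LEVELS[:idx]):
        if candidate in supported:
            return candidate
    return level  # hypothetical fallback; unreachable if "off" is supported
```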

get_supported_thinking_levels (function)

get_supported_thinking_levels(model: Model) -> list[ThinkingLevel]

Return the thinking levels supported by model.

  • Non-reasoning models only support ["off"].
  • For reasoning models, levels are filtered through the model's thinking_level_map. A level mapped to None is unsupported. "xhigh" is only included when it has an explicit (non-None) mapping. All other levels are included by default when the map omits them.
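The filtering rules above can be sketched as follows. `FakeModel` is a hypothetical stand-in carrying only the two attributes the rules mention (`reasoning` and `thinking_level_map`), and the level order is assumed:

```python
from dataclasses import dataclass, field

ASSUMED_LEVELS = ["off", "low", "medium", "high", "xhigh"]

@dataclass
class FakeModel:
    # Stand-in for Model with just the attributes the rules mention.
    reasoning: bool = False
    thinking_level_map: dict = field(default_factory=dict)

def get_supported_thinking_levels_sketch(model: FakeModel) -> list[str]:
    if not model.reasoning:
        return ["off"]  # non-reasoning models only support "off"
    supported = []
    for level in ASSUMED_LEVELS:
        if level in model.thinking_level_map:
            # An explicit None mapping marks the level unsupported.
            if model.thinking_level_map[level] is not None:
                supported.append(level)
        elif level != "xhigh":
            # Omitted levels are supported by default, except "xhigh",
            # which requires an explicit (non-None) mapping.
            supported.append(level)
    return supported
```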

models_are_equal (function)

models_are_equal(a: Model | None, b: Model | None) -> bool

Return True if a and b refer to the same model.

Comparison is by id and provider. Returns False when either argument is None.