API for interfacing with cloud-hosted models
The BlackBoxModel class provides a standardized interface for interacting with models hosted via APIs such as OpenAI, Anthropic, and Together.ai. It handles authentication, retry logic, and batch processing for efficient model querying.
Parameter | Type | Default | Description |
---|---|---|---|
model_name | string | (Required) | Name of the model (e.g., "gpt-4o", "claude-3-7-sonnet-20250219") |
system_prompt | string | None | Default system prompt to use for all queries |
max_retries | int | 5 | Maximum number of retry attempts for failed queries |
retry_delay | float | 10.0 | Delay in seconds between retry attempts |
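The retry behavior implied by max_retries and retry_delay can be sketched as a plain retry loop. This is an illustrative reconstruction, not the class's actual implementation; the helper name query_with_retries is hypothetical.

```python
import time

def query_with_retries(send, max_retries=5, retry_delay=10.0):
    """Retry a callable, mirroring the documented semantics: up to
    max_retries attempts, sleeping retry_delay seconds between failed
    attempts, and re-raising the last error if every attempt fails."""
    last_error = None
    for attempt in range(max_retries):
        try:
            return send()
        except Exception as err:
            last_error = err
            if attempt < max_retries - 1:
                time.sleep(retry_delay)
    raise last_error
```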
Sends a prompt to the model and retrieves the generated response.
Parameter | Type | Default | Description |
---|---|---|---|
prompt | string | (Required) | The text prompt to send to the model |
system_prompt | string | None | System prompt for this specific query |
temperature | float | 0 | Controls randomness (0.0 = deterministic, 1.0 = creative) |
max_tokens | int | 2048 | Maximum number of tokens to generate |
message_history | List[Dict] | [] | Message history for chat context |
A string containing the model’s response.
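The message_history parameter takes a list of dictionaries in the usual chat format. A minimal sketch of that structure (the exact keys are an assumption based on the common role/content convention used by OpenAI- and Anthropic-style APIs):

```python
# Hypothetical chat-format history: alternating user/assistant turns,
# each a dict with "role" and "content" keys.
message_history = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris."},
]
# The next prompt would then be sent with this history for context, e.g.:
# model.query("And of Germany?", message_history=message_history)
```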
Sends multiple prompts to the model in parallel for efficiency.
Parameter | Type | Default | Description |
---|---|---|---|
prompts | List[string] | (Required) | List of text prompts to send to the model |
max_threads | int | 50 | Maximum number of parallel threads to use |
show_progress | bool | True | Whether to display a progress bar |
system_prompt | string | None | System prompt for all queries |
temperature | float | 0 | Controls randomness |
max_tokens | int | 2048 | Maximum number of tokens to generate |
message_history | List[Dict] | [] | Message history for chat context |
A list of strings containing the model’s responses in the same order as the input prompts.
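The documented ordering guarantee (responses in the same order as the input prompts) is what a thread-pool fan-out naturally provides. A minimal sketch, assuming a stand-in query function; query_parallel is a hypothetical name, not the class's method:

```python
from concurrent.futures import ThreadPoolExecutor

def query_parallel(query_fn, prompts, max_threads=50):
    """Fan prompts out across a thread pool. ThreadPoolExecutor.map
    yields results in input order, so responses line up with prompts
    even when individual queries finish out of order."""
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        return list(pool.map(query_fn, prompts))
```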
Generates embeddings (vector representations) for text.
Parameter | Type | Default | Description |
---|---|---|---|
text | string or List[string] | (Required) | Text to generate embeddings for |
batch_size | int | 100 | Maximum batch size for processing |
A list of float values (embedding vector) if a single text is provided, or a list of embedding vectors (list of lists) if multiple texts are provided.
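The single-string-versus-list return convention can be sketched as a thin shape-normalizing wrapper around a batch embedding call. This is illustrative only; embed_batch stands in for whatever API call produces vectors:

```python
def embed(text, embed_batch):
    """Mirror the documented return shape: a single string yields one
    embedding vector (a list of floats); a list of strings yields a
    list of vectors (a list of lists of floats)."""
    single = isinstance(text, str)
    batch = [text] if single else list(text)
    vectors = embed_batch(batch)
    return vectors[0] if single else vectors
```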
Generates embeddings for multiple texts in parallel.
Parameter | Type | Default | Description |
---|---|---|---|
texts | List[string] | (Required) | List of texts to generate embeddings for |
batch_size | int | 100 | Maximum batch size for processing |
max_threads | int | 10 | Maximum number of parallel threads to use |
show_progress | bool | True | Whether to display a progress bar |
A list of embedding vectors (list of lists of float values), one for each input text.
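The batch_size parameter implies the input texts are split into consecutive chunks before being dispatched to worker threads. A minimal sketch of that chunking step (make_batches is a hypothetical helper name):

```python
def make_batches(texts, batch_size=100):
    """Split texts into consecutive batches of at most batch_size items,
    preserving input order so results can be reassembled in sequence."""
    return [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]
```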
The BlackBoxModel class automatically determines the appropriate API to use based on the model name. It also implements robust error handling: failed queries are retried up to max_retries times, with retry_delay seconds between attempts.