The Agentic Types module defines foundational data structures for agentic systems.
It provides several important type definitions, such as ToolSpec and ChatMessage, which are designed to be as "model-neutral" as possible, allowing developers to build agentic systems that work with different models.
Function
Bases: TypedDict
The function that the model called.
Source code in bridgic/core/agentic/types/_chat_message.py
class Function(TypedDict, total=True):
    """The function that the model called."""

    arguments: Required[str]
    """
    The arguments to call the function with, as generated by the model in JSON
    format. Note that the model does not always generate valid JSON, and may
    hallucinate parameters not defined by your function schema. Validate the
    arguments in your code before calling your function.
    """

    name: Required[str]
    """The name of the function to call."""
arguments instance-attribute
The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
name instance-attribute
The name of the function to call.
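The docstring above warns that `arguments` is a JSON string the model may get wrong. A minimal sketch of the recommended validation step, using only the standard library (the tool name, parameter names, and payload below are hypothetical):

```python
import json

# Hypothetical Function payload as a model might produce it; note that
# "arguments" is a JSON-encoded string, not a parsed object.
fn = {
    "name": "get_weather",
    "arguments": '{"city": "Paris", "unit": "celsius"}',
}

# Parameters your (hypothetical) function schema actually defines.
ALLOWED_PARAMS = {"city", "unit"}

# Models do not always emit valid JSON, so parse defensively.
try:
    args = json.loads(fn["arguments"])
except json.JSONDecodeError:
    args = None  # reject the call instead of crashing

if args is not None:
    # Models may also hallucinate parameters; reject any unknown keys.
    unexpected = set(args) - ALLOWED_PARAMS
    if unexpected:
        raise ValueError(f"unexpected parameters: {unexpected}")
```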
FunctionToolCall
Bases: TypedDict
A call to a function tool created by the model.
Source code in bridgic/core/agentic/types/_chat_message.py
class FunctionToolCall(TypedDict, total=True):
    """A call to a function tool created by the model."""

    id: Required[str]
    """The ID of the tool call."""

    function: Required[Function]
    """The function that the model called."""

    type: Required[Literal["function"]]
    """The type of the tool. Currently, only `function` is supported."""
id instance-attribute
The ID of the tool call.
function instance-attribute
The function that the model called.
type instance-attribute
type: Required[Literal['function']]
The type of the tool. Currently, only function is supported.
SystemMessage
Bases: TypedDict
Developer-provided instructions that the model should follow, regardless of messages sent by the user.
Source code in bridgic/core/agentic/types/_chat_message.py
class SystemMessage(TypedDict, total=False):
    """Developer-provided instructions that the model should follow, regardless of messages sent by the user."""

    role: Required[Literal["system"]]
    """The role of the messages author, in this case `system`."""

    content: Required[str]
    """The contents of the system message, which is a text."""
role instance-attribute
role: Required[Literal['system']]
The role of the messages author, in this case system.
content instance-attribute
The contents of the system message, which is a text.
UserTextMessage
Bases: TypedDict
Messages sent by an end user, containing prompts.
Source code in bridgic/core/agentic/types/_chat_message.py
class UserTextMessage(TypedDict, total=False):
    """Messages sent by an end user, containing prompts."""

    role: Required[Literal["user"]]
    """The role of the messages author, in this case `user`."""

    content: Required[str]
    """The content of the user message, which is a text."""
role instance-attribute
role: Required[Literal['user']]
The role of the messages author, in this case user.
content instance-attribute
The content of the user message, which is a text.
AssistantTextMessage
Bases: TypedDict
Messages sent by the model in response to user messages.
Source code in bridgic/core/agentic/types/_chat_message.py
class AssistantTextMessage(TypedDict, total=False):
    """Messages sent by the model in response to user messages."""

    role: Required[Literal["assistant"]]
    """The role of the messages author, in this case `assistant`."""

    content: Optional[str]
    """The content of the assistant message, which is a text. Required unless `tool_calls` is specified."""

    tool_calls: Optional[Iterable[FunctionToolCall]]
    """The tool calls generated by the model, such as function calls."""
role instance-attribute
role: Required[Literal['assistant']]
The role of the messages author, in this case assistant.
content instance-attribute
The content of the assistant message, which is a text. Required unless tool_calls is specified.
tool_calls instance-attribute
The tool calls generated by the model, such as function calls.
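An assistant message thus comes in two shapes: a text reply, or a tool-calling reply whose `content` may be `None`. A sketch of both, built as plain dicts (TypedDicts are ordinary dicts at runtime; the tool call values are hypothetical):

```python
# 1) A plain text reply: content is set, tool_calls is absent.
text_reply = {
    "role": "assistant",
    "content": "The capital of France is Paris.",
}

# 2) A tool-calling reply: content may be None when tool_calls is given.
tool_reply = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_001",  # hypothetical tool-call ID
            "type": "function",
            "function": {
                "name": "get_weather",
                "arguments": '{"city": "Paris"}',
            },
        }
    ],
}
```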
ToolMessage
Bases: TypedDict
Messages generated by tools.
Source code in bridgic/core/agentic/types/_chat_message.py
class ToolMessage(TypedDict, total=False):
    """Messages generated by tools."""

    role: Required[Literal["tool"]]
    """The role of the messages author, in this case `tool`."""

    content: Required[str]
    """The contents of the tool message."""

    tool_call_id: Required[str]
    """Tool call that this message is responding to."""
role instance-attribute
role: Required[Literal['tool']]
The role of the messages author, in this case tool.
content instance-attribute
The contents of the tool message.
tool_call_id instance-attribute
tool_call_id: Required[str]
Tool call that this message is responding to.
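Together, these message types form a conversation in which a ToolMessage answers a FunctionToolCall through its `tool_call_id`. A minimal sketch of that round trip, with hypothetical IDs and values:

```python
# A minimal conversation: system instructions, a user prompt, an assistant
# tool call, and the tool's reply linked back via tool_call_id.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather in Paris?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_abc123",  # hypothetical ID
                "type": "function",
                "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
            }
        ],
    },
    {"role": "tool", "tool_call_id": "call_abc123", "content": "18°C, clear"},
]

# Pair each tool result with the call it responds to.
calls = {
    tc["id"]: tc
    for m in conversation
    if m.get("tool_calls")
    for tc in m["tool_calls"]
}
results = {m["tool_call_id"]: m["content"] for m in conversation if m["role"] == "tool"}
```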
LlmTaskConfig
Bases: Serializable
Configuration for a single LLM task in an agentic system.
This class provides a generic abstraction for configuring LLM tasks with:
- A dedicated LLM instance for the task
- Optional system prompt template
- Optional instruction prompt template
This class serves as a configuration holder; the actual behavior of the system is determined by the concrete implementations that use this configuration.
Attributes:
llm (BaseLlm): The LLM instance to use for this task.
system_template (Optional[EjinjaPromptTemplate]): Optional system prompt template. If None, no system message will be added.
instruction_template (Optional[EjinjaPromptTemplate]): Optional instruction prompt template. If None, no instruction message will be added.
Source code in bridgic/core/agentic/types/_llm_task_config.py
class LlmTaskConfig(Serializable):
    """
    Configuration for a single LLM task in an agentic system.

    This class provides a generic abstraction for configuring LLM tasks with:

    - A dedicated LLM instance for the task
    - Optional system prompt template
    - Optional instruction prompt template

    This class serves as a configuration holder and the actual behavior of the
    system is determined by the concrete implementations utilizing this configuration.

    Attributes
    ----------
    llm : BaseLlm
        The LLM instance to use for this task.
    system_template : Optional[EjinjaPromptTemplate]
        Optional system prompt template. If None, no system message will be added.
    instruction_template : Optional[EjinjaPromptTemplate]
        Optional instruction prompt template. If None, no instruction message will be added.
    """

    llm: BaseLlm
    """The LLM instance to use for this task."""

    system_template: Optional[EjinjaPromptTemplate]
    """Optional system prompt template for this task."""

    instruction_template: Optional[EjinjaPromptTemplate]
    """Optional instruction prompt template for this task."""

    def __init__(
        self,
        llm: BaseLlm,
        system_template: Optional[Union[str, EjinjaPromptTemplate]] = None,
        instruction_template: Optional[Union[str, EjinjaPromptTemplate]] = None,
    ):
        """
        Initialize LLM task configuration.

        Parameters
        ----------
        llm : BaseLlm
            The LLM instance to use for this task.
        system_template : Optional[Union[str, EjinjaPromptTemplate]]
            System prompt template. Can be a string (will be converted to EjinjaPromptTemplate)
            or an EjinjaPromptTemplate instance. If None, no system message will be added.
        instruction_template : Optional[Union[str, EjinjaPromptTemplate]]
            Instruction prompt template. Can be a string (will be converted to EjinjaPromptTemplate)
            or an EjinjaPromptTemplate instance. If None, no instruction message will be added.
        """
        self.llm = llm
        if system_template is None:
            self.system_template = None
        elif isinstance(system_template, str):
            self.system_template = EjinjaPromptTemplate(system_template)
        elif isinstance(system_template, EjinjaPromptTemplate):
            self.system_template = system_template
        else:
            raise TypeError(
                f"system_template must be str, EjinjaPromptTemplate, or None, "
                f"got {type(system_template)}"
            )
        if instruction_template is None:
            self.instruction_template = None
        elif isinstance(instruction_template, str):
            self.instruction_template = EjinjaPromptTemplate(instruction_template)
        elif isinstance(instruction_template, EjinjaPromptTemplate):
            self.instruction_template = instruction_template
        else:
            raise TypeError(
                f"instruction_template must be str, EjinjaPromptTemplate, or None, "
                f"got {type(instruction_template)}"
            )

    @override
    def dump_to_dict(self) -> Dict[str, Any]:
        state_dict = {}
        state_dict["llm"] = self.llm
        state_dict["system_template"] = self.system_template
        state_dict["instruction_template"] = self.instruction_template
        return state_dict

    @override
    def load_from_dict(self, state_dict: Dict[str, Any]) -> None:
        self.llm = state_dict["llm"]
        self.system_template = state_dict["system_template"]
        self.instruction_template = state_dict["instruction_template"]
llm instance-attribute
The LLM instance to use for this task.
system_template instance-attribute
Optional system prompt template for this task.
instruction_template instance-attribute
Optional instruction prompt template for this task.
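The constructor accepts either plain strings or EjinjaPromptTemplate instances and coerces strings to templates. That coercion pattern can be sketched in isolation; `PromptTemplate` below is a stand-in stub, not the real EjinjaPromptTemplate, and the template text is hypothetical:

```python
from typing import Optional, Union


class PromptTemplate:
    """Stand-in for EjinjaPromptTemplate; only stores the template text."""

    def __init__(self, text: str):
        self.text = text


def coerce_template(
    value: Optional[Union[str, PromptTemplate]]
) -> Optional[PromptTemplate]:
    # Mirrors the branching in LlmTaskConfig.__init__: None passes through,
    # strings are wrapped, templates are kept, anything else is rejected.
    if value is None:
        return None
    if isinstance(value, str):
        return PromptTemplate(value)
    if isinstance(value, PromptTemplate):
        return value
    raise TypeError(f"expected str, PromptTemplate, or None, got {type(value)}")


sys_tpl = coerce_template("You are a {{ role }}.")
```

This keeps the public API forgiving (callers pass bare strings) while the stored attribute is always a template instance or None, so downstream code never has to re-check the type.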