# LLM Integration

## Installation
Bridgic uses a modular installation strategy: install only the components you require.
Each model integration is available as a separate package, so you can minimize dependencies and keep your environment streamlined.
```bash
# For general OpenAI-compatible APIs that support only the basic chat
# interface (Groq, Together AI, etc.)
pip install bridgic-llms-openai-like

# For OpenAI models (GPT-4, GPT-3.5, etc.)
pip install bridgic-llms-openai

# For vLLM server deployments
pip install bridgic-llms-vllm
```
| Package | BaseLlm | StructuredOutput | ToolSelection |
|---|---|---|---|
| bridgic-llms-openai-like | ✅ | ❌ | ❌ |
| bridgic-llms-openai | ✅ | ✅ | ✅ |
| bridgic-llms-vllm | ✅ | ✅ | ✅ |
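The openai-like package targets any endpoint that speaks the OpenAI chat API. The sketch below is hypothetical: the import path, class name, `base_url` keyword, and environment variable are assumptions, since this page only documents the OpenAI integration in detail.

```python
import os

# Hypothetical sketch: module path, class name, and `base_url` keyword are
# assumptions and may differ in the actual package.
from bridgic.llms.openai_like import OpenAILikeLlm

llm = OpenAILikeLlm(
    api_key=os.environ.get("GROQ_API_KEY"),     # assumed variable name
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible endpoint
)
```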
```python
import os

from dotenv import load_dotenv

from bridgic.llms.openai import OpenAILlm, OpenAIConfiguration

# Load OPENAI_API_KEY from a local .env file
load_dotenv()
_api_key = os.environ.get("OPENAI_API_KEY")

# Minimal client with default settings
llm = OpenAILlm(
    api_key=_api_key,
)

# Explicit configuration for model, sampling, and output length
config = OpenAIConfiguration(
    model="gpt-4o",
    temperature=0.7,
    max_tokens=2000,
)

llm = OpenAILlm(
    api_key=_api_key,
    configuration=config,
    timeout=30.0,
)
```
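The `load_dotenv()` call reads variables from a local `.env` file, so the key never has to be hard-coded. For the example above, the file would contain a single line of the form `OPENAI_API_KEY=sk-...` (value elided).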
### 2.1 Chat
The most basic interface for getting a complete response from the model:
```python
from bridgic.core.model.types import Message, Role

# Create messages
messages = [
    Message.from_text("You are a helpful assistant.", role=Role.SYSTEM),
    Message.from_text("What is the capital of France?", role=Role.USER),
]

# Get response
response = llm.chat(
    messages=messages,
    model="gpt-4o",
    temperature=0.7,
)
print(response.message.content)
```
The capital of France is Paris.
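Because `llm.chat` returns the full assistant message, multi-turn conversations are just a matter of growing the message list. A minimal sketch, assuming `response.message` is a `Message` instance that can be placed back into the history (only `response.message.content` is demonstrated above):

```python
# Extend the history with the assistant reply and a follow-up question,
# without mutating the original `messages` list.
followup_messages = messages + [
    response.message,  # assumed to be a Message instance
    Message.from_text("And what is its population?", role=Role.USER),
]

followup = llm.chat(messages=followup_messages, model="gpt-4o", temperature=0.7)
print(followup.message.content)
```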
### 2.2 Streaming
For real-time response generation:
```python
# Stream response chunks
for chunk in llm.stream(messages=messages, model="gpt-4o"):
    print(chunk.delta, end="|", flush=True)  # Print each chunk as it arrives
```
The| capital| of| France| is| Paris|.|
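The `|` separator above is only there to make chunk boundaries visible. In practice you would typically collect the deltas and join them into the final text, using the same `stream` and `chunk.delta` interface shown above:

```python
# Collect streamed deltas into the complete response text
parts = []
for chunk in llm.stream(messages=messages, model="gpt-4o"):
    parts.append(chunk.delta)

full_text = "".join(parts)
print(full_text)  # e.g. "The capital of France is Paris."
```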
## 3. Advanced Protocols
Advanced interfaces are provided through optional protocols that providers can implement based on their capabilities.
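Because these capabilities are optional, it can be useful to check at runtime whether a given provider implements a protocol before calling it. The check below is a sketch: it assumes the protocol classes are exported from `bridgic.core.model.protocols` (where this page imports `PydanticModel` and `JsonSchema` from) and are runtime-checkable, neither of which is confirmed here.

```python
# Hypothetical capability check; import path and runtime-checkability
# of the protocols are assumptions.
from bridgic.core.model.protocols import StructuredOutput, ToolSelection

if isinstance(llm, StructuredOutput):
    print("Provider supports structured output")
if isinstance(llm, ToolSelection):
    print("Provider supports tool selection")
```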
### 3.1 Structured Output (StructuredOutput Protocol)
Generate outputs that conform to specific schemas or formats:
```python
from pydantic import BaseModel, Field

from bridgic.core.model.protocols import PydanticModel, JsonSchema

# Option 1: Using Pydantic Models
class MathProblemSolution(BaseModel):
    """Solution to a math problem with reasoning"""
    reasoning: str = Field(description="Step-by-step reasoning")
    answer: int = Field(description="Final numerical answer")

messages = [
    Message.from_text("What is 15 * 23?", role=Role.USER)
]

# Get structured output
solution = llm.structured_output(
    messages=messages,
    constraint=PydanticModel(model=MathProblemSolution),
    model="gpt-4o",
)
print(f"REASONING:\n\n{solution.reasoning}\n")
print(f"ANSWER: {solution.answer}\n")
```
REASONING:

15 multiplied by 23 can be broken down into smaller, more manageable calculations using the distributive property of multiplication. Here's how:

1. **Break down 23:**
    - 23 can be expressed as 20 + 3.
2. **Apply the distributive property:**
    - 15 * 23 = 15 * (20 + 3)
    - According to the distributive property, this can be expanded to:
    - 15 * 20 + 15 * 3
3. **Calculate the individual products:**
    - **15 * 20**
        - 15 * 2 = 30
        - Append a zero (since you are multiplying by 20, which is 10 times 2):
        - 15 * 20 = 300
    - **15 * 3**
        - 15 * 3 = 45
4. **Add the two results together:**
    - 300 + 45 = 345

Thus, using the distributive property and breaking down the numbers into simpler parts, we find that 15 multiplied by 23 equals 345.

ANSWER: 345
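The import above also pulls in `JsonSchema`, which suggests a second constraint type for raw JSON Schemas. The sketch below is an assumption: only the `PydanticModel` constraint is demonstrated on this page, and the `JsonSchema` constructor signature (a `schema` keyword) is hypothetical.

```python
# Option 2 (hypothetical): constrain output with a raw JSON Schema.
# The `schema=` keyword is an assumption; only PydanticModel is shown above.
schema = {
    "type": "object",
    "properties": {
        "reasoning": {"type": "string"},
        "answer": {"type": "integer"},
    },
    "required": ["reasoning", "answer"],
}

result = llm.structured_output(
    messages=messages,
    constraint=JsonSchema(schema=schema),
    model="gpt-4o",
)
print(result)
```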
### 3.2 Tool Selection (ToolSelection Protocol)
Enable models to select and use tools (function calling):
```python
from bridgic.core.model.types import Tool

# Define available tools
tools = [
    Tool(
        name="get_weather",
        description="Get the current weather for a location",
        parameters={
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name, e.g., 'San Francisco, CA'"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature unit"
                }
            },
            "required": ["location"]
        }
    ),
    Tool(
        name="calculate",
        description="Perform mathematical calculations",
        parameters={
            "type": "object",
            "properties": {
                "expression": {
                    "type": "string",
                    "description": "Mathematical expression to evaluate"
                }
            },
            "required": ["expression"]
        }
    )
]

# Model selects appropriate tool
messages = [
    Message.from_text("What's the weather like in Paris?", role=Role.USER)
]

tool_calls, content = llm.select_tool(
    messages=messages,
    tools=tools,
    model="gpt-4o",
    tool_choice="auto",
)

# Process tool calls
for tool_call in tool_calls:
    print(f"Tool: {tool_call.name}")
    print(f"Arguments: {tool_call.arguments}")
    print(f"Call ID: {tool_call.id}")
```
Tool: get_weather
Arguments: {'location': 'Paris'}
Call ID: call_aLv7xon4zhsNVMcnLmxsGJ3v
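A tool call is only a selection; executing it is up to the caller. One common pattern is a dispatch table from tool names to local handlers, unpacking `tool_call.arguments` (a dict, as the output above shows) into the call. The handler bodies here are illustrative stubs, not part of Bridgic:

```python
# Map tool names to local implementations (illustrative stubs).
def get_weather(location: str, unit: str = "celsius") -> str:
    return f"18 degrees {unit} and cloudy in {location}"  # stub response

def calculate(expression: str) -> str:
    return str(eval(expression))  # demo only; never eval untrusted input

handlers = {"get_weather": get_weather, "calculate": calculate}

# Execute each selected tool call and print the result.
for tool_call in tool_calls:
    result = handlers[tool_call.name](**tool_call.arguments)
    print(f"{tool_call.name} -> {result}")
```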