# Dynamic Topology
In the previous section of this tutorial, we learned how to use the `ferry_to()` API to implement dynamic routing. That capability lets us build branching and looping logic, forming the foundation for handling dynamic behavior driven by runtime inputs. However, once we take into account the highly autonomous planning capabilities of LLMs, the dynamic routing that `ferry_to()` alone provides is no longer sufficient.
In order to support highly autonomous AI applications, the orchestration of workers in Bridgic is built on a Dynamic Directed Graph (DDG), whose topology can change at runtime. This DDG-based architecture is especially useful in scenarios where the execution path planned by an LLM cannot be predetermined at coding time. It provides a greater degree of flexibility than the routing mechanism described earlier.
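Before turning to Bridgic's APIs, the core idea can be sketched in a few lines of plain Python (this is an illustration, not Bridgic code): a scheduler drains a work queue, while any running node may register new nodes that the scheduler then picks up.

```python
from collections import deque
from typing import Callable, Dict, List

class DynamicGraph:
    """A toy dynamic graph: nodes may be added while the graph is running."""

    def __init__(self) -> None:
        self._nodes: Dict[str, Callable[["DynamicGraph"], None]] = {}
        self._queue: deque = deque()
        self.trace: List[str] = []

    def add_node(self, key: str, handler: Callable[["DynamicGraph"], None]) -> None:
        # New nodes can be registered at any time, including mid-run.
        self._nodes[key] = handler
        self._queue.append(key)

    def run(self, start_key: str, start_handler: Callable[["DynamicGraph"], None]) -> None:
        self.add_node(start_key, start_handler)
        while self._queue:
            key = self._queue.popleft()
            self.trace.append(key)
            self._nodes[key](self)  # the handler may itself call add_node()

def planner(g: DynamicGraph) -> None:
    # Decided "at runtime": spawn two nodes the graph did not know about upfront.
    for name in ("tool_a", "tool_b"):
        g.add_node(name, lambda g: None)

graph = DynamicGraph()
graph.run("planner", planner)
print(graph.trace)  # the two tool nodes were scheduled dynamically
```

Bridgic's DDG is, of course, far more capable (dependencies, argument mapping, concurrency), but the runtime-growth principle is the same.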
## Example: Tool Selection
Most LLMs support tool selection and invocation — a crucial step of a typical agent loop. In the following example, we’ll demonstrate the key process of tool selection through a Travel Planning Agent, and use Bridgic’s dynamic topology to implement tool calling.
> **Note:** This code example is for demonstration purposes only. It represents part of the overall execution flow within a complete agent loop. If you intend to use tool calling and the agent loop in production, please use the `ReActAutoma` class provided by the Bridgic framework.
Run the following pip commands to make sure the `openai` integration is installed.

```shell
pip install -U bridgic
pip install -U bridgic-llms-openai
```
## 1. Initialization
Before we start, let's initialize the OpenAI LLM instance and the running environment.
```python
import os

# Get the API base, API key and model name from the environment.
_api_key = os.environ.get("OPENAI_API_KEY")
_api_base = os.environ.get("OPENAI_API_BASE")
_model_name = os.environ.get("OPENAI_MODEL_NAME")

from bridgic.llms.openai import OpenAILlm, OpenAIConfiguration

llm = OpenAILlm(  # the LLM instance
    api_base=_api_base,
    api_key=_api_key,
    configuration=OpenAIConfiguration(model=_model_name),
    timeout=20,
)
```
## 2. Preparing Tools
In the travel-planning example, we need to provide several tools for the LLM to call. The following code defines these tools as functions.
```python
from bridgic.core.agentic.tool_specs import FunctionToolSpec

# Three mock tools defined as async functions.
async def get_weather(city: str, days: int):
    """
    Get the weather forecast for the next few days in a specified city.

    Parameters
    ----------
    city : str
        The city to get the weather of, e.g. New York.
    days : int
        The number of days to get the weather forecast for.

    Returns
    -------
    str
        The weather forecast for the next few days in the specified city.
    """
    return f"The weather in {city} will be mostly sunny for the next {days} days."

async def get_flight_price(origin_city: str, destination_city: str):
    """
    Get the average round-trip flight price from one city to another.

    Parameters
    ----------
    origin_city : str
        The origin city of the flight.
    destination_city : str
        The destination city of the flight.

    Returns
    -------
    str
        The average round-trip flight price from the origin city to the destination city.
    """
    return f"The average round-trip flight from {origin_city} to {destination_city} is about $850."

async def get_hotel_price(city: str, nights: int):
    """
    Get the average price of a hotel stay in a specified city for a given number of nights.

    Parameters
    ----------
    city : str
        The city to get the hotel price of, e.g. New York.
    nights : int
        The number of nights to get the hotel price for.

    Returns
    -------
    str
        The average price of a hotel stay in the specified city for the given number of nights.
    """
    return f"A 3-star hotel in {city} costs about $120 per night for {nights} nights."

funcs = [get_weather, get_flight_price, get_hotel_price]
tool_list = [FunctionToolSpec.from_raw(func) for func in funcs]
```
In the code above, three tools are defined. The docstring of each tool provides important information that will serve as the tool description presented to the LLM. Each tool is transformed into a `FunctionToolSpec` instance, and the three specs are stored in the `tool_list` variable for later use.
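Presumably, `FunctionToolSpec.from_raw` inspects each function's signature and docstring to build the description that the LLM sees. The following standard-library sketch shows the general idea; it is an illustration of the technique, not Bridgic's actual implementation:

```python
import inspect
from typing import get_type_hints

def describe_tool(func) -> dict:
    """Build a minimal, OpenAI-style tool description from a Python function."""
    hints = get_type_hints(func)
    hints.pop("return", None)
    # Map Python annotations to JSON-schema type names (default to "string").
    params = {
        name: {"type": {str: "string", int: "integer", float: "number"}.get(tp, "string")}
        for name, tp in hints.items()
    }
    doc_lines = (inspect.getdoc(func) or "").splitlines()
    return {
        "name": func.__name__,
        # The first docstring line serves as the tool description.
        "description": doc_lines[0] if doc_lines else "",
        "parameters": {"type": "object", "properties": params, "required": list(params)},
    }

async def get_weather(city: str, days: int):
    """Get the weather forecast for the next few days in a specified city."""
    ...

spec = describe_tool(get_weather)
print(spec["name"])
print(spec["parameters"]["properties"])
```

The `describe_tool` helper and its exact output shape are hypothetical; the real spec class likely parses the full parameter documentation as well.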
## 3. Orchestration
This demo consists of four steps:
- **Invoke the LLM**: Pass the list of available tools to the LLM and obtain its `tool_calls` output.
- **Create workers dynamically**: Dynamically create workers based on the `tool_calls` results.
- **Invoke tools**: Let the Bridgic framework automatically schedule and execute those workers that represent the tools.
- **Aggregate results**: Combine the execution results into a list of `ToolMessage` objects, which may later be fed into the LLM for further processing.
We implement these steps by subclassing GraphAutoma:
```python
from typing import List, Tuple, Any

from bridgic.core.automa import GraphAutoma, worker
from bridgic.core.automa.args import From, ArgsMappingRule
from bridgic.core.agentic.tool_specs import ToolSpec
from bridgic.core.agentic.types import ToolMessage
from bridgic.core.model.types import Message, Role, ToolCall

class TravelPlanner(GraphAutoma):
    @worker(is_start=True)
    async def invoke_llm(self, user_input: str, tool_list: List[ToolSpec]):
        tool_calls, _ = await llm.aselect_tool(
            messages=[
                Message.from_text(
                    text="You are an intelligent AI assistant that can perform tasks by calling available tools.",
                    role=Role.SYSTEM,
                ),
                Message.from_text(text=user_input, role=Role.USER),
            ],
            tools=[tool.to_tool() for tool in tool_list],
        )
        print(f"[invoke_llm] - LLM returns tool_calls: {tool_calls}")
        return tool_calls

    @worker(dependencies=["invoke_llm"])
    async def process_tool_calls(
        self,
        tool_calls: List[ToolCall],
        tool_list: List[ToolSpec],
    ):
        matched_list = self._match_tool_calls_and_tool_specs(tool_calls, tool_list)
        matched_tool_calls = []
        tool_worker_keys = []
        for tool_call, tool_spec in matched_list:
            matched_tool_calls.append(tool_call)
            tool_worker = tool_spec.create_worker()
            worker_key = f"tool_{tool_call.name}_{tool_call.id}"
            print(f"[process_tool_calls] - add worker: {worker_key}")
            self.add_worker(
                key=worker_key,
                worker=tool_worker,
            )
            self.ferry_to(worker_key, **tool_call.arguments)
            tool_worker_keys.append(worker_key)
        self.add_func_as_worker(
            key="aggregate_results",
            func=self.aggregate_results,
            dependencies=tool_worker_keys,
            args_mapping_rule=ArgsMappingRule.MERGE,
        )
        return matched_tool_calls

    async def aggregate_results(
        self,
        tool_results: List[Any],
        tool_calls: List[ToolCall] = From("process_tool_calls"),
    ) -> List[ToolMessage]:
        print(f"[aggregate_results] - tool execution results: {tool_results}")
        tool_messages = []
        for tool_result, tool_call in zip(tool_results, tool_calls):
            tool_messages.append(ToolMessage(
                role="tool",
                content=str(tool_result),
                tool_call_id=tool_call.id,
            ))
        # `tool_messages` may be used as the inputs of the next LLM call...
        print(f"[aggregate_results] - assembled ToolMessage list: {tool_messages}")
        return tool_messages

    def _match_tool_calls_and_tool_specs(
        self,
        tool_calls: List[ToolCall],
        tool_list: List[ToolSpec],
    ) -> List[Tuple[ToolCall, ToolSpec]]:
        matched_list: List[Tuple[ToolCall, ToolSpec]] = []
        for tool_call in tool_calls:
            for tool_spec in tool_list:
                if tool_call.name == tool_spec.tool_name:
                    matched_list.append((tool_call, tool_spec))
        return matched_list
```
In the start worker `invoke_llm`, the LLM is invoked and returns a list of `ToolCall` objects. Which tools appear in this list, and with which arguments, is decided by the LLM at runtime rather than at coding time.
In the second worker `process_tool_calls`, based on this dynamic list of `tool_calls`, a worker is created (through `tool_spec.create_worker()`) for each tool to be invoked and added to the DDG. Then the `aggregate_results` worker is also dynamically added to the DDG via the `add_func_as_worker()` API; it is responsible for aggregating the execution results from all the tool workers.
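A side note on the helper `_match_tool_calls_and_tool_specs`: it pairs each tool call with its spec via a nested loop, which is O(n·m). With a larger tool list, indexing the specs by name once is a common alternative. A minimal sketch, using stand-in dataclasses in place of Bridgic's `ToolCall`/`ToolSpec` types (only the `name`/`tool_name` fields from the code above are assumed):

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Stand-ins for Bridgic's ToolCall / ToolSpec, reduced to the fields used here.
@dataclass
class ToolCall:
    id: str
    name: str

@dataclass
class ToolSpec:
    tool_name: str

def match_tool_calls(
    calls: List[ToolCall], specs: List[ToolSpec]
) -> List[Tuple[ToolCall, ToolSpec]]:
    # Index the specs once by name, then match each call via an O(1) lookup.
    by_name: Dict[str, ToolSpec] = {s.tool_name: s for s in specs}
    return [(c, by_name[c.name]) for c in calls if c.name in by_name]

specs = [ToolSpec("get_weather"), ToolSpec("get_flight_price")]
calls = [ToolCall("call_1", "get_flight_price"), ToolCall("call_2", "unknown_tool")]
matched = match_tool_calls(calls, specs)
print([(c.id, s.tool_name) for c, s in matched])  # unknown tools are silently skipped
```

Like the original helper, this silently drops tool calls that match no known spec; a production agent loop would likely want to report those back to the LLM instead.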
It is worth noting that running each tool as a separate worker lets us fully leverage certain features of the Bridgic framework, such as Concurrency Mode: since the tool workers do not depend on one another, they can execute concurrently.
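The effect can be illustrated outside the framework: independent async tools, like the mock ones above, can be awaited concurrently instead of one after another. A plain-`asyncio` sketch (Bridgic's scheduler is assumed to do something similar for independent workers):

```python
import asyncio

async def get_weather(city: str, days: int) -> str:
    await asyncio.sleep(0.1)  # simulate I/O latency
    return f"The weather in {city} will be mostly sunny for the next {days} days."

async def get_flight_price(origin_city: str, destination_city: str) -> str:
    await asyncio.sleep(0.1)
    return f"The average round-trip flight from {origin_city} to {destination_city} is about $850."

async def main() -> list:
    # Both calls sleep 0.1s; run concurrently, the total is ~0.1s, not ~0.2s.
    return await asyncio.gather(
        get_weather("Tokyo", 3),
        get_flight_price("San Francisco", "Tokyo"),
    )

results = asyncio.run(main())
print(results)
```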
## 4. Let's run it
Let's create an instance of `TravelPlanner` and run it.
```python
agent = TravelPlanner()
await agent.arun(
    user_input="Plan a 3-day trip to Tokyo. Check the weather forecast, estimate the flight price from San Francisco, and the hotel cost for 3 nights.",
    tool_list=tool_list,
)
```
```text
[invoke_llm] - LLM returns tool_calls: [ToolCall(id='call_cLERxyz110tylRxgE4XQjaRQ', name='get_weather', arguments={'city': 'Tokyo', 'days': 3}), ToolCall(id='call_CqicPm6yZoyNksEl9HGVJEOQ', name='get_flight_price', arguments={'origin_city': 'San Francisco', 'destination_city': 'Tokyo'}), ToolCall(id='call_GscwR3pvHtzR2wTki1ndpHZp', name='get_hotel_price', arguments={'city': 'Tokyo', 'nights': 3})]
[process_tool_calls] - add worker: tool_get_weather_call_cLERxyz110tylRxgE4XQjaRQ
[process_tool_calls] - add worker: tool_get_flight_price_call_CqicPm6yZoyNksEl9HGVJEOQ
[process_tool_calls] - add worker: tool_get_hotel_price_call_GscwR3pvHtzR2wTki1ndpHZp
[aggregate_results] - tool execution results: ['The weather in Tokyo will be mostly sunny for the next 3 days.', 'The average round-trip flight from San Francisco to Tokyo is about $850.', 'A 3-star hotel in Tokyo costs about $120 per night for 3 nights.']
[aggregate_results] - assembled ToolMessage list: [{'role': 'tool', 'content': 'The weather in Tokyo will be mostly sunny for the next 3 days.', 'tool_call_id': 'call_cLERxyz110tylRxgE4XQjaRQ'}, {'role': 'tool', 'content': 'The average round-trip flight from San Francisco to Tokyo is about $850.', 'tool_call_id': 'call_CqicPm6yZoyNksEl9HGVJEOQ'}, {'role': 'tool', 'content': 'A 3-star hotel in Tokyo costs about $120 per night for 3 nights.', 'tool_call_id': 'call_GscwR3pvHtzR2wTki1ndpHZp'}]
```
## What have we learnt?
In this Travel Planning Agent example, we demonstrated how to use Bridgic's dynamic topology mechanism to create workers for tools at runtime. The `GraphAutoma` class is implemented as a Dynamic Directed Graph (DDG) in Bridgic in order to support topology changes at runtime. The APIs that support dynamic topology changes include `add_worker`, `add_func_as_worker`, `remove_worker`, and `add_dependency`.
You might notice that interspersing these API calls within the worker implementation code can look a bit untidy. We plan to address this issue with a new feature in the near future.