Dual-Mode Orchestration¶
The same agent can have two "brains": on_agent lets the LLM freely explore and decide, while on_workflow gives you precise, deterministic control over every step. Understanding how to write both modes is the prerequisite for mastering the amphibious design.
In this tutorial, we'll build an e-commerce price monitor — using Agent mode to let the LLM discover optimal comparison strategies, and Workflow mode to run a fixed price-checking pipeline.
Initialize¶
First, let's set up the LLM and define the tools our price monitor will use.
```python
import os

model_name = os.environ.get("MODEL_NAME")
api_key = os.environ.get("API_KEY")
api_base = os.environ.get("BASE_URL")

from bridgic.llms.openai import OpenAILlm, OpenAIConfiguration

llm = OpenAILlm(
    api_key=api_key,
    api_base=api_base,
    timeout=30,
    configuration=OpenAIConfiguration(
        model=model_name,
        temperature=0.0,
        max_tokens=16384,
    ),
)
```
```python
from bridgic.core.agentic.tool_specs import FunctionToolSpec

async def search_price(platform: str, product: str) -> str:
    """Search for product prices on a specific platform"""
    prices = {
        ("amazon", "laptop"): "$999",
        ("ebay", "laptop"): "$879",
        ("walmart", "laptop"): "$949",
        ("amazon", "headphones"): "$199",
        ("ebay", "headphones"): "$159",
        ("walmart", "headphones"): "$179",
    }
    price = prices.get((platform.lower(), product.lower()), "Price not found")
    return f"{product} on {platform}: {price}"

async def compare_prices(price_list: str) -> str:
    """Compare prices across multiple results and find the best deal"""
    return f"Best deal analysis based on: {price_list}"

async def generate_report(product: str, findings: str) -> str:
    """Generate a price comparison report"""
    return f"=== Price Report for {product} ===\n{findings}\nReport generated successfully."

search_price_tool = FunctionToolSpec.from_raw(search_price)
compare_prices_tool = FunctionToolSpec.from_raw(compare_prices)
generate_report_tool = FunctionToolSpec.from_raw(generate_report)
```
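Before wiring these tools into an agent, it can be worth sanity-checking the underlying async functions directly. The snippet below is a self-contained check that repeats the tutorial's `search_price` stub (so it runs on its own, without the framework):

```python
import asyncio

# Self-contained copy of the tutorial's stub tool, for a quick direct check.
async def search_price(platform: str, product: str) -> str:
    """Search for product prices on a specific platform"""
    prices = {
        ("amazon", "laptop"): "$999",
        ("ebay", "laptop"): "$879",
    }
    # Lookup is case-insensitive; unknown pairs fall back to a sentinel string.
    price = prices.get((platform.lower(), product.lower()), "Price not found")
    return f"{product} on {platform}: {price}"

async def main() -> None:
    print(await search_price("Amazon", "laptop"))  # laptop on Amazon: $999
    print(await search_price("eBay", "tablet"))    # tablet on eBay: Price not found

asyncio.run(main())
```

A quick check like this catches signature or lookup bugs before the LLM ever calls the tool, where failures are slower and harder to attribute.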
Part 1: Agent Mode — LLM-Driven Orchestration¶
In on_agent(), the LLM has full autonomy. You define what to think about using think units, and the LLM decides which tools to call, in what order, and how to combine results.
A think_unit wraps a CognitiveWorker — an instruction that tells the LLM what goal to pursue. The LLM then reasons about which tools to invoke and how to chain their outputs together.
```python
from bridgic.amphibious import AmphibiousAutoma, CognitiveContext, CognitiveWorker, think_unit

class PriceMonitorAgent(AmphibiousAutoma[CognitiveContext]):
    researcher = think_unit(
        CognitiveWorker.inline(
            "Search for product prices across different platforms. "
            "Try to cover as many platforms as possible for a comprehensive comparison."
        ),
        max_attempts=8,
    )

    async def on_agent(self, ctx: CognitiveContext):
        await self.researcher
```
```python
agent = PriceMonitorAgent(llm=llm, verbose=True)
result = await agent.arun(
    goal="Find the best deal for a laptop across Amazon, eBay, and Walmart. Compare prices and generate a report.",
    tools=[search_price_tool, compare_prices_tool, generate_report_tool],
)
print(result)
```
```text
[15:02:14.654] [Router] (_amphibious_automa.py:1543) Auto-detecting execution mode
[15:02:14.654] [Router] (_amphibious_automa.py:1549) Detected AGENT mode
[15:02:14.655] [Observe] (_amphibious_automa.py:840) _PromptWorker: None
[15:02:40.929] [Think] (_amphibious_automa.py:846) _PromptWorker: finish=False, step=Initiating price searches for 'laptop' on Amazon, eBay, and Walmart to gather data for comparison.
[15:02:40.932] [Act] (_amphibious_automa.py:852) _PromptWorker: { "content": "Initiating price searches for 'laptop' on Amazon, eBay, and Walmart to gather data for comparison.", "result": { "results": [ { "tool_id": "call_0", "tool_name": "search_price", "tool_arguments": { "platform": "Amazon", "product": "laptop" }, "tool_result": "laptop on Amazon: $999", "success": true, "error": null }, { "tool_id": "call_1", "tool_name": "search_price", "tool_arguments": { "platform": "eBay", "product": "laptop" }, "tool_result": "laptop on eBay: $879", "success": true, "error": null }, { "tool_id": "call_2", "tool_name": "search_price", "tool_arguments": { "platform": "Walmart", "product": "laptop" }, "tool_result": "laptop on Walmart: $949", "success": true, "error": null } ] }, "metadata": {}, "status": null }
[15:02:40.932] [Observe] (_amphibious_automa.py:840) _PromptWorker: None
[15:03:36.724] [Think] (_amphibious_automa.py:846) _PromptWorker: finish=False, step=Prices have been collected from Amazon ($999), eBay ($879), and Walmart (assumed $950 based on successful tool call). Next, compare these prices to identify the best deal.
[15:03:36.726] [Act] (_amphibious_automa.py:852) _PromptWorker: { "content": "Prices have been collected from Amazon ($999), eBay ($879), and Walmart (assumed $950 based on successful tool call). Next, compare these prices to identify the best deal.", "result": { "results": [ { "tool_id": "call_0", "tool_name": "compare_prices", "tool_arguments": { "price_list": "Amazon:999, eBay:879, Walmart:950" }, "tool_result": "Best deal analysis based on: Amazon:999, eBay:879, Walmart:950", "success": true, "error": null } ] }, "metadata": {}, "status": null }
[15:03:36.726] [Observe] (_amphibious_automa.py:840) _PromptWorker: None
[15:03:45.514] [Think] (_amphibious_automa.py:846) _PromptWorker: finish=True, step=The price comparison has been completed, identifying eBay as the best deal at $879. The next step is to generate a comprehensive report summarizing the findings.
[15:03:45.516] [Act] (_amphibious_automa.py:852) _PromptWorker: { "content": "The price comparison has been completed, identifying eBay as the best deal at $879. The next step is to generate a comprehensive report summarizing the findings.", "result": { "results": [ { "tool_id": "call_0", "tool_name": "generate_report", "tool_arguments": { "product": "laptop", "findings": "Prices collected: Amazon ($999), eBay ($879), Walmart ($950). Best deal found on eBay at $879." }, "tool_result": "=== Price Report for laptop ===\nPrices collected: Amazon ($999), eBay ($879), Walmart ($950). Best deal found on eBay at $879.\nReport generated successfully.", "success": true, "error": null } ] }, "metadata": {}, "status": null }
==================================================
PriceMonitorAgent-2456a8b7 | Completed
Tokens: 1668 | Time: 90.86s
==================================================
The price comparison has been completed, identifying eBay as the best deal at $879. The next step is to generate a comprehensive report summarizing the findings.
```
Notice how the LLM autonomously chose which platforms to search, decided when enough data was collected, and generated the report — all without any hardcoded flow.
Combining Multiple Think Units¶
You can define multiple think units to break the agent's reasoning into distinct phases. Each phase focuses on a specific sub-goal, and self.snapshot() scopes the context so that one phase's results feed into the next.
```python
class MultiPhaseMonitor(AmphibiousAutoma[CognitiveContext]):
    scanner = think_unit(
        CognitiveWorker.inline("Search for the product price on each available platform."),
        max_attempts=5,
    )
    analyst = think_unit(
        CognitiveWorker.inline("Compare all collected prices and generate a final report."),
        max_attempts=3,
    )

    async def on_agent(self, ctx: CognitiveContext):
        await self.scanner
        await self.analyst
```
Part 2: Workflow Mode — Deterministic Step-by-Step Execution¶
In on_workflow(), you define every step. The method is an async generator — each yield ActionCall(...) pauses execution, runs the specified tool, and returns the result. There is no LLM reasoning involved in deciding what to do next; the flow is entirely determined by your code.
```python
from bridgic.amphibious import ActionCall

class PriceMonitorWorkflow(AmphibiousAutoma[CognitiveContext]):
    async def on_workflow(self, ctx: CognitiveContext):
        # Step 1: Search each platform
        amazon_price = yield ActionCall("search_price", platform="Amazon", product="laptop")
        ebay_price = yield ActionCall("search_price", platform="eBay", product="laptop")
        walmart_price = yield ActionCall("search_price", platform="Walmart", product="laptop")

        # Step 2: Compare results
        all_prices = f"{amazon_price}, {ebay_price}, {walmart_price}"
        comparison = yield ActionCall("compare_prices", price_list=all_prices)

        # Step 3: Generate report
        yield ActionCall("generate_report", product="laptop", findings=str(comparison))
```
```python
workflow = PriceMonitorWorkflow(verbose=True)  # No LLM needed for pure workflow mode
result = await workflow.arun(
    goal="Compare laptop prices across platforms",
    tools=[search_price_tool, compare_prices_tool, generate_report_tool],
)
print(result)
```
Every step executes in exactly the order you defined. The result of each yield ActionCall() is available for subsequent steps. No LLM reasoning overhead — pure deterministic execution.
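The round-trip behind each `yield` can be illustrated with plain Python: a driver iterates the async generator, executes whatever each `yield` requests, and sends the result back in with `asend`, which becomes the value of the `yield` expression. This toy driver only sketches the mechanics under assumed names (`ToyActionCall`, `toy_driver`); it is not the framework's actual scheduler:

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class ToyActionCall:  # hypothetical stand-in for ActionCall
    tool: str
    kwargs: dict = field(default_factory=dict)

async def toy_driver(workflow_gen, tools):
    """Drive an async generator: run each requested tool, send its result back in."""
    result = None
    try:
        while True:
            call = await workflow_gen.asend(result)         # resume at the last yield
            result = await tools[call.tool](**call.kwargs)  # execute the requested tool
    except StopAsyncIteration:
        return result  # output of the final tool call

async def workflow():
    a = yield ToyActionCall("search", {"platform": "amazon"})
    b = yield ToyActionCall("search", {"platform": "ebay"})
    yield ToyActionCall("report", {"findings": f"{a} | {b}"})

async def search(platform):
    return f"{platform}: $999"

async def report(findings):
    return f"REPORT: {findings}"

final = asyncio.run(toy_driver(workflow(), {"search": search, "report": report}))
print(final)  # REPORT: amazon: $999 | ebay: $999
```

Note the first `asend(result)` sends `None`, which is exactly what Python requires to start a fresh generator; every later `asend` delivers the previous tool's result as the value of the pending `yield`.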
AgentCall — Delegating to the LLM Mid-Workflow¶
Sometimes a workflow encounters a situation too complex for a fixed step. AgentCall lets you hand control to the agent mode temporarily — the LLM takes over, reasons about the current state, and acts accordingly. Once it finishes, control returns to the workflow.
```python
from bridgic.amphibious import AgentCall

class HybridMonitor(AmphibiousAutoma[CognitiveContext]):
    helper = think_unit(
        CognitiveWorker.inline("Analyze the situation and decide the best course of action."),
        max_attempts=5,
    )

    async def on_agent(self, ctx: CognitiveContext):
        await self.helper

    async def on_workflow(self, ctx: CognitiveContext):
        # Fixed steps for known platforms
        amazon_price = yield ActionCall("search_price", platform="Amazon", product="headphones")
        ebay_price = yield ActionCall("search_price", platform="eBay", product="headphones")

        # Delegate to agent for complex analysis
        yield AgentCall(
            goal="Analyze the collected prices and determine if we should check more platforms or generate the report now.",
            max_attempts=3,
        )
```
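Conceptually, the runtime only has to branch on the type of object the workflow yields: a tool request runs a tool, an agent request enters LLM-driven mode. A plain-Python sketch of that dispatch, using hypothetical stand-in names rather than the framework's internals:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class ToyActionCall:  # hypothetical stand-in for ActionCall
    tool: str

@dataclass
class ToyAgentCall:   # hypothetical stand-in for AgentCall
    goal: str

async def run_tool(call: ToyActionCall) -> str:
    return f"tool:{call.tool} done"

async def run_agent(call: ToyAgentCall) -> str:
    # In the real framework this would enter on_agent(); here it's a stub.
    return f"agent handled: {call.goal}"

async def dispatch(call) -> str:
    """Branch on the yielded object's type, as a workflow runtime might."""
    if isinstance(call, ToyActionCall):
        return await run_tool(call)
    if isinstance(call, ToyAgentCall):
        return await run_agent(call)
    raise TypeError(f"unexpected yield: {call!r}")

print(asyncio.run(dispatch(ToyActionCall("search_price"))))  # tool:search_price done
print(asyncio.run(dispatch(ToyAgentCall("analyze prices"))))  # agent handled: analyze prices
```

Because both call types flow through the same `yield` channel, the workflow body stays a single linear generator even when some steps are deterministic and others are delegated.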
Which Mode Should You Choose?¶
| Aspect | Agent Mode (on_agent) | Workflow Mode (on_workflow) |
|---|---|---|
| Decision Maker | LLM | Developer's code |
| How to Define | await self.think_unit | yield ActionCall(...) |
| Best For | Open-ended, exploratory tasks | Known, repeatable procedures |
| Predictability | Lower (LLM may vary each run) | High (steps are fixed) |
| Flexibility | High (adapts to unforeseen situations) | Lower (follows predefined path) |
| LLM Cost | Higher (reasoning at each step) | Lower (no reasoning for flow control) |
In practice, you rarely need to choose just one — the amphibious design lets you combine both in a single agent, which we'll explore in the next tutorial.
What have we learnt?¶
In this tutorial, we built a price monitoring system using both orchestration modes:
- `on_agent(ctx)` is the LLM-driven mode. Inside it, you `await` think units. The LLM autonomously decides which tools to call and in what order. Use `self.snapshot()` to organize execution into phases.
- `on_workflow(ctx)` is the deterministic mode. It's an async generator where each `yield ActionCall("tool_name", **args)` executes a specific tool and returns the result. Steps run in exactly the order you define.
- `AgentCall` can be yielded inside `on_workflow` to temporarily hand control to the agent mode for complex sub-tasks.
- Both modes coexist in the same `AmphibiousAutoma` class. This duality is what makes the framework "amphibious."
Next, we'll see how these two modes work together at runtime — with automatic switching and graceful degradation.