# Your First Amphibious Agent
Build your first agent in 5 minutes. In this tutorial, you'll complete the same task using both Agent mode (the LLM decides what to do) and Workflow mode (you decide what to do), and see how the same framework supports both paradigms.
## Practical Scenario
We'll build a simple "weather information assistant" that looks up weather for cities.
- In Agent mode, the LLM autonomously decides which cities to check and how to summarize the results.
- In Workflow mode, you define the exact cities and the order in which they are queried.
Same goal, two paradigms, one framework.
## Initialize
First, let's set up the LLM and define a simple weather tool.
```python
import os

model_name = os.environ.get("MODEL_NAME")
api_key = os.environ.get("API_KEY")
api_base = os.environ.get("BASE_URL")

from bridgic.llms.openai import OpenAILlm, OpenAIConfiguration

llm = OpenAILlm(
    api_key=api_key,
    api_base=api_base,
    timeout=30,
    configuration=OpenAIConfiguration(
        model=model_name,
        temperature=0.0,
        max_tokens=16384,
    ),
)
```
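The snippet reads its configuration from three environment variables. One way to set them is shown below; the values are placeholders for whatever OpenAI-compatible endpoint you actually use, not recommendations.

```shell
# Placeholder values -- replace with your own provider's model name, key, and endpoint.
export MODEL_NAME="gpt-4o-mini"
export API_KEY="sk-..."
export BASE_URL="https://api.openai.com/v1"
```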
## Define a Tool
Let's create a simple mock weather tool that our agent can use.
```python
from bridgic.core.agentic.tool_specs import FunctionToolSpec

async def get_weather(city: str) -> str:
    """
    Mock weather lookup.
    """
    weather_data = {
        "Tokyo": "Sunny, 22°C",
        "London": "Cloudy, 15°C",
        "New York": "Rainy, 18°C",
        "Paris": "Partly cloudy, 20°C",
    }
    return weather_data.get(city, f"Weather data not available for {city}")

get_weather_tool = FunctionToolSpec.from_raw(get_weather)
```
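Since the tool is an ordinary async function, it can be sanity-checked on its own before an agent ever touches it. The function is repeated below so the snippet runs standalone, covering both the known-city path and the fallback branch:

```python
import asyncio

# Same mock tool as above, repeated so this snippet is self-contained.
async def get_weather(city: str) -> str:
    """Mock weather lookup."""
    weather_data = {
        "Tokyo": "Sunny, 22°C",
        "London": "Cloudy, 15°C",
        "New York": "Rainy, 18°C",
        "Paris": "Partly cloudy, 20°C",
    }
    return weather_data.get(city, f"Weather data not available for {city}")

# A known city hits the lookup table; an unknown one takes the fallback branch.
print(asyncio.run(get_weather("Tokyo")))   # Sunny, 22°C
print(asyncio.run(get_weather("Berlin")))  # Weather data not available for Berlin
```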
## Example 1: Agent Mode — Let the LLM Decide

In Agent mode, the LLM autonomously decides which tools to call and in what order. We define a `CognitiveWorker` (the thinking unit) and let it run inside `on_agent`.
```python
from bridgic.amphibious import AmphibiousAutoma, CognitiveContext, CognitiveWorker, think_unit

class WeatherAgentMode(AmphibiousAutoma[CognitiveContext]):
    planner = think_unit(
        CognitiveWorker.inline(
            "Look up weather information for relevant cities and provide a summary."
        ),
        max_attempts=5,
    )

    async def on_agent(self, ctx: CognitiveContext):
        await self.planner
```
Now let's run it. The LLM will decide which cities to look up.
```python
agent = WeatherAgentMode(llm=llm, verbose=True)  # verbose=True logs the running process
result = await agent.arun(
    goal="Check the weather in Tokyo and London, then give me a brief summary.",
    tools=[get_weather_tool],
)
print(result)
```
```
[11:49:02.333] [Router] (_amphibious_automa.py:1543) Auto-detecting execution mode
[11:49:02.334] [Router] (_amphibious_automa.py:1549) Detected AGENT mode
[11:49:02.335] [Observe] (_amphibious_automa.py:840) _PromptWorker: None
[11:49:06.752] [Think] (_amphibious_automa.py:846) _PromptWorker: finish=False, step=I need to check the weather for both Tokyo and London as requested. I'll make parallel weather lookup calls for both cities.
[11:49:06.755] [Act] (_amphibious_automa.py:852) _PromptWorker: {
  "content": "I need to check the weather for both Tokyo and London as requested. I'll make parallel weather lookup calls for both cities.",
  "result": {
    "results": [
      {
        "tool_id": "call_0",
        "tool_name": "get_weather",
        "tool_arguments": {"city": "Tokyo"},
        "tool_result": "Sunny, 22°C",
        "success": true,
        "error": null
      },
      {
        "tool_id": "call_1",
        "tool_name": "get_weather",
        "tool_arguments": {"city": "London"},
        "tool_result": "Cloudy, 15°C",
        "success": true,
        "error": null
      }
    ]
  },
  "metadata": {},
  "status": null
}
[11:49:06.755] [Observe] (_amphibious_automa.py:840) _PromptWorker: None
[11:49:10.740] [Think] (_amphibious_automa.py:846) _PromptWorker: finish=True, step=The weather information for both Tokyo and London has been successfully retrieved from the previous tool calls. Tokyo is sunny with 22°C, while London is cloudy with 15°C. I can now provide a summary to complete the task.
[11:49:10.741] [Act] (_amphibious_automa.py:852) _PromptWorker: {
  "content": "The weather information for both Tokyo and London has been successfully retrieved from the previous tool calls. Tokyo is sunny with 22°C, while London is cloudy with 15°C. I can now provide a summary to complete the task.",
  "result": null,
  "metadata": {"tool_calls": []},
  "status": null
}
==================================================
WeatherAgentMode-54d344ce | Completed
Tokens: 747 | Time: 8.41s
==================================================
The weather information for both Tokyo and London has been successfully retrieved from the previous tool calls. Tokyo is sunny with 22°C, while London is cloudy with 15°C. I can now provide a summary to complete the task.
```
## Accessing the Final Answer

Notice that `arun()` returned the LLM's final summary directly — not the raw context dump.

How it works:

- In Agent mode, when the LLM sets `finish=True`, the framework automatically captures that step's `step_content` as the final answer. This is the natural "conclusion" the LLM produces.
- In Workflow mode, there is no LLM reasoning to produce a summary, so `arun()` falls back to `context.summary()` by default. If you want a clean final answer, call `self.set_final_answer("...")` inside `on_workflow()`.

You can also access `agent.final_answer` after `arun()` completes, or override the auto-captured value at any time with `self.set_final_answer()`.
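To keep the precedence straight, here is a plain-Python sketch of the resolution order just described. This is an illustration of the rules, not bridgic's actual implementation: an explicit `set_final_answer()` value wins, then the `finish=True` capture from Agent mode, then the `context.summary()` fallback.

```python
# Illustration only -- not bridgic's real code. It encodes the
# final-answer precedence described in the prose above.
def resolve_final_answer(explicit, captured_on_finish, context_summary):
    if explicit is not None:            # set_final_answer() overrides everything
        return explicit
    if captured_on_finish is not None:  # Agent mode: step_content at finish=True
        return captured_on_finish
    return context_summary              # Workflow mode fallback: context.summary()

print(resolve_final_answer(None, "Tokyo is sunny.", "<raw context dump>"))
print(resolve_final_answer("Clean answer", "Tokyo is sunny.", "<raw context dump>"))
```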
## Example 2: Workflow Mode

In Workflow mode, you define every step yourself inside `on_workflow`, yielding `ActionCall` objects to execute tools deterministically.

```python
from bridgic.amphibious import ActionCall, AmphibiousAutoma, CognitiveContext

class WeatherWorkflowMode(AmphibiousAutoma[CognitiveContext]):
    async def on_workflow(self, ctx: CognitiveContext):
        tokyo_weather = yield ActionCall("get_weather", city="Tokyo")
        london_weather = yield ActionCall("get_weather", city="London")
        # Extract results and set a clean final answer
        tokyo = tokyo_weather[0].result if tokyo_weather else "N/A"
        london = london_weather[0].result if london_weather else "N/A"
        self.set_final_answer(f"Tokyo: {tokyo}, London: {london}")
```
Run the workflow — notice how steps execute in the exact order you defined.
```python
workflow = WeatherWorkflowMode()  # No LLM needed for pure workflow mode
result = await workflow.arun(
    goal="Check weather in Tokyo and London",
    tools=[get_weather_tool],
)
print(result)
```
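The `result = yield ActionCall(...)` pattern rests on Python's generator `send()` protocol: the framework receives each yielded call, runs the tool, and sends the tool's result back in as the value of the `yield` expression. A stripped-down synchronous driver (a sketch of the mechanics, not bridgic's actual engine) shows how that loop works:

```python
from dataclasses import dataclass, field

@dataclass
class ActionCall:  # simplified stand-in for bridgic's ActionCall
    tool_name: str
    kwargs: dict = field(default_factory=dict)

def run_workflow(gen, tools):
    """Drive a workflow generator: execute each yielded call, send the result back."""
    try:
        call = next(gen)               # run up to the first yield
        while True:
            result = tools[call.tool_name](**call.kwargs)
            call = gen.send(result)    # result becomes the value of the yield expression
    except StopIteration:
        pass

def weather_workflow(results):
    tokyo = yield ActionCall("get_weather", {"city": "Tokyo"})
    london = yield ActionCall("get_weather", {"city": "London"})
    results.append(f"Tokyo: {tokyo}, London: {london}")

data = {"Tokyo": "Sunny, 22°C", "London": "Cloudy, 15°C"}
out = []
run_workflow(weather_workflow(out), {"get_weather": lambda city: data.get(city, "N/A")})
print(out[0])  # Tokyo: Sunny, 22°C, London: Cloudy, 15°C
```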
## Agent Mode vs. Workflow Mode

| | Agent Mode | Workflow Mode |
|---|---|---|
| Who decides? | The LLM | You |
| Method | `on_agent` | `on_workflow` |
| Best for | Open-ended tasks | Known procedures |
| Predictability | Variable | Deterministic |
| LLM overhead | Higher (reasoning at each step) | Lower (no flow reasoning) |
- Agent Mode: The LLM decides what to do. You define what to think about, and the LLM handles the rest. Great for open-ended tasks.
- Workflow Mode: You define every step. Predictable, repeatable, no LLM reasoning overhead. Great for known procedures.
- Both modes live inside the same `AmphibiousAutoma` class — this is the foundation of the "amphibious" design.
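The "one class, two entry points" idea can be illustrated with a toy dispatcher. This is not bridgic's real mode-detection logic (the logs above show it happening inside the framework's router); it is a simplified sketch of the shape of the design, where the router prefers a deterministic workflow if the subclass defined one:

```python
# Toy illustration of the amphibious design -- NOT bridgic's real detection logic.
class ToyAutoma:
    def run(self, goal: str) -> str:
        # Prefer a deterministic workflow if the subclass overrode it.
        if type(self).on_workflow is not ToyAutoma.on_workflow:
            return self.on_workflow(goal)
        return self.on_agent(goal)

    def on_agent(self, goal: str) -> str:
        return f"agent mode: LLM plans steps for '{goal}'"

    def on_workflow(self, goal: str) -> str:
        return f"workflow mode: fixed steps for '{goal}'"

class MyWorkflow(ToyAutoma):
    def on_workflow(self, goal: str) -> str:
        return f"workflow mode: fixed steps for '{goal}'"

class MyAgent(ToyAutoma):
    def on_agent(self, goal: str) -> str:
        return f"agent mode: LLM plans steps for '{goal}'"

print(MyWorkflow().run("check weather"))  # dispatches to on_workflow
print(MyAgent().run("check weather"))     # dispatches to on_agent
```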
## What have we learnt?

In this tutorial, we built the same weather assistant using two different approaches:

- `AmphibiousAutoma` is the base class for all amphibious agents. It supports both `on_agent` (LLM-driven) and `on_workflow` (deterministic) modes.
- `CognitiveWorker` is the atomic thinking unit. Use `CognitiveWorker.inline("...")` for quick creation, or subclass it for custom behavior.
- `think_unit` is a declarative descriptor that binds a worker with execution parameters (like `max_attempts`). Awaiting the bound attribute inside `on_agent` (e.g. `await self.planner`) triggers a full observe-think-act cycle.
- `ActionCall` is used inside `on_workflow` to define deterministic tool calls. Use `result = yield ActionCall("tool_name", **args)` to execute and capture results.
- `arun()` is the main entry point. Pass `goal`, `tools`, and other configuration to start the agent.
Next, we'll dive deeper into the CognitiveWorker — the thinking atom at the heart of every amphibious agent.