MCP¶
This tutorial demonstrates how to integrate the Model Context Protocol (MCP) with Bridgic to enhance the development of your agentic applications.
Introduction¶
Model Context Protocol (MCP) enables AI applications to access external resources and tools. By integrating MCP with Bridgic, you can:
- Connect to MCP Servers: Access a wide range of external services and resources through standardized MCP servers
- Get and Use MCP Tools: Leverage tools provided by MCP servers as workers in your Bridgic workflows
- Get and Use MCP Prompts: Utilize pre-configured prompt templates from MCP servers
- Greatly Enhance Your Agentic Automa: Enable LLM-driven agents to autonomously select and use MCP tools
This tutorial walks you through the essentials of integrating MCP with Bridgic, from basic installation to advanced usage, with easy-to-follow examples.
Installation¶
First, install the bridgic-protocols-mcp package. Since the MCP Python SDK requires Python 3.12 or newer, please ensure you are using a compatible Python version before installation.
pip install bridgic-protocols-mcp
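As a quick sanity check (a minimal sketch based on the version requirement stated above), you can verify your interpreter before installing:

import sys

# bridgic-protocols-mcp relies on the MCP Python SDK, which needs Python 3.12+.
assert sys.version_info >= (3, 12), "Python 3.12 or newer is required"

With the package installed, you can connect to an MCP server over the stdio transport. The following example starts a filesystem MCP server and lists its tools: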
import os
import tempfile
from bridgic.protocols.mcp import McpServerConnectionStdio
# Create a temporary directory for the filesystem MCP server
temp_dir = os.path.realpath(tempfile.mkdtemp())
print(f"Using temporary directory: {temp_dir}")
# Create a connection to a filesystem MCP server
# Note: This requires Node.js and npx to be installed
filesystem_connection = McpServerConnectionStdio(
    name="connection-filesystem-stdio",
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", temp_dir],
)
# Establish the connection
filesystem_connection.connect()
# Verify connection
print(f"ā Connected to MCP server: {filesystem_connection.name}")
print(f" Connection status: {filesystem_connection.is_connected}")
# List available tools
tools = filesystem_connection.list_tools()
print(f"\nā Found {len(tools)} available tools:")
for tool in tools:
print(f" - {tool.tool_name}: {tool.tool_description[:50]}...")
Using temporary directory: /private/var/folders/9t/5r9fms9s5q33p6xty_0_k1mw0000gn/T/tmpeuov9ggi
✓ Connected to MCP server: connection-filesystem-stdio
 Connection status: True

✓ Found 14 available tools:
 - read_file: Read the complete contents of a file as text. DEPR...
 - read_text_file: Read the complete contents of a file from the file...
 - read_media_file: Read an image or audio file. Returns the base64 en...
 - read_multiple_files: Read the contents of multiple files simultaneously...
 - write_file: Create a new file or completely overwrite an exist...
 - edit_file: Make line-based edits to a text file. Each edit re...
 - create_directory: Create a new directory or ensure a directory exist...
 - list_directory: Get a detailed listing of all files and directorie...
 - list_directory_with_sizes: Get a detailed listing of all files and directorie...
 - directory_tree: Get a recursive tree view of files and directories...
 - move_file: Move or rename files and directories. Can move fil...
 - search_files: Recursively search for files and directories match...
 - get_file_info: Retrieve detailed metadata about a file or directo...
 - list_allowed_directories: Returns the list of directories that this server i...
You can also connect to an MCP server via the streamable HTTP transport. Below is an example of how to connect to a remote GitHub MCP server and view the tools it supports:
import os
import dotenv
from mcp.shared._httpx_utils import create_mcp_http_client
from bridgic.protocols.mcp import McpServerConnectionStreamableHttp
dotenv.load_dotenv()
github_mcp_url = os.environ.get("GITHUB_MCP_HTTP_URL", "https://api.githubcopilot.com/mcp/")
github_token = os.environ.get("GITHUB_TOKEN")
http_client = create_mcp_http_client(
    headers={"Authorization": f"Bearer {github_token}"},
)
github_connection = McpServerConnectionStreamableHttp(
    name="connection-github-streamable-http",
    url=github_mcp_url,
    http_client=http_client,
    request_timeout=15,
)
github_connection.connect()
# Verify connection
print(f"ā Connected to MCP server: {github_connection.name}")
print(f" Connection status: {github_connection.is_connected}")
# List available tools
tools = github_connection.list_tools()
print(f"\nā Found {len(tools)} available tools:")
for tool in tools:
print(f" - {tool.tool_name}: {tool.tool_description[:50]}...")
✓ Connected to MCP server: connection-github-streamable-http
 Connection status: True

✓ Found 40 available tools:
 - add_comment_to_pending_review: Add review comment to the requester's latest pendi...
 - add_issue_comment: Add a comment to a specific issue in a GitHub repo...
 - assign_copilot_to_issue: Assign Copilot to a specific issue in a GitHub rep...
 - create_branch: Create a new branch in a GitHub repository...
 - create_or_update_file: Create or update a single file in a GitHub reposit...
 - create_pull_request: Create a new pull request in a GitHub repository....
 - create_repository: Create a new GitHub repository in your account or ...
 - delete_file: Delete a file from a GitHub repository...
 - fork_repository: Fork a GitHub repository to your account or specif...
 - get_commit: Get details for a commit from a GitHub repository...
 - get_file_contents: Get the contents of a file or directory from a Git...
 - get_label: Get a specific label from a repository....
 - get_latest_release: Get the latest release in a GitHub repository...
 - get_me: Get details of the authenticated GitHub user. Use ...
 - get_release_by_tag: Get a specific release by its tag name in a GitHub...
 - get_tag: Get details about a specific git tag in a GitHub r...
 - get_team_members: Get member usernames of a specific team in an orga...
 - get_teams: Get details of the teams the user is a member of. ...
 - issue_read: Get information about a specific issue in a GitHub...
 - issue_write: Create a new or update an existing issue in a GitH...
 - list_branches: List branches in a GitHub repository...
 - list_commits: Get list of commits of a branch in a GitHub reposi...
 - list_issue_types: List supported issue types for repository owner (o...
 - list_issues: List issues in a GitHub repository. For pagination...
 - list_pull_requests: List pull requests in a GitHub repository. If the ...
 - list_releases: List releases in a GitHub repository...
 - list_tags: List git tags in a GitHub repository...
 - merge_pull_request: Merge a pull request in a GitHub repository....
 - pull_request_read: Get information on a specific pull request in GitH...
 - pull_request_review_write: Create and/or submit, delete review of a pull requ...
 - push_files: Push multiple files to a GitHub repository in a si...
 - request_copilot_review: Request a GitHub Copilot code review for a pull re...
 - search_code: Fast and precise code search across ALL GitHub rep...
 - search_issues: Search for issues in GitHub repositories using iss...
 - search_pull_requests: Search for pull requests in GitHub repositories us...
 - search_repositories: Find GitHub repositories by name, description, rea...
 - search_users: Find GitHub users by username, real name, or other...
 - sub_issue_write: Add a sub-issue to a parent issue in a GitHub repo...
 - update_pull_request: Update an existing pull request in a GitHub reposi...
 - update_pull_request_branch: Update the branch of a pull request with the lates...
Using an MCP Tool as a Worker¶
MCP tools can be converted into Bridgic workers and used when building a GraphAutoma. This allows you to orchestrate MCP tool calls alongside other workers in your application.
Why execute a tool as a worker instead of running it directly? In Bridgic's view, every execution process in a workflow program (or an even more agentic system) can be decomposed into fine-grained workers, which can then be orchestrated and scheduled. Standardizing execution in this way simplifies development and debugging, and improves observability at runtime. Too many frameworks separate agent operation from programmable orchestration, so tools and developer-defined units of work hold unequal positions, which leads to two very different development and debugging experiences.
In Bridgic, tools have distinct specifications, but every tool execution is carried out by converting the tool into a worker. This standardizes different kinds of tools as uniform, orchestratable units, making it easy to integrate and schedule them alongside other workers to accomplish more complex tasks.
Let's create a simple workflow that uses MCP tools to read and write files:
import datetime
import mcp
from bridgic.core.automa import GraphAutoma, RunningOptions, worker
from bridgic.core.automa.args import System
# List the tools via the server connection
tools = filesystem_connection.list_tools()
# Pick the tool specifications we need; each one can create a real worker
write_tool = next(t for t in tools if t.tool_name == "write_file")
read_tool = next(t for t in tools if t.tool_name == "read_file")
meta_tool = next(t for t in tools if t.tool_name == "get_file_info")
class FileWriter(GraphAutoma):
    def __init__(self, name: str, running_options: RunningOptions = None):
        super().__init__(name=name, running_options=running_options)
        self.add_worker("write", write_tool.create_worker())
        self.add_worker("read", read_tool.create_worker())
        self.add_worker("meta", meta_tool.create_worker())

    @worker(is_start=True)
    def start(self, title: str, content: str, rtx = System("runtime_context")):
        # Get the current time
        now_time = datetime.datetime.now()
        # Build the path and content of the file to be written
        file_path = f"{temp_dir}/{title}.txt"
        file_content = (
            f"Time: {now_time.strftime('%Y-%m-%d %H:%M:%S')}\n"
            f"Content: {content}\n"
        )
        # Write the file at the next step
        self.ferry_to("write", content=file_content, path=file_path)
        return file_path

    @worker(dependencies=["start", "write"])
    def after_write(self, file_path: str, write_info: mcp.types.CallToolResult):
        self.ferry_to("read", path=file_path)
        self.ferry_to("meta", path=file_path)

    @worker(is_output=True, dependencies=["start", "read", "meta"])
    def output(self, file_path: str, read_info: mcp.types.CallToolResult, meta_info: mcp.types.CallToolResult) -> str:
        return (
            "✓ Finished writing!"
            f"\nFile path: {file_path}"
            f"\n{meta_info.content[0].text}"
        )

file_processor = FileWriter(name="file-processor")

for title, content in [
    ("1", "Hello, Bridgic!"),
    ("2", "Hello, MCP!"),
]:
    result = await file_processor.arun(title=title, content=content)
    print(f"\n{result}")
✓ Finished writing!
File path: /private/var/folders/9t/5r9fms9s5q33p6xty_0_k1mw0000gn/T/tmpeuov9ggi/1.txt
size: 51
created: Sun Jan 25 2026 23:09:15 GMT+0800 (China Standard Time)
modified: Sun Jan 25 2026 23:09:15 GMT+0800 (China Standard Time)
accessed: Sun Jan 25 2026 23:09:15 GMT+0800 (China Standard Time)
isDirectory: false
isFile: true
permissions: 644

✓ Finished writing!
File path: /private/var/folders/9t/5r9fms9s5q33p6xty_0_k1mw0000gn/T/tmpeuov9ggi/2.txt
size: 47
created: Sun Jan 25 2026 23:09:15 GMT+0800 (China Standard Time)
modified: Sun Jan 25 2026 23:09:15 GMT+0800 (China Standard Time)
accessed: Sun Jan 25 2026 23:09:15 GMT+0800 (China Standard Time)
isDirectory: false
isFile: true
permissions: 644
Using MCP Tools in an Agentic Automa¶
In many scenarios, you may prefer an LLM-powered automa that determines which MCP tools to use, adapting its choices to a specific goal and to the evolving context during execution.
ReCentAutoma is such an agentic automa. By passing MCP tools into it, you can:
- Keep the orchestration logic in Bridgic while delegating decisions to the LLM
- Let the LLM select appropriate tools at each step
- Collect the results of tool calls and incorporate them into a dynamic process
Below is a minimal example that uses the weather MCP tools inside a ReCentAutoma.
First, you need to connect to an MCP server that provides the weather tool.
from bridgic.protocols.mcp import McpServerConnectionStdio
weather_connection = McpServerConnectionStdio(
    name="connection-weather-stdio",
    command="npx",
    args=["-y", "@mariox/weather-mcp-server"],
)
weather_connection.connect()
Then you can initialize an agentic automa that uses the weather tool(s) to answer weather questions.
import os
import dotenv
from bridgic.llms.openai import OpenAILlm, OpenAIConfiguration
dotenv.load_dotenv()
# Prepare the LLM (set these env vars before running this cell)
_api_key = os.environ.get("OPENAI_API_KEY")
_api_base = os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1")
_model_name = os.environ.get("OPENAI_MODEL_NAME", "gpt-4o-mini")
# Initialize LLM instance
llm = OpenAILlm(
    api_key=_api_key,
    api_base=_api_base,
    configuration=OpenAIConfiguration(model=_model_name),
    timeout=180,
)
from bridgic.core.automa import RunningOptions
from bridgic.core.agentic.recent import ReCentAutoma, StopCondition
# Pass weather tools in directly to build an agentic automa as a weather agent
weather_agent = ReCentAutoma(
    llm=llm,
    tools=weather_connection.list_tools(),
    stop_condition=StopCondition(max_iteration=5),
    running_options=RunningOptions(debug=True),
)
# Ask the weather agent for the weather in Shanghai
result = await weather_agent.arun(goal="Get the weather in Shanghai.")
print(result)
[ReCentAutoma]-[ReCentAutoma-171bd997] is started.
[ReCentAutoma]-[ReCentAutoma-171bd997] [__dynamic_step__] driving [initialize_task_goal]
[ReCentAutoma]-[ReCentAutoma-171bd997] [__automa__] triggers [initialize_task_goal]
[ReCentAutoma]-[ReCentAutoma-171bd997] 🎯 Task Goal
Get the weather in Shanghai.
[ReCentAutoma]-[ReCentAutoma-171bd997] [__dynamic_step__] driving [observe]
[ReCentAutoma]-[ReCentAutoma-171bd997] [initialize_task_goal] triggers [observe]
[ReCentAutoma]-[ReCentAutoma-171bd997] 🔍 Observation
Iteration: 1
Achieved: False
Thinking: The task goal is to get the weather in Shanghai. However, the conversation history does not show that any information regarding the current weather in Shanghai has been provided or gathered so far. Therefore, there is a significant gap because the specific weather details are still missing. The goal has not been achieved yet.
[ReCentAutoma]-[ReCentAutoma-171bd997] [__dynamic_step__] driving [select_tools, compress_memory]
[ReCentAutoma]-[ReCentAutoma-171bd997] [observe] triggers [select_tools]
[ReCentAutoma]-[ReCentAutoma-171bd997] [observe] triggers [compress_memory]
[ReCentAutoma]-[ReCentAutoma-171bd997] 🧠 Memory Check
Compression Needed: False
[ReCentAutoma]-[ReCentAutoma-171bd997] 🔧 Tool Selection
(No tools selected)
LLM Response: To achieve the task goal of getting the weather in Shanghai, the most appropriate tool to use is the `get_weather` function, specifying "Shanghai" as the location. I will proceed to execute this function to retrieve the current weather information for Shanghai. Here we go!
[ReCentAutoma]-[ReCentAutoma-171bd997] [__dynamic_step__] driving [observe]
[ReCentAutoma]-[ReCentAutoma-171bd997] [select_tools] triggers [observe]
[ReCentAutoma]-[ReCentAutoma-171bd997] 🔍 Observation
Iteration: 2
Achieved: False
Thinking: The task goal is to obtain the weather information for Shanghai. As of now, there have been no updates or details regarding the weather in Shanghai provided. Consequently, there remains a critical gap as the necessary weather information has not been collected or presented. Therefore, the goal has not been achieved.
[ReCentAutoma]-[ReCentAutoma-171bd997] [__dynamic_step__] driving [select_tools, compress_memory]
[ReCentAutoma]-[ReCentAutoma-171bd997] [observe] triggers [select_tools]
[ReCentAutoma]-[ReCentAutoma-171bd997] [observe] triggers [compress_memory]
[ReCentAutoma]-[ReCentAutoma-171bd997] 🧠 Memory Check
Compression Needed: False
[ReCentAutoma]-[ReCentAutoma-171bd997] 🔧 Tool Selection
Tool 1: get_weather
id: tool_bfecfd4db550446d8631f200e
arguments: {'location': '上海'}
[ReCentAutoma]-[ReCentAutoma-171bd997] [__dynamic_step__] driving [tool-<get_weather>-<tool_bfecfd4db550446d8631f200e>]
[ReCentAutoma]-[ReCentAutoma-171bd997] [select_tools] triggers [tool-<get_weather>-<tool_bfecfd4db550446d8631f200e>]
[ReCentAutoma]-[ReCentAutoma-171bd997] [__dynamic_step__] driving [collect_results-<07038e9e>]
[ReCentAutoma]-[ReCentAutoma-171bd997] [tool-<get_weather>-<tool_bfecfd4db550446d8631f200e>] triggers [collect_results-<07038e9e>]
[ReCentAutoma]-[ReCentAutoma-171bd997] 📩 Tool Results
Tool 1: get_weather
id: tool_bfecfd4db550446d8631f200e
result: meta=None content=[TextContent(type='text', text='🌤️ **Shanghai Weather**\n\n📍 **Location:**\n🗺️ Place: Shanghai, China\n🌐 Coordinates: 31.2304, 121.4737\n\n🌤️ **Weather:**\n🌡️ Temperature: 8.6°C\n☁️ Condition: Overcast\n💨 Wind speed: 12.3 km/h\n🧭 Wind direction: 93°\n💧 Humidity: 77%\n📊 Pressure: 1026.7 hPa\n🕐 Time: 2026-01-25T23:00\n\n💡 Data source: Open-Meteo (free API)', annotations=None, meta=None)] structuredContent=None isError=False
[ReCentAutoma]-[ReCentAutoma-171bd997] [__dynamic_step__] driving [observe]
[ReCentAutoma]-[ReCentAutoma-171bd997] [collect_results-<07038e9e>] triggers [observe]
[ReCentAutoma]-[ReCentAutoma-171bd997] 🔍 Observation
Iteration: 3
Achieved: True
Thinking: The current goal was to get the weather information for Shanghai. The weather data has now been successfully retrieved, including temperature, weather condition, wind speed, humidity, and pressure. There are no remaining gaps as the goal has been fully achieved with all necessary details provided.
[ReCentAutoma]-[ReCentAutoma-171bd997] [__dynamic_step__] driving [finalize_answer]
[ReCentAutoma]-[ReCentAutoma-171bd997] [observe] triggers [finalize_answer]
[ReCentAutoma]-[ReCentAutoma-171bd997] is finished.

### Current Weather in Shanghai
- **Location:** Shanghai, China
- **Coordinates:** 31.2304° N, 121.4737° E

#### Weather Information:
- **Temperature:** 8.6°C
- **Condition:** Overcast
- **Wind Speed:** 12.3 km/h
- **Wind Direction:** 93° (East)
- **Humidity:** 77%
- **Pressure:** 1026.7 hPa

#### Data Source:
- The information is sourced from Open-Meteo (free API).

### Summary
The current weather in Shanghai indicates an overcast day with a temperature of 8.6°C, moderate winds from the east, and high humidity levels.
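Since the agent and the MCP connection are decoupled, the same weather_agent instance can be reused for further goals over the same open connection. A minimal sketch (the new goal is just an illustrative input, assuming the weather server can resolve the city):

# Reuse the same agent and connection for another question
result = await weather_agent.arun(goal="Get the weather in Beijing.")
print(result)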
Using MCP Prompts to Render Your Context¶
MCP servers can also provide prompt templates that can be used to render context for your LLM applications. These prompts are useful for standardizing how you format messages before sending them to an LLM.
You can check the available prompt templates from the server by running:
import json
from bridgic.protocols.mcp import McpPromptTemplate
prompts: list[McpPromptTemplate] = github_connection.list_prompts()
for prompt in prompts:
    description = prompt.prompt_info.description
    arguments = [f"[required={arg.required}] {arg.name}: {arg.description}" for arg in prompt.prompt_info.arguments]
    print(
        f"name: {prompt.prompt_name}:\n"
        f"description: {description}\n"
        f"parameters: {json.dumps(arguments, indent=2)}\n"
    )
name: AssignCodingAgent:
description: Assign GitHub Coding Agent to multiple tasks in a GitHub repository.
parameters: [
  "[required=True] repo: The repository to assign tasks in (owner/repo)."
]

name: issue_to_fix_workflow:
description: Create an issue for a problem and then generate a pull request to fix it
parameters: [
  "[required=True] owner: Repository owner",
  "[required=True] repo: Repository name",
  "[required=True] title: Issue title",
  "[required=True] description: Issue description",
  "[required=None] labels: Comma-separated list of labels to apply (optional)",
  "[required=None] assignees: Comma-separated list of assignees (optional)"
]
We now know that there is a prompt template named "issue_to_fix_workflow" available. It is designed to generate instructions that help an LLM use tools to create an issue and a corresponding pull request on GitHub. It requires the following parameters: owner, repo, title, and description.
Let's fill in these arguments to render the prompt:
fix_issue_template = next(p for p in prompts if p.prompt_name == "issue_to_fix_workflow")
messages = fix_issue_template.format_messages(
    owner="somebody",
    repo="awesome-project",
    title="A New bug",
    description="The bug is really annoying and it has to be fixed.",
)
print(messages)
[Message(role=<Role.USER: 'user'>, blocks=[TextBlock(block_type='text', text='You are a development workflow assistant helping to create GitHub issues and generate corresponding pull requests to fix them. You should: 1) Create a well-structured issue with clear problem description, 2) Assign it to Copilot coding agent to generate a solution, and 3) Monitor the PR creation process.')], extras={}), Message(role=<Role.USER: 'user'>, blocks=[TextBlock(block_type='text', text="I need to create an issue titled 'A New bug' in somebody/awesome-project and then have a PR generated to fix it. The issue description is: The bug is really annoying and it has to be fixed.")], extras={}), Message(role=<Role.AI: 'assistant'>, blocks=[TextBlock(block_type='text', text="I'll help you create the issue 'A New bug' in somebody/awesome-project and then coordinate with Copilot to generate a fix. Let me start by creating the issue with the provided details.")], extras={}), Message(role=<Role.USER: 'user'>, blocks=[TextBlock(block_type='text', text='Perfect! Please:\n1. Create the issue with the title, description, labels, and assignees\n2. Once created, assign it to Copilot coding agent to generate a solution\n3. Monitor the process and let me know when the PR is ready for review')], extras={}), Message(role=<Role.AI: 'assistant'>, blocks=[TextBlock(block_type='text', text="Excellent plan! Here's what I'll do:\n\n1. ✅ Create the issue with all specified details\n2. 🤖 Assign to Copilot coding agent for automated fix\n3. 🔍 Monitor progress and notify when PR is created\n4. 📋 Provide PR details for your review\n\nLet me start by creating the issue.")], extras={})]
The rendered messages can now be used with your LLM. You can pass them directly to the LLM's chat method or tool-selection method, or even use them as part of a larger conversation context.
The following example uses the rendered messages to let the LLM select the appropriate GitHub tool(s):
# Convert MCP tool specifications to standard tool objects
model_tools = [tool.to_tool() for tool in github_connection.list_tools()]
# Use the rendered messages to help select tool(s)
tool_calls, _ = await llm.aselect_tool(
    messages=messages,
    tools=model_tools,
)
print(tool_calls)
[ToolCall(id='tool_7d890ee2619842b0add42624e', name='issue_write', arguments={'title': 'A New bug', 'body': 'The bug is really annoying and it have to be fixed.', 'method': 'create', 'owner': 'somebody', 'repo': 'awesome-project'})]
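From here you can close the loop between LLM selection and execution. The following sketch is a hypothetical follow-up that reuses only APIs shown earlier in this tutorial: it matches the selected call back to its MCP tool specification, which can then be turned into a worker and orchestrated inside a GraphAutoma:

# Hypothetical follow-up: map the LLM's selected call back to its MCP tool spec
selected = tool_calls[0]
issue_tool = next(
    t for t in github_connection.list_tools()
    if t.tool_name == selected.name
)
# The spec can create a worker, which an automa can schedule via
# ferry_to(..., **selected.arguments), as in the FileWriter example above.
issue_worker = issue_tool.create_worker()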
Advanced Usage¶
Multiple Server Connection Management¶
When building complex applications, you may need to connect to multiple MCP servers simultaneously. Bridgic provides McpServerConnectionManager to help you manage multiple connections efficiently.
A connection manager:
- Shares a common event loop across the connections registered with it
- Handles the lifecycle of connections within the same event loop
- Lets you retrieve any connection by its name from anywhere in your application
When you call the connect() method on a connection, it is automatically registered with the default manager. All operations on an MCP server connection, such as list_tools(), list_prompts(), or their asynchronous peers, are internally managed by the connection manager, as the sketch below illustrates.
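A minimal sketch of this default behavior, reusing the filesystem connection established earlier (only APIs already shown in this tutorial are used):

from bridgic.protocols.mcp import McpServerConnectionManager

# connect() already registered this connection with the default manager,
# so it can be retrieved anywhere in the application by its name.
conn = McpServerConnectionManager.get_connection("connection-filesystem-stdio")
print(conn.is_connected)  # True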
If you want more control, you can explicitly choose which manager to register your connection(s) with by calling register_connection(). This is particularly useful when you need to isolate the connections, and the operations, of MCP servers that expose time-consuming tools. The execution isolation is at the thread level.
For example, browser and terminal tools are relatively time-consuming, so it is worth managing their connections with a separate manager to prevent their execution from blocking other MCP tools.
The following example demonstrates:
- Connecting to both a CLI MCP server and a Playwright MCP server simultaneously.
- Assigning each connection to a separate manager to keep their operations isolated.
import os
import tempfile
from bridgic.protocols.mcp import (
    McpServerConnectionStdio,
    McpServerConnectionManager,
)
temp_dir = os.path.realpath(tempfile.mkdtemp())
# Create a file with some initial content
with open(os.path.join(temp_dir, "dream.txt"), "w", encoding="utf-8") as f:
    f.write("Bridging Logic and Magic")
cli_connection = McpServerConnectionStdio(
    name="connection-cli-stdio",
    command="uvx",
    args=["cli-mcp-server"],
    env={
        "ALLOWED_DIR": temp_dir,
        "ALLOWED_COMMANDS": "ls,cat,wc,pwd,echo",
        "ALLOWED_FLAGS": "all",
        "ALLOW_SHELL_OPERATORS": "true",
    },
)
playwright_connection = McpServerConnectionStdio(
    name="connection-playwright-stdio",
    command="npx",
    args=[
        "@playwright/mcp@latest",
    ],
    request_timeout=60,
)
# Register the two connections with different connection managers.
# This way, their operations will never block each other.
McpServerConnectionManager.get_instance("terminal-use").register_connection(cli_connection)
McpServerConnectionManager.get_instance("browser-use").register_connection(playwright_connection)
# Note: registration must be done before calling the `connect()` method
cli_connection.connect()
playwright_connection.connect()
# Retrieve connections by their names
print("Cli MCP server connected:", McpServerConnectionManager.get_connection("connection-cli-stdio").is_connected)
print("Playwright MCP server connected:", McpServerConnectionManager.get_connection("connection-playwright-stdio").is_connected)
Cli MCP server connected: True Playwright MCP server connected: True
Pay Attention to the Connection Lifecycle¶
The lifecycle of an MCP server connection is independent of the execution of an automa: neither interact_with_human() (which pauses and raises InteractionException) nor arun() / arun(feedback_data=...) (which runs or resumes the automa) affects the connection. Once a connection is established and managed by a connection manager, it remains open until you close it.
A practical implication is that one connection can serve many executions, which matters when developing real applications. The automa may pause at interact_with_human() and be resumed later with arun(feedback_data=...); each cycle can use MCP tools over the same connection without reconnecting.
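As a quick check (a minimal sketch using only the connection APIs shown above), you can confirm that an established connection stays open regardless of automa runs:

# The connection persists independently of any automa execution:
conn = McpServerConnectionManager.get_connection("connection-cli-stdio")
print(conn.is_connected)  # True before, between, and after runs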
The following example demonstrates a simple CLI loop: in each turn, the automa requests a human command (interrupt), the application provides the command as feedback (resume), the automa executes the CLI MCP tool, and then requests the next command, repeating this process to simulate a user's multi-turn input. Across all these turns, the connection to the CLI MCP server is created only once (in the previous cell) and reused each time.
Please note that this example simulates multi-turn human-computer interaction by mimicking user command input; in real-world development, you are free to customize your own human-in-the-loop interaction flow as needed.
import uuid

import mcp

from bridgic.core.automa import GraphAutoma, worker, RunningOptions
from bridgic.core.automa.interaction import Event, InteractionFeedback, InteractionException
from bridgic.core.utils._console import printer

# One MCP connection across multiple interrupt-resume cycles. Specifically,
# calling interact_with_human() pauses the automa and calling arun(feedback_data=...)
# resumes it. The same connection (established in the previous cell) is reused
# on every turn to run the CLI tool.

# Define an Automa which supports human interaction
class CliAutoma(GraphAutoma):
    @worker(is_start=True)
    def start(self):
        printer.print("Welcome to the example CLI Automa.", color="gray")
        self.ferry_to("human_input")

    @worker()
    def human_input(self):
        # Interrupt-resume:
        # - on the first run this pauses (raising InteractionException);
        # - on resume we receive feedback (the human command) and continue.
        event = Event(event_type="get_human_command")
        feedback: InteractionFeedback = self.interact_with_human(event)
        human_command = feedback.data
        printer.print(f"> {human_command}")
        if human_command in ["quit", "exit"]:
            self.ferry_to("end")
        else:
            tool_key = f"tool-<{uuid.uuid4().hex[:8]}>"
            collect_key = f"collect-<{uuid.uuid4().hex[:8]}>"

            async def _collect_command_result(command_result: mcp.types.CallToolResult):
                printer.print(f"{command_result.content[0].text.strip()}\n", color="gray")
                self.ferry_to("human_input")

            # Reuse the same connection across all interrupt-resume cycles.
            # It was established once (previous cell) and stays open.
            # Each turn we fetch it here; it outlives any single run.
            real_connection = McpServerConnectionManager.get_connection("connection-cli-stdio")
            # Filter the "run_command" tool spec from cli-mcp-server.
            command_tool = next(t for t in real_connection.list_tools() if t.tool_name == "run_command")
            # Use the tool specification to create a worker instance, then add it dynamically.
            self.add_worker(tool_key, command_tool.create_worker())
            self.add_func_as_worker(collect_key, _collect_command_result, dependencies=[tool_key])
            self.ferry_to(tool_key, command=human_command)

    @worker(is_output=True)
    def end(self):
        printer.print("See you again.\n", color="gray")

hi_automa = CliAutoma(name="human-interaction-automa", running_options=RunningOptions(debug=False))

async def continue_automa(feedback_data=None) -> str | None:
    try:
        await hi_automa.arun(feedback_data=feedback_data)
    except InteractionException as e:
        # Paused: return the interaction id needed for the next resume.
        return e.interactions[0].interaction_id
    # Finished without pausing (e.g. after the "exit" command).
    return None

# First run: the automa reaches human_input, calls interact_with_human,
# and pauses (InteractionException). We obtain interaction_id for the next resume.
interaction_id = await continue_automa()

# Each iteration sends the human command as feedback to resume the execution.
commands = [
    "pwd",
    "ls -l",
    "wc -l dream.txt",
    "cat dream.txt",
    "exit",
]
for command in commands:
    interaction_feedback = InteractionFeedback(
        interaction_id=interaction_id,
        data=command,
    )
    interaction_id = await continue_automa(interaction_feedback)
Welcome to the example CLI Automa.
> pwd
/private/var/folders/9t/5r9fms9s5q33p6xty_0_k1mw0000gn/T/tmpr7ghhwn0
> ls -l
total 8
-rw-r--r-- 1 xushili staff 24 Jan 25 23:09 dream.txt
> wc -l dream.txt
0 dream.txt
> cat dream.txt
Bridging Logic and Magic
> exit
See you again.
Before shutting down your application, make sure to close all of your connections properly. To conclude this tutorial, let's close every connection we've created:
filesystem_connection.close()
weather_connection.close()
github_connection.close()
cli_connection.close()
playwright_connection.close()
all_closed = all([
    not filesystem_connection.is_connected,
    not weather_connection.is_connected,
    not github_connection.is_connected,
    not cli_connection.is_connected,
    not playwright_connection.is_connected,
])
print("All connections closed:", all_closed)
All connections closed: True