ASL¶
Welcome to ASL (Agent Structure Language), an internal DSL (Domain-Specific Language) in Python for building agents! This tutorial walks you through the core concepts and the most commonly used features of ASL.
Overview¶
ASL is a declarative language for agent construction that follows a "what-you-see-is-what-you-get" philosophy. Once you've implemented your basic functions, ASL allows you to clearly express and define the orchestration process and hierarchical structure of your agent at a glance.
In this tutorial, you will learn:
- How to build your first agent
- How to reuse existing agents in a componentized pattern
- How to build agents with nested graph structures
- How to reuse fragments of control flow
- How to control data transmission between workers
- How to achieve dynamic topology during runtime
Building Your First Agent¶
Let's start with a simple example: a text generation agent that breaks down a user query into sub-queries and generates answers for each one.
First, let's set up the necessary environment and imports.
# Get the environment variables.
import os
_api_key = os.environ.get("OPENAI_API_KEY")
_api_base = os.environ.get("OPENAI_API_BASE")
_model_name = os.environ.get("OPENAI_MODEL_NAME")
# Import the necessary packages.
from typing import List, Dict
from bridgic.core.model.types import Message, Role
from bridgic.asl import ASLAutoma, graph
from bridgic.llms.openai import OpenAILlm, OpenAIConfiguration
llm = OpenAILlm(  # the llm instance
    api_base=_api_base,
    api_key=_api_key,
    timeout=5,
    configuration=OpenAIConfiguration(model=_model_name),
)
Now, let's implement the core functions that will be used in our agent workflow.
# Break down the query into a list of sub-queries.
async def break_down_query(user_input: str) -> List[str]:
    llm_response = await llm.achat(
        messages=[
            Message.from_text(text="Break down the query into multiple sub-queries and only return the sub-queries", role=Role.SYSTEM),
            Message.from_text(text=user_input, role=Role.USER),
        ]
    )
    return [item.strip() for item in llm_response.message.content.split("\n") if item.strip()]
# Generate answers for each sub-query.
async def query_answer(queries: List[str]) -> Dict[str, str]:
    answers = []
    for query in queries:
        response = await llm.achat(
            messages=[
                Message.from_text(text="Answer the given query briefly", role=Role.SYSTEM),
                Message.from_text(text=query, role=Role.USER),
            ]
        )
        answers.append(response.message.content)
    res = {
        query: answer
        for query, answer in zip(queries, answers)
    }
    return res
class SplitSolveAgent(ASLAutoma):
    with graph as g:
        a = break_down_query
        b = query_answer
        +a >> ~b
The SplitSolveAgent implementation uses ASL syntax to orchestrate the workflow. Let's break down the core ASL syntax:
Key points:
- `with graph as g:` - Opens a graph context. Within this block, you can declare workers and define dependencies between them.
- `a = break_down_query` - Declares a worker named `a` that corresponds to the `break_down_query` function.
- `a >> b` - Defines a dependency: `b` depends on `a`, meaning `b` will execute only after `a` completes.
- `+a` - Marks `a` as a start worker (the entry point of the workflow).
- `~b` - Marks `b` as an output worker (the exit point of the workflow).
Syntax Reference
In the syntax specification below, angle brackets like <GRAPH>, <KEY>, <WORKER>, etc., are metasyntax variables (placeholders) that represent entities of specific types. Replace them with actual values when writing your code.
| Syntax | Meaning | Example |
|---|---|---|
| `with <GRAPH> as <KEY>:` | Defines a graph container with a given key. Notes:<br>• `<GRAPH>` can be either `graph` or `concurrent`:<br>1. `graph`: allows arbitrary orchestration of worker execution order<br>2. `concurrent`: all workers added to it will execute concurrently<br>• `<KEY>` is required and must be unique. Without it, the program cannot identify this as an executable graph structure | `with graph as g:` declares a graph named `g`, under which you can start declaring workers and orchestrating the workflow |
| `<KEY> = <WORKER>` | Declares and registers a worker with a unique key. Notes:<br>• `<WORKER>` can be one of three types:<br>1. Python functions (defined with `def` or `async def`)<br>2. Bridgic `Worker` objects (e.g., `GraphAutoma`, custom `Worker` classes)<br>3. Another agent implemented using `ASLAutoma`<br>• `<KEY>` is required and must be unique within the graph scope. Without it, the worker cannot be identified and scheduled.<br>• All worker declarations must be written inside a `with <GRAPH> as <KEY>:` block; otherwise workers cannot be registered to the correct execution graph | `a = break_down_query` registers `break_down_query` as a worker with key `a` |
| `<KEY1> >> <KEY2>` | Defines a dependency: the worker named `<KEY2>` will be triggered after the worker named `<KEY1>` completes execution | `a >> b` means `b` will execute after `a` finishes |
| `+<KEY>` | Marks a worker as a start worker (entry point of the execution graph) | `+a` marks `a` as the starting worker of the execution graph |
| `~<KEY>` | Marks a worker as an output worker (exit point of the execution graph) | `~a` marks `a` as an output worker of the execution graph |
Now let's run this agent!
text_generation_agent = SplitSolveAgent()
await text_generation_agent.arun("When and where was the Einstein born?")
{'1. When was Einstein born?': 'Albert Einstein was born in 1879.',
 '2. Where was Einstein born?': 'Albert Einstein was born in Ulm, Kingdom of Württemberg, German Empire.'}

Excellent! We successfully obtained the results. From the ASL code, we can clearly see that SplitSolveAgent has only two workers, with b depending on a, and no other redundant information. ASL elevates worker declaration and dependency management to first-class language constructs.
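The same operators compose into longer chains as well. Here is a minimal sketch with hypothetical workers (not part of the agent above), assuming the operator behavior described in the syntax reference:

# A hypothetical three-step pipeline: `>>` is applied repeatedly, `+` marks the
# entry worker and `~` marks the output worker.
from typing import List
from bridgic.asl import ASLAutoma, graph

async def normalize(user_input: str) -> str:
    return user_input.strip().lower()

async def tokenize(text: str) -> List[str]:
    return text.split()

async def count_tokens(tokens: List[str]) -> int:
    return len(tokens)

class PipelineSketch(ASLAutoma):
    with graph as g:
        a = normalize
        b = tokenize
        c = count_tokens
        +a >> b >> ~c  # a runs first, then b, then c produces the output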
Componentized Agent Reuse¶
In the example above, we created an agent that splits queries and answers them separately. This is a reusable module. Now, let's build a chatbot that merges the individual answers into a unified response. We can reuse SplitSolveAgent directly by simply writing a = SplitSolveAgent().
async def merge_answers(qa_pairs: Dict[str, str], user_input: str) -> str:
    answers = "\n".join([v for v in qa_pairs.values()])
    llm_response = await llm.achat(
        messages=[
            Message.from_text(text="Merge the given answers into a unified response to the original question", role=Role.SYSTEM),
            Message.from_text(text=f"Query: {user_input}\nAnswers: {answers}", role=Role.USER),
        ]
    )
    return llm_response.message.content
# Define the Chatbot agent, reusing `SplitSolveAgent` in a component-oriented pattern.
class Chatbot(ASLAutoma):
    with graph as g:
        a = SplitSolveAgent()
        b = merge_answers
        +a >> ~b
Let's run the chatbot:
chatbot = Chatbot()
await chatbot.arun(user_input="When and where was the Einstein born?")
'Albert Einstein was born in 1879 in Ulm, Kingdom of Württemberg, German Empire.'
Perfect! Our chatbot successfully answered the question. Notice that in the implementation, we directly declared SplitSolveAgent within the with graph block. We didn't need to write a wrapper function or use any API to manually add it to the graph. Everything was naturally declared and composed.
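This composition nests as deeply as you like. As an illustrative sketch (the add_signature worker below is hypothetical and not part of the tutorial), the Chatbot itself can be declared as a worker inside yet another agent:

# A hypothetical post-processing step appended after the reused Chatbot agent.
async def add_signature(answer: str) -> str:
    return answer + "\n-- generated by Chatbot"

class SignedChatbot(ASLAutoma):
    with graph as g:
        a = Chatbot()        # reuse the whole Chatbot as a single worker
        b = add_signature    # hypothetical formatting step
        +a >> ~b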
Building Agents with Nested Structures¶
If you have all the functions ready but haven't created SplitSolveAgent as a separate class, you don't need to implement it separately just to reuse it. Instead, you can define everything directly in one agent using nested graphs:
class ChatbotNested(ASLAutoma):
    with graph as g:  # The chatbot agent defines its graph
        with graph as split_solve:  # The split_solve agent defines its graph
            a = break_down_query
            b = query_answer
            +a >> ~b
        end = merge_answers
        +split_solve >> ~end
Let's run it!
chatbot_nested = ChatbotNested()
await chatbot_nested.arun(user_input="When and where was the Einstein born?")
'Albert Einstein was born in 1879 in Ulm, Kingdom of Württemberg, German Empire.'
We successfully achieved the expected result! We can clearly see that ChatbotNested has two layers of graphs, and the structure of each graph is visible at a glance.
Important: workers in one graph cannot reference workers from another graph. In the example above, workers a and b are not visible to graph g; it can only use split_solve as a whole. Similarly, split_solve cannot see workers inside other graphs.
Note: If you reference a worker that doesn't exist in the current graph scope, an exception will be raised.
These are the basic usages of ASL. Next, let's explore more advanced features.
Reusing Control Flow Fragments¶
In a workflow, certain parts of the process might be common across multiple paths. For example, if you have a >> b >> c and a >> b >> d, the a >> b sequence is shared. ASL allows you to name and reuse such fragments.
Here's an example:
async def add1(x: int) -> int:
    return x + 1

async def add2(x: int) -> int:
    return x + 2

async def multiply(x: int) -> int:
    return x * 2

async def division(x: int) -> int:
    return x / 2

async def merge_result(x: int, y: int) -> int:
    return x + y
class MyAutoma(ASLAutoma):
    with graph as g:
        a = add1
        b = add2
        add_process = +a >> b
        c = multiply
        d = division
        add_multiply = add_process >> c
        add_division = add_process >> d
        merge = merge_result
        (add_multiply & add_division) >> ~merge
In this example, we have two logical sequences: one performs two additions followed by multiplication, and the other performs two additions followed by division. We can name the common a >> b sequence as add_process and reuse it, avoiding repetition. We also named each complete logical fragment (add_multiply and add_division) and then arranged them together.
Key points:
- `add_process = +a >> b`: Declares the orchestration logic `+a >> b` as a fragment named `add_process`. This fragment can be reused later to compose new execution logic without repeating the same sequence.
- `(add_multiply & add_division)`: Groups `add_multiply` and `add_division` together. They will be treated as a single unit during orchestration.
Syntax Reference
| Syntax | Meaning | Example |
|---|---|---|
| `<KEY> = <ORCHESTRATION_EXPRESSION>` | Defines and registers a fragment of a workflow. Note: the `+` and `~` operators cannot be applied to fragments. If you declare `flow = a >> b` and then use `+flow`, this is not equivalent to `+a >> b`. | `flow = a >> b` registers the execution logic `a >> b` as a fragment named `flow`. You can reuse this fragment later without rewriting `a >> b` |
| `<KEY1> & <KEY2>` | Defines a union unit (parallel group). When an operator is applied to this unit, it operates on all elements within it, following the distributive property. | `(a & b) >> c` means `c` will execute after both `a` and `b` finish; `a >> (b & c)` means both `b` and `c` depend on `a` and will execute concurrently after `a` completes |
my_automa = MyAutoma()
await my_automa.arun(x=1)
10.0

With x=1, add1 returns 2 and add2 returns 4; multiply then yields 8 and division yields 2.0, and merge_result sums them to 10.0.
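The & union can also be applied to workers directly, without naming fragments. The following is only a hedged sketch composed from the statement forms in the table above (the exact combination is an assumption, not taken from the original tutorial): one worker fans out to two concurrent workers, which then fan back in.

# A hypothetical fan-out / fan-in sketch reusing the small functions above.
# +a >> (b & c): b and c both depend on a and run concurrently after it.
# (b & c) >> ~d: d runs only after both b and c have finished.
class FanOutFanIn(ASLAutoma):
    with graph as g:
        a = add1
        b = multiply
        c = division
        d = merge_result
        +a >> (b & c)
        (b & c) >> ~d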
Controlling Data Transmission¶
When ASL code is executed, it's first translated into corresponding Bridgic objects. For instance, a worker declared in ASL becomes a Worker during execution. Therefore, ASL inherits all the underlying capabilities of Bridgic.
Bridgic provides rich parameter resolving mechanisms. In ASL, you can utilize these by configuring the Settings and Data attributes of a worker.
Using Settings for Argument Mapping¶
The Settings class allows you to configure how arguments are mapped to workers. For example:
from bridgic.core.automa.args import ArgsMappingRule
from bridgic.asl import Settings

async def start1(user_input: int) -> int:
    return user_input + 1

async def start2(user_input: int) -> int:
    return user_input + 2

async def worker1(x: List[int], user_input: int) -> int:
    return sum(x) + user_input

class MyAutoma(ASLAutoma):
    with graph as g:
        a = start1
        b = start2
        c = worker1 *Settings(args_mapping_rule=ArgsMappingRule.MERGE)
        +(a & b) >> ~c
The code above defines a structure in which a and b are both start workers and c depends on both of them.

Key points:

- `c = worker1 *Settings(args_mapping_rule=ArgsMappingRule.MERGE)`: Attaches a Settings configuration to the worker. The MERGE rule merges the results of all preceding workers into a list, which c receives here as its parameter x.
- `+(a & b) >> ~c`: Marks both a and b as start workers; c executes after both of them complete and is the output worker.
Syntax Reference
| Syntax | Meaning | Example |
|---|---|---|
| `<WORKER> *Settings(...)` | Attaches configuration settings to a worker using the `*` operator. Settings supports three configuration fields:<br>• `key`: the unique identifier for the worker at runtime; defaults to the `<KEY>` from `<KEY> = <WORKER>`<br>• `args_mapping_rule`: defines how the worker receives results from preceding workers; defaults to `ArgsMappingRule.AS_IS`<br>• `result_dispatching_rule`: defines how the worker dispatches its results to subsequent workers; defaults to `ResultDispatchingRule.AS_IS` | `worker1 *Settings(args_mapping_rule=ArgsMappingRule.MERGE)` configures `worker1` to merge results from multiple dependencies into a list. Other fields remain at their default values |
Note: The `key` defined in `Settings` is only used internally for scheduling during graph execution. Therefore, when orchestrating the workflow in ASL, you still need to use the `<KEY>` from `<KEY> = <WORKER>`.
Let's run it:
my_automa = MyAutoma()
await my_automa.arun(user_input=1)
6

With user_input=1, start1 returns 2 and start2 returns 3; the MERGE rule passes [2, 3] to c as x, and c returns sum([2, 3]) plus the user_input of 1, which is 6.
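As noted above, a custom Settings key does not change how you refer to the worker when orchestrating. The following minimal sketch (with hypothetical workers, assuming the default argument mapping) illustrates this:

# `a` is registered with the runtime key "increment_step", but the orchestration
# expression still uses the declaration keys `a` and `b`.
async def increment(x: int) -> int:
    return x + 1

async def double(x: int) -> int:
    return x * 2

class KeyedAutoma(ASLAutoma):
    with graph as g:
        a = increment *Settings(key="increment_step")
        b = double
        +a >> ~b  # refer to the workers by `a` and `b`, not by the Settings key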
Using Data for Argument Injection¶
Bridgic also provides an argument injection mechanism that can be used in ASL through the Data class. This allows you to inject values from other workers' results into function parameters:
from bridgic.core.automa.args import From
from bridgic.asl import Data

async def worker1(user_input: int) -> int:
    return user_input + 1

async def worker2(x: int) -> int:
    return x + 2

async def worker3(x: int, y: int) -> int:
    return x + y

class MyAutoma(ASLAutoma):
    with graph as g:
        a = worker1
        b = worker2
        c = worker3 *Data(y=From('a'))
        +a >> b >> ~c
Key points:
- `*Data(y=From('a'))`: The `*` operator attaches data configuration to a worker. `From('a')` specifies that the parameter `y` of `worker3` should be injected with the result of worker `a` at runtime.
Syntax Reference
| Syntax | Meaning | Example |
|---|---|---|
| `<WORKER> *Data(...)` | Attaches data configuration to a worker using the `*` operator. Important note: data injection is equivalent to assigning a default value to a function parameter, which is dynamically injected at runtime. For example, `c = worker3 *Data(y=From('a'))` is conceptually equivalent to `async def worker3(x: int, y: int = From('a'))`. This means you cannot write `c = worker3 *Data(x=From('a'))`, because it would be equivalent to `async def worker3(x: int = From('a'), y: int)`, which violates Python's rule that parameters with default values cannot precede parameters without default values in a function signature. | `worker3 *Data(y=From('a'))` injects the result of worker `a` into parameter `y` of `worker3` |
my_automa = MyAutoma()
await my_automa.arun(user_input=1)
6

Here worker1 returns 2, worker2 then returns 4, and worker3 receives x=4 from b along with y=2 injected from a, giving 6.
Dynamic Topology at Runtime¶
Sometimes you need to dynamically adjust the graph structure based on intermediate execution results. ASL provides the ability to declare such dynamic behaviors using lambda functions.
A typical use case is creating workers dynamically based on the number of items in a list returned by a previous task. Each item needs its own handler worker, but you don't know how many handlers you'll need until runtime.
Here's an example:
from bridgic.asl import ASLField, concurrent
from bridgic.core.automa.args import ResultDispatchingRule

async def produce_task(user_input: int) -> List[int]:
    tasks = [i for i in range(user_input)]
    return tasks

async def task_handler(sub_task: int) -> int:
    res = sub_task + 1
    return res

class DynamicGraph(ASLAutoma):
    with graph(user_input=ASLField(type=int)) as g:
        producer = produce_task
        with concurrent(tasks=ASLField(type=list, dispatching_rule=ResultDispatchingRule.IN_ORDER)) as c:
            dynamic_handler = lambda tasks: (
                task_handler *Settings(key=f"handler_{task}")
                for task in tasks
            )
        +producer >> ~c
In this example, the producer worker generates a list based on user_input. Each element in this list needs to be processed by a task_handler worker. However, we don't know how many task_handler workers we'll need until runtime.
This example uses the concurrent container, which represents a graph structure where all internal workers run concurrently.
Key points:
- `with graph(<PARAM>=ASLField(...))`: Declares input parameters for a graph, using `ASLField` to specify type and behavior. Parameter names must match those used by workers within the graph.
- `with concurrent(...)`: Creates a concurrent execution container in which all internal workers execute concurrently.
- `dynamic_handler = lambda ...`: A lambda function that generates worker instances dynamically at runtime.
Syntax Reference
| Syntax | Meaning | Example |
|---|---|---|
| `with graph(<PARAM>=ASLField(...)) as <KEY>` | Declares input parameters for a graph. `ASLField` is a field class that extends Pydantic's `FieldInfo`, used to specify parameter type, default values, and behavior. `ASLField` parameters: `type` (the parameter type), `default` (optional default value), and `dispatching_rule` (for `concurrent` containers, controls how data is distributed). Parameter names must match those used by workers within the graph. | `with graph(user_input=ASLField(type=int)) as g:` declares a graph parameter `user_input` of type `int` |
| `<KEY> = lambda <PARAM>: (...)` | Defines a lambda function that generates worker instances dynamically. It receives parameters from the container and returns a generator of worker instances. ASL creates and executes these workers at runtime. | `dynamic_handler = lambda tasks: (task_handler *Settings(key=f"task_{i}") for i, task in enumerate(tasks))` generates worker instances based on the `tasks` parameter |
Note: Lambda functions for dynamic workers must be declared within a `concurrent` container (or other containers, such as `sequential` in the future), not within a regular `graph`.
dynamic_graph = DynamicGraph()
await dynamic_graph.arun(user_input=3)
[1, 2, 3]

With user_input=3, produce_task returns [0, 1, 2]; three task_handler workers are created at runtime, each adding 1 to its item, which yields [1, 2, 3].
Summary¶
Congratulations! You've learned the features of ASL:
- ✅ Basic syntax: Declaring workers, defining dependencies with `>>`, marking start/output workers with `+`/`~`
- ✅ Component reuse: Using existing agents as building blocks
- ✅ Nested structures: Creating hierarchical graph organizations
- ✅ Fragment reuse: Naming and reusing common control flow patterns
- ✅ Data control: Using `Settings` for argument mapping and `Data` for argument injection
- ✅ Dynamic topology: Creating workers dynamically at runtime using lambda functions
Syntax of ASL
| Syntax | Meaning | Example |
|---|---|---|
| `with <GRAPH> as <KEY>:` | Defines a graph container with a given name. Notes:<br>• `<GRAPH>` can be either `graph` or `concurrent`:<br>1. `graph`: allows arbitrary orchestration of worker execution order<br>2. `concurrent`: all workers added to it will execute concurrently<br>• `<KEY>` is required and must be unique. Without it, the program cannot identify this as an executable graph structure | `with graph as g:` declares a graph named `g`, under which you can start declaring workers and orchestrating the workflow |
| `with <GRAPH>(<PARAM>=ASLField(...)) as <KEY>` | Declares input parameters for a graph. `ASLField` is a field class that extends Pydantic's `FieldInfo`, used to specify parameter type, default values, and behavior. `ASLField` parameters: `type` (the parameter type), `default` (optional default value), and `dispatching_rule` (for `concurrent` containers, controls how data is distributed). Parameter names must match those used by workers within the graph. | `with graph(user_input=ASLField(type=int)) as g:` declares a graph parameter `user_input` of type `int` |
| `<KEY> = <WORKER>` | Declares and registers a worker with a unique key. Notes:<br>• `<WORKER>` can be one of three types:<br>1. Python functions (defined with `def` or `async def`)<br>2. Bridgic `Worker` objects (e.g., `GraphAutoma`, custom `Worker` classes)<br>3. Another agent implemented using `ASLAutoma`<br>• `<KEY>` is required and must be unique within the graph scope. Without it, the worker cannot be identified and scheduled.<br>• All worker declarations must be written inside a `with <GRAPH> as <KEY>:` block; otherwise workers cannot be registered to the correct execution graph | `a = break_down_query` registers `break_down_query` as a worker with key `a` |
| `<KEY> = lambda <PARAM>: (...)` | Defines a lambda function that generates worker instances dynamically. It receives parameters from the container and returns a generator of worker instances. ASL creates and executes these workers at runtime. | `dynamic_handler = lambda tasks: (task_handler *Settings(key=f"task_{i}") for i, task in enumerate(tasks))` generates worker instances based on the `tasks` parameter |
| `<KEY1> >> <KEY2>` | Defines a dependency: the worker named `<KEY2>` will be triggered after the worker named `<KEY1>` completes execution | `a >> b` means `b` will execute after `a` finishes |
| `+<KEY>` | Marks a worker as a start worker (entry point of the execution graph) | `+a` marks `a` as the starting worker of the execution graph |
| `~<KEY>` | Marks a worker as an output worker (exit point of the execution graph) | `~a` marks `a` as an output worker of the execution graph |
| `<KEY1> & <KEY2>` | Defines a union unit (parallel group). When an operator is applied to this unit, it operates on all elements within it, following the distributive property. | `(a & b) >> c` means `c` will execute after both `a` and `b` finish; `a >> (b & c)` means both `b` and `c` depend on `a` and will execute concurrently after `a` completes |
| `<KEY> = <ORCHESTRATION_EXPRESSION>` | Defines and registers a fragment of a workflow. Note: the `+` and `~` operators cannot be applied to fragments. If you declare `flow = a >> b` and then use `+flow`, this is not equivalent to `+a >> b`. | `flow = a >> b` registers the execution logic `a >> b` as a fragment named `flow`. You can reuse this fragment later without rewriting `a >> b` |
| `<WORKER> *Settings(...)` | Attaches configuration settings to a worker using the `*` operator. Settings supports three configuration fields:<br>• `key`: the unique identifier for the worker at runtime; defaults to the `<KEY>` from `<KEY> = <WORKER>`<br>• `args_mapping_rule`: defines how the worker receives results from preceding workers; defaults to `ArgsMappingRule.AS_IS`<br>• `result_dispatching_rule`: defines how the worker dispatches its results to subsequent workers; defaults to `ResultDispatchingRule.AS_IS` | `worker1 *Settings(args_mapping_rule=ArgsMappingRule.MERGE)` configures `worker1` to merge results from multiple dependencies into a list. Other fields remain at their default values |
| `<WORKER> *Data(...)` | Attaches data configuration to a worker using the `*` operator. Important note: data injection is equivalent to assigning a default value to a function parameter, which is dynamically injected at runtime. For example, `c = worker3 *Data(y=From('a'))` is conceptually equivalent to `async def worker3(x: int, y: int = From('a'))`. This means you cannot write `c = worker3 *Data(x=From('a'))`, because it would be equivalent to `async def worker3(x: int = From('a'), y: int)`, which violates Python's rule that parameters with default values cannot precede parameters without default values in a function signature. | `worker3 *Data(y=From('a'))` injects the result of worker `a` into parameter `y` of `worker3` |
Next Steps¶
Now that you understand the basics, you're ready to:
- Explore more advanced features in the ASL reference documentation
- Build your own agents using ASL
- Learn about Bridgic's underlying mechanisms for parameter resolving and worker execution
Happy building! 🚀