
args

The Args module provides Arguments Mapping and Arguments Injection mechanisms in Bridgic.

ArgsMappingRule

Bases: Enum

Enumeration of Arguments Mapping rules for worker parameter passing.

ArgsMappingRule defines how the return values from predecessor workers are mapped to the parameters of the current worker. This controls the data flow between workers in an automa execution graph.

Attributes:

AS_IS : Enum (default)
    Maps the results of the previous workers to the corresponding parameters in dependency order.
MERGE : Enum
    Merges all results from the previous workers into a single tuple, passed as the only argument of the current worker.
UNPACK : Enum
    Unpacks the result from the previous worker and passes its elements as individual arguments. Only valid when the current worker has exactly one dependency and that dependency returns a list, tuple, or dict.
SUPPRESSED : Enum
    Suppresses all results from the previous workers. No arguments are passed to the current worker from its dependencies.
Examples:

class MyAutoma(GraphAutoma):
    @worker(is_start=True)
    def worker_0(self, user_input: int) -> int:
        return user_input + 1

    @worker(dependencies=["worker_0"], args_mapping_rule=ArgsMappingRule.AS_IS)
    def worker_1(self, worker_0_output: int) -> int:
        # Receives the exact return value from worker_0
        return worker_0_output + 1

    @worker(dependencies=["worker_0"], args_mapping_rule=ArgsMappingRule.UNPACK)
    def worker_2(self, user_input: int, result: int) -> int:
        # Unpacks the return value from worker_0 (assuming it returns a tuple)
        return user_input + result

    @worker(dependencies=["worker_0", "worker_1"], args_mapping_rule=ArgsMappingRule.MERGE)
    def worker_3(self, all_results: tuple) -> int:
        # Receives all results as a single tuple
        return sum(all_results)

    @worker(dependencies=["worker_3"], args_mapping_rule=ArgsMappingRule.SUPPRESSED)
    def worker_4(self, custom_input: int = 10) -> int:
        # Ignores return value from worker_3, uses custom input
        return custom_input + 1
Note
  1. AS_IS is the default mapping rule when not specified
  2. UNPACK requires exactly one dependency and a list/tuple/dict return value
  3. MERGE combines all predecessor outputs into a single tuple argument
  4. SUPPRESSED allows workers to ignore dependency outputs completely
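The four rules can be paraphrased in plain Python. The helper below is an illustrative sketch of the mapping semantics described above, not Bridgic's actual implementation; `map_args` and its signature are invented for this example.

```python
from typing import Any, List, Tuple

def map_args(results: List[Any], rule: str) -> Tuple[tuple, dict]:
    """Illustrative sketch: turn predecessor results into (args, kwargs)."""
    if rule == "as_is":
        # One positional argument per dependency, in dependency order.
        return tuple(results), {}
    if rule == "merge":
        # All predecessor results packed into a single tuple argument.
        return (tuple(results),), {}
    if rule == "unpack":
        # Exactly one dependency, whose result is spread out.
        if len(results) != 1:
            raise ValueError("UNPACK requires exactly one dependency")
        value = results[0]
        if isinstance(value, dict):
            return (), dict(value)   # dict -> keyword arguments
        if isinstance(value, (list, tuple)):
            return tuple(value), {}  # list/tuple -> positional arguments
        raise ValueError("UNPACK requires a list/tuple or dict result")
    if rule == "suppressed":
        # Dependency results are discarded entirely.
        return (), {}
    raise ValueError(f"unknown rule: {rule}")
```

For instance, with two predecessor results `[1, 2]`, AS_IS yields two positional arguments while MERGE yields the single argument `(1, 2)`.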
Source code in bridgic/core/types/_common.py
class ArgsMappingRule(Enum):
    """
    Enumeration of Arguments Mapping rules for worker parameter passing.

    ArgsMappingRule defines how the return values from predecessor workers are mapped 
    to the parameters of the current worker. This controls the data flow between workers 
    in an automa execution graph.

    Attributes
    ----------
    AS_IS: Enum (default)
        Map the results of the previous workers to the corresponding parameters 
        in the order of dependency.
    MERGE: Enum
        Merges all results from previous workers into a single tuple as the 
        only argument of the current worker.
    UNPACK: Enum
        Unpacks the result from the previous worker and passes as individual 
        arguments. Only valid when the current worker has exactly one dependency and 
        the return value is a list/tuple or dict.
    SUPPRESSED: Enum
        Suppresses all results from previous workers. No arguments are passed 
        to the current worker from its dependencies.

    Examples
    --------
    ```python
    class MyAutoma(GraphAutoma):
        @worker(is_start=True)
        def worker_0(self, user_input: int) -> int:
            return user_input + 1

        @worker(dependencies=["worker_0"], args_mapping_rule=ArgsMappingRule.AS_IS)
        def worker_1(self, worker_0_output: int) -> int:
            # Receives the exact return value from worker_0
            return worker_0_output + 1

        @worker(dependencies=["worker_0"], args_mapping_rule=ArgsMappingRule.UNPACK)
        def worker_2(self, user_input: int, result: int) -> int:
            # Unpacks the return value from worker_0 (assuming it returns a tuple)
            return user_input + result

        @worker(dependencies=["worker_0", "worker_1"], args_mapping_rule=ArgsMappingRule.MERGE)
        def worker_3(self, all_results: tuple) -> int:
            # Receives all results as a single tuple
            return sum(all_results)

        @worker(dependencies=["worker_3"], args_mapping_rule=ArgsMappingRule.SUPPRESSED)
        def worker_4(self, custom_input: int = 10) -> int:
            # Ignores return value from worker_3, uses custom input
            return custom_input + 1
    ```

    Note
    ----
    1. AS_IS is the default mapping rule when not specified
    2. UNPACK requires exactly one dependency and a list/tuple/dict return value
    3. MERGE combines all predecessor outputs into a single tuple argument
    4. SUPPRESSED allows workers to ignore dependency outputs completely
    """
    AS_IS = "as_is"
    MERGE = "merge"
    UNPACK = "unpack"
    SUPPRESSED = "suppressed"

ResultDispatchingRule

Bases: Enum

Enumeration of Result Dispatch rules for worker result passing.

ResultDispatchingRule defines how the result from the current worker is dispatched to the next workers. This controls the data flow between workers in an automa execution graph.

Attributes:

AS_IS : Enum (default)
    Gathers all results of the current worker into a single tuple, passed as the only result to the next workers.
IN_ORDER : Enum
    Dispatches the current worker's results to the corresponding downstream workers one by one, in the order they are declared or added.
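To make the contrast concrete, here is a plain-Python sketch of the two dispatch behaviors. `dispatch_result` and its signature are illustrative assumptions, not part of Bridgic's API.

```python
from collections.abc import Sequence
from typing import Any, Dict, List

def dispatch_result(result: Any, downstream: List[str], rule: str) -> Dict[str, Any]:
    """Illustrative sketch: map one worker's result onto its downstream workers."""
    if rule == "as_is":
        # Every downstream worker receives the same, whole result.
        return {key: result for key in downstream}
    if rule == "in_order":
        # Element-wise: result[i] goes to the i-th downstream worker.
        if not isinstance(result, Sequence) or len(result) != len(downstream):
            raise ValueError("IN_ORDER needs a sequence matching the downstream count")
        return dict(zip(downstream, result))
    raise ValueError(f"unknown rule: {rule}")
```

With a result of `(10, 20)` and downstream workers `["w1", "w2"]`, AS_IS hands the whole tuple to both workers, while IN_ORDER gives `10` to `w1` and `20` to `w2`.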

Source code in bridgic/core/types/_common.py
class ResultDispatchingRule(Enum):
    """
    Enumeration of Result Dispatch rules for worker result passing.

    ResultDispatchingRule defines how the result from the current worker is dispatched to the next workers.
    This controls the data flow between workers in an automa execution graph.

    Attributes
    ----------
    AS_IS: Enum (default)
        Gathers all results of current worker into a single tuple as the 
        only result to the next workers.
    IN_ORDER: Enum
        Dispatch the current worker's results to the corresponding downstream 
        workers one by one according to the order they are declared or added.
    """
    AS_IS = "as_is"
    IN_ORDER = "in_order"

From dataclass

Bases: ArgsDescriptor

Implements argument injection for worker parameters, with an optional default value.

When a worker needs the output of another worker but does not directly depend on it in execution, you can use From to declare an argument injection in its parameters.

Attributes:

key : str
    The key of the worker whose output is injected.
default : Optional[Any]
    The default value, returned when the specified worker does not exist.
Examples:

class MyAutoma(GraphAutoma):
    @worker(is_start=True)
    def worker_0(self, user_input: int) -> int:
        return user_input + 1

    @worker(dependencies=["worker_0"])
    def worker_1(self, worker_0_output: int) -> int:
        return worker_0_output + 1

    @worker(dependencies=["worker_1"], is_output=True)
    def worker_2(self, worker_1_output: int, worker_0_output: int = From("worker_0", 1)) -> int:
        # needs the output of worker_0 but does not directly depend on it in execution
        print(f'worker_0_output: {worker_0_output}')
        return worker_1_output + 1

Returns:

Any
    The output of the worker specified by the key.

Raises:

WorkerArgsInjectionError
    If the worker specified by the key does not exist and no default value is set.

Note
  1. A default value can be set on a From declaration; it is returned when the specified worker does not exist.
  2. WorkerArgsInjectionError is raised if the worker specified by the key does not exist and no default value is set.
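The resolution order can be sketched in plain Python. `resolve_injection` and `_MISSING` below are illustrative stand-ins (Bridgic uses an `InjectorNone` sentinel internally, as the source shows), not the library's API.

```python
class _Missing:
    """Sentinel standing in for Bridgic's InjectorNone default marker."""

_MISSING = _Missing()

def resolve_injection(outputs: dict, key: str, default=_MISSING):
    """Illustrative sketch of how a From(key, default) parameter resolves."""
    if key in outputs:
        return outputs[key]          # the named worker ran: use its output
    if not isinstance(default, _Missing):
        return default               # worker absent, but a default was given
    raise LookupError(f"worker '{key}' does not exist and no default was set")
```

Note that the default only applies when the worker is absent; an existing worker's output always wins.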
Source code in bridgic/core/automa/args/_args_descriptor.py
@dataclass
class From(ArgsDescriptor):
    """
    Implementing arguments injection for worker parameters with default value.

    When a worker needs the output of another worker but does not directly depend on 
    it in execution, you can use `From` to declare an arguments injection in 
    its parameters.

    Attributes
    ----------
    key : str
        The key of the worker to inject arguments from.
    default : Optional[Any]
        The default value of the arguments.

    Examples
    --------
    ```python
    class MyAutoma(GraphAutoma):
        @worker(is_start=True)
        def worker_0(self, user_input: int) -> int:
            return user_input + 1

        @worker(dependencies=["worker_0"])
        def worker_1(self, worker_0_output: int) -> int:
            return worker_0_output + 1

        @worker(dependencies=["worker_1"], is_output=True)
        def worker_2(self, worker_1_output: int, worker_0_output: int = From("worker_0", 1)) -> int:
            # needs the output of worker_0 but does not directly depend on it in execution
            print(f'worker_0_output: {worker_0_output}')
            return worker_1_output + 1
    ```

    Returns
    -------
    Any
        The output of the worker specified by the key.

    Raises
    ------
    WorkerArgsInjectionError
        If the worker specified by the key does not exist and no default value is set.

    Note
    ----
    1. Can set a default value for a `From` declaration, which will be returned when the specified worker does not exist.
    2. Will raise `WorkerArgsInjectionError` if the worker specified by the key does not exist and no default value is set.
    """
    key: str
    default: Optional[Any] = InjectorNone()

System dataclass

Bases: ArgsDescriptor

Implements system-level argument injection for worker parameters.

System provides access to automa-level resources and context through argument injection. It supports pattern matching for different types of system resources.

Attributes:

key : str
    The system resource key to inject. Supported keys:
      - "runtime_context": Runtime context for data persistence across worker executions.
      - "automa": The current automa instance.
      - "automa:worker_key": A sub-automa instance in the current automa.

Examples:

def worker_1(x: int, current_automa = System("automa")) -> int:
    # Access current automa instance
    current_automa.add_worker(
        key="sub_automa",
        worker=SubAutoma(),
        dependencies=["worker_1"]
    )
    return x + 1

class SubAutoma(GraphAutoma):
    @worker(is_start=True)
    def worker_0(self, user_input: int) -> int:
        return user_input + 1

class MyAutoma(GraphAutoma):
    @worker(is_start=True)
    def worker_0(self, user_input: int, rtx = System("runtime_context")) -> int:
        # Access runtime context for data persistence
        local_space = self.get_local_space(rtx)
        count = local_space.get("count", 0)
        local_space["count"] = count + 1

        self.add_func_as_worker(
            key="worker_1",
            func=worker_1,
            dependencies=["worker_0"]
        )

        return user_input + count

    @worker(dependencies=["worker_1"])
    def worker_2(self, worker_1_output: int, sub_automa = System("automa:sub_automa")) -> int:
        # Access sub-automa from worker_1
        sub_automa.add_worker(
            key="worker_3",
            worker=SubAutoma(),
            dependencies=["worker_2"],
            is_output=True,
        )
        return worker_1_output + 1

Returns:

Any
    The system resource specified by the key:
      - RuntimeContext, for "runtime_context".
      - An automa instance: the current automa for "automa", or a sub-automa of the current automa for "automa:worker_key".

Raises:

WorkerArgsInjectionError
    - If the key pattern is not supported.
    - If the specified resource does not exist.
    - If the specified resource is not an Automa.
Note
  1. "runtime_context" provides a RuntimeContext instance for data persistence
  2. "automa" provides access to the current automa instance
  3. "automa:worker_key" provides access to a sub-automa from the specified worker key
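The accepted key shapes can be checked with the same three regular expressions that `System.__post_init__` uses in the source below; the helper name here is illustrative.

```python
import re

# The three patterns System accepts for its key (mirroring __post_init__).
ALLOWED_PATTERNS = [r"^runtime_context$", r"^automa:.*$", r"^automa$"]

def is_supported_key(key: str) -> bool:
    """Return True if the key matches one of System's allowed patterns."""
    return any(re.match(pattern, key) for pattern in ALLOWED_PATTERNS)
```

So `System("runtime_context")` and `System("automa:sub_automa")` are valid, while an unrecognized key such as `"config"` would raise WorkerArgsInjectionError at construction time.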
Source code in bridgic/core/automa/args/_args_descriptor.py
@dataclass
class System(ArgsDescriptor):
    """
    Implementing system-level arguments injection for worker parameters.

    System provides access to automa-level resources and context through arguments 
    injection. It supports pattern matching for different types of system resources.

    Attributes
    ----------
    key : str
        The system resource key to inject. Supported keys:
        - "runtime_context": Runtime context for data persistence across worker executions.
        - "automa": Current automa instance.
        - "automa:worker_key": Sub-automa instance in current automa.

    Examples
    --------
    ```python
    def worker_1(x: int, current_automa = System("automa")) -> int:
        # Access current automa instance
        current_automa.add_worker(
            key="sub_automa",
            worker=SubAutoma(),
            dependencies=["worker_1"]
        )
        return x + 1

    class SubAutoma(GraphAutoma):
        @worker(is_start=True)
        def worker_0(self, user_input: int) -> int:
            return user_input + 1

    class MyAutoma(GraphAutoma):
        @worker(is_start=True)
        def worker_0(self, user_input: int, rtx = System("runtime_context")) -> int:
            # Access runtime context for data persistence
            local_space = self.get_local_space(rtx)
            count = local_space.get("count", 0)
            local_space["count"] = count + 1

            self.add_func_as_worker(
                key="worker_1",
                func=worker_1,
                dependencies=["worker_0"]
            )

            return user_input + count

        @worker(dependencies=["worker_1"])
        def worker_2(self, worker_1_output: int, sub_automa = System("automa:sub_automa")) -> int:
            # Access sub-automa from worker_1
            sub_automa.add_worker(
                key="worker_3",
                worker=SubAutoma(),
                dependencies=["worker_2"],
                is_output=True,
            )
            return worker_1_output + 1
    ```

    Returns
    -------
    Any
        The system resource specified by the key:
        - RuntimeContext: For "runtime_context"
        - AutomaInstance: For current automa instance or a sub-automa instance from the current automa.

    Raises
    ------
    WorkerArgsInjectionError
        - If the key pattern is not supported.
        - If the specified resource does not exist.
        - If the specified resource is not an Automa.

    Note
    ----
    1. "runtime_context" provides a `RuntimeContext` instance for data persistence
    2. "automa" provides access to the current automa instance
    3. "automa:worker_key" provides access to a sub-automa from the specified worker key
    """
    key: str

    def __post_init__(self):
        allowed_patterns = [
            r"^runtime_context$",
            r"^automa:.*$",
            r"^automa$",
        ]

        if not any(re.match(pattern, self.key) for pattern in allowed_patterns):
            raise WorkerArgsInjectionError(
                f"Key '{self.key}' is not supported. Supported keys: \n"
                f"- `runtime_context`: a context for data persistence of the current worker.\n"
                f"- `automa:<worker_key>`: a sub-automa in current automa.\n"
                f"- `automa`: the current automa instance.\n"
            )

InOrder dataclass

A descriptor to indicate that data should be distributed to multiple workers.

When input arguments or a worker's result are wrapped with this descriptor, the data is distributed to downstream workers instead of being gathered as a single value: the wrapped Sequence is split and its elements are dispatched in order, element-wise, to the downstream workers as their actual inputs.

Parameters:

data : Union[List, Tuple] (required)
    The data to be distributed. Must be a list or tuple whose length matches the number of workers that will receive it.

Raises:

Type Description
ValueError

If the data is not a list or tuple.
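Element-wise fan-out plus InOrder's type validation can be sketched with a runnable stand-in; `InOrderDemo` and `fan_out` are illustrative, not Bridgic's API.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple, Union

@dataclass
class InOrderDemo:
    """Stand-in for InOrder that reproduces only its type validation."""
    data: Union[List, Tuple]

    def __post_init__(self):
        if not isinstance(self.data, (list, tuple)):
            raise ValueError(f"The data must be a list or tuple, but got {type(self.data)}")

def fan_out(wrapped: InOrderDemo, downstream: List[str]) -> Dict[str, object]:
    """Element-wise dispatch: item i goes to the i-th downstream worker."""
    if len(wrapped.data) != len(downstream):
        raise ValueError("data length must match the number of downstream workers")
    return dict(zip(downstream, wrapped.data))
```

Wrapping anything other than a list or tuple (e.g. a bare int) fails at construction time, before any dispatch happens.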

Source code in bridgic/core/automa/args/_args_binding.py
@dataclass
class InOrder:
    """
    A descriptor to indicate that data should be distributed to multiple workers. 

    When input arguments or a worker's result are wrapped with this descriptor, the data is
    distributed to downstream workers instead of being gathered as a single value: the wrapped
    Sequence is split and its elements are dispatched in order, element-wise, to the downstream
    workers as their actual inputs.

    Parameters
    ----------
    data : Union[List, Tuple]
        The data to be distributed. Must be a list or tuple with length matching
        the number of workers that will receive it.

    Raises
    ------
    ValueError
        If the data is not a list or tuple.
    """
    data: Union[List, Tuple]

    def __post_init__(self):
        if not isinstance(self.data, (list, tuple)):
            raise ValueError(f"The data must be a list or tuple, but got {type(self.data)}")