
worker

The Worker module defines the Worker concept and its related implementations in Automa.

This module provides the core abstractions and implementations of Worker, including:

  • Worker: The base class for all workers, which is the basic execution unit in Automa, defining the execution interface (arun() and run() methods) for nodes
  • CallableWorker: A worker implementation for wrapping callable objects (functions or methods)
  • WorkerCallback: A callback interface during worker execution, supporting validation, monitoring, and log collection before and after execution
  • WorkerCallbackBuilder: A builder for constructing and configuring worker callbacks

T_WorkerCallback module-attribute

T_WorkerCallback = TypeVar(
    "T_WorkerCallback", bound="WorkerCallback"
)

Type variable for WorkerCallback subclasses.

Worker

Bases: Serializable

This class is the base class for all workers.

Worker has two methods that may be overridden by the subclass:

  1. arun(): This asynchronous method should be implemented when your worker does not require near-immediate scheduling after all its task dependencies are fulfilled, and when the overall workflow is not sensitive to fair sharing of CPU resources between workers. If workers can afford to retain and occupy execution resources for their entire execution duration, and there is no explicit need for fair CPU time-sharing or timely scheduling, you should implement arun() and allow workers to run to completion as cooperative tasks within the event loop.

  2. run(): This synchronous method should be implemented when either of the following holds:

    • a. The automa includes other workers that require timely access to CPU resources (for example, workers that must respond quickly or are sensitive to scheduling latency).

    • b. The current worker itself should be scheduled as soon as all its task dependencies are met, to maintain overall workflow responsiveness. In these cases, run() enables the framework to offload your worker to a thread pool, ensuring that CPU time is shared fairly among all such workers and the event loop remains responsive.

In summary, if you are unsure whether your task requires quick scheduling, it is recommended to implement the arun() method. Otherwise, implement run() ONLY if you are certain that you agree to share CPU time slices with other workers.
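The distinction can be sketched with two toy subclasses. This is a minimal, self-contained sketch: the stand-in Worker base class below only mimics the interface of bridgic's Worker, and FetchWorker/HashWorker are hypothetical names.

```python
import asyncio

# Stand-in for bridgic's Worker base class, included only so this sketch
# is self-contained; the real class lives in bridgic.core.automa.worker.
class Worker:
    async def arun(self, *args, **kwargs):
        raise NotImplementedError

    def run(self, *args, **kwargs):
        raise NotImplementedError

# Cooperative worker: no need for immediate scheduling or fair CPU
# sharing, so it overrides arun() and runs inside the event loop.
class FetchWorker(Worker):
    async def arun(self, url: str) -> str:
        await asyncio.sleep(0)  # stands in for real async I/O
        return f"fetched:{url}"

# CPU-bound worker: overrides run() so the framework can offload it to
# a thread pool and keep the event loop responsive.
class HashWorker(Worker):
    def run(self, text: str) -> int:
        return sum(ord(c) for c in text)

result = asyncio.run(FetchWorker().arun("example.com"))
digest = HashWorker().run("abc")
```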

Source code in bridgic/core/automa/worker/_worker.py
class Worker(Serializable):
    """
    This class is the base class for all workers.

    `Worker` has two methods that may be overridden by the subclass:

    1. `arun()`: This asynchronous method should be implemented when your worker 
    does not require near-immediate scheduling after all its task dependencies 
    are fulfilled, and when the overall workflow is not sensitive to fair sharing 
    of CPU resources between workers. If workers can afford to retain and occupy 
    execution resources for their entire execution duration, and there is no 
    explicit need for fair CPU time-sharing or timely scheduling, you should 
    implement `arun()` and allow workers to run to completion as cooperative tasks 
    within the event loop.

    2. `run()`: This synchronous method should be implemented when either of the 
    following holds:

        - a. The automa includes other workers that require timely access to CPU 
        resources (for example, workers that must respond quickly or are sensitive 
        to scheduling latency).

        - b. The current worker itself should be scheduled as soon as all its task 
        dependencies are met, to maintain overall workflow responsiveness. In these 
        cases, `run()` enables the framework to offload your worker to a thread pool, 
        ensuring that CPU time is shared fairly among all such workers and the event 
        loop remains responsive.

    In summary, if you are unsure whether your task requires quick scheduling, 
    it is recommended to implement the `arun()` method. Otherwise, implement 
    `run()` **ONLY** if you are certain that you agree to share CPU time slices 
    with other workers.
    """

    # TODO : Maybe process pool of the Automa is needed.

    __parent: "Automa"
    __local_space: Dict[str, Any]

    # Cached method signatures, with no need for serialization.
    __cached_param_names_of_arun: Dict[_ParameterKind, List[Tuple[str, Any]]]
    __cached_param_names_of_run: Dict[_ParameterKind, List[Tuple[str, Any]]]

    def __init__(self):
        self.__parent = self
        self.__local_space = {}

        # Cached method signatures, with no need for serialization.
        self.__cached_param_names_of_arun = None
        self.__cached_param_names_of_run = None

    async def arun(self, *args: Tuple[Any, ...], **kwargs: Dict[str, Any]) -> Any:
        """
        The asynchronous method to run the worker.
        """
        loop = asyncio.get_running_loop()
        top_level_automa = self._get_top_level_automa()
        if top_level_automa:
            thread_pool = top_level_automa.thread_pool
            if thread_pool:
                rx_param_names_dict = self.get_input_param_names()
                rx_args, rx_kwargs = safely_map_args(args, kwargs, rx_param_names_dict)
                # kwargs can only be passed by functools.partial.
                return await loop.run_in_executor(thread_pool, partial(self.run, *rx_args, **rx_kwargs))

        # Unexpected: No thread pool is available.
        # Case 1: the worker is not inside an Automa (uncommon case).
        # Case 2: no thread pool is setup by the top-level automa.
        raise WorkerRuntimeError(f"No thread pool is available for the worker {type(self)}")

    def run(self, *args: Tuple[Any, ...], **kwargs: Dict[str, Any]) -> Any:
        """
        The synchronous method to run the worker.
        """
        raise NotImplementedError(f"run() is not implemented in {type(self)}")

    def is_top_level(self) -> bool:
        """
        Check if the current worker is the top-level worker.

        Returns
        -------
        bool
            True if the current worker is the top-level worker (parent is self), False otherwise.
        """
        return self.parent is self

    def _get_top_level_automa(self) -> Optional["Automa"]:
        """
        Get the top-level automa instance reference.
        """
        # If the current automa is the top-level automa, return itself.
        from bridgic.core.automa._automa import Automa
        if isinstance(self, Automa):
            top_level_automa = self
        else:
            top_level_automa = self.parent
        while top_level_automa and (not top_level_automa.is_top_level()):
            top_level_automa = top_level_automa.parent
        return top_level_automa

    def get_input_param_names(self) -> Dict[_ParameterKind, List[Tuple[str, Any]]]:
        """
        Get the names of input parameters of the worker.
        Use cached result if available in order to improve performance.

        This method detects whether the subclass has overridden `arun`; if so, it
        returns the `arun` parameter signature, otherwise the `run` signature.

        Returns
        -------
        Dict[_ParameterKind, List[Tuple[str, Any]]]
            A dictionary of input parameter names by the kind of the parameter.
            The key is the kind of the parameter, which is one of five possible values:

            - inspect.Parameter.POSITIONAL_ONLY
            - inspect.Parameter.POSITIONAL_OR_KEYWORD
            - inspect.Parameter.VAR_POSITIONAL
            - inspect.Parameter.KEYWORD_ONLY
            - inspect.Parameter.VAR_KEYWORD
        """
        # Check if user has overridden the arun method
        if self._is_arun_overridden():
            # User overrode arun method, return arun method parameters
            if self.__cached_param_names_of_arun is None:
                self.__cached_param_names_of_arun = get_param_names_all_kinds(self.arun)
            return self.__cached_param_names_of_arun
        else:
            # User is using run method, return run method parameters
            if self.__cached_param_names_of_run is None:
                self.__cached_param_names_of_run = get_param_names_all_kinds(self.run)
            return self.__cached_param_names_of_run

    def _is_arun_overridden(self) -> bool:
        """
        Check if the user has overridden the arun method.
        """
        # Compare method references - much faster than inspect.getsource()
        return self.arun.__func__ is not Worker.arun

    @property
    def parent(self) -> "Automa":
        return self.__parent

    @parent.setter
    def parent(self, value: "Automa"):
        self.__parent = value

    @property
    def local_space(self) -> Dict[str, Any]:
        return self.__local_space

    @local_space.setter
    def local_space(self, value: Dict[str, Any]):
        self.__local_space = value

    def get_report_info(self) -> Dict[str, Any]:
        report_info = {}
        report_info["local_space"] = self.__local_space
        return report_info

    @override
    def dump_to_dict(self) -> Dict[str, Any]:
        state_dict = {}
        state_dict["local_space"] = self.__local_space
        return state_dict

    @override
    def load_from_dict(self, state_dict: Dict[str, Any]) -> None:
        self.__parent = self
        self.__local_space = state_dict["local_space"]

        # Cached method signatures, with no need for serialization.
        self.__cached_param_names_of_arun = None
        self.__cached_param_names_of_run = None

    def ferry_to(self, key: str, /, *args, **kwargs):
        """
        Hand off control flow to the specified worker, passing along any arguments as needed.
        The specified worker will always start to run asynchronously on the next event loop iteration, regardless of its dependencies.

        Parameters
        ----------
        key : str
            The key of the worker to run.
        args : optional
            Positional arguments to be passed.
        kwargs : optional
            Keyword arguments to be passed.
        """
        if self.is_top_level():
            raise WorkerRuntimeError(f"`ferry_to` method can only be called by a worker inside an Automa")
        self.parent.ferry_to(key, *args, **kwargs)

    def post_event(self, event: Event) -> None:
        """
        Post an event to the application layer outside the Automa.

        The event handler implemented by the application layer will be called in the same thread as the worker (maybe the main thread or a new thread from the thread pool).

        Note that `post_event` can be called in a non-async method or an async method.

        The event will be bubbled up to the top-level Automa, where it will be processed by the event handler registered with the event type.

        Parameters
        ----------
        event: Event
            The event to be posted.
        """
        if self.is_top_level():
            raise WorkerRuntimeError(f"`post_event` method can only be called by a worker inside an Automa")
        self.parent.post_event(event)

    def request_feedback(
        self, 
        event: Event,
        timeout: Optional[float] = None
    ) -> Feedback:
        """
        Request feedback for the specified event from the application layer outside the Automa. This method blocks the caller until the feedback is received.

        Note that `request_feedback` should only be called from within a non-async method running in a thread of the Automa thread pool.

        Parameters
        ----------
        event: Event
            The event to be posted to the event handler implemented by the application layer.
        timeout: Optional[float]
            A float or int number of seconds to wait for if the feedback is not received. If None, then there is no limit on the wait time.

        Returns
        -------
        Feedback
            The feedback received from the application layer.

        Raises
        ------
        TimeoutError
            If the feedback is not received before the timeout. Note that the raised exception is the built-in `TimeoutError` exception, instead of asyncio.TimeoutError or concurrent.futures.TimeoutError!
        """
        if self.is_top_level():
            raise WorkerRuntimeError(f"`request_feedback` method can only be called by a worker inside an Automa")
        return self.parent.request_feedback(event, timeout)

    async def request_feedback_async(
        self, 
        event: Event,
        timeout: Optional[float] = None
    ) -> Feedback:
        """
        Request feedback for the specified event from the application layer outside the Automa. This coroutine suspends the caller until the feedback is received, without blocking the event loop.

        The event handler implemented by the application layer will be called on the next event loop iteration, in the main thread.

        Note that `request_feedback_async` should only be called from within an asynchronous method running in the main event loop of the top-level Automa.

        Parameters
        ----------
        event: Event
            The event to be posted to the event handler implemented by the application layer.
        timeout: Optional[float]
            A float or int number of seconds to wait for if the feedback is not received. If None, then there is no limit on the wait time.

        Returns
        -------
        Feedback
            The feedback received from the application layer.

        Raises
        ------
        TimeoutError
            If the feedback is not received before the timeout. Note that the raised exception is the built-in `TimeoutError` exception, instead of asyncio.TimeoutError!
        """
        if self.is_top_level():
            raise WorkerRuntimeError(f"`request_feedback_async` method can only be called by a worker inside an Automa")
        return await self.parent.request_feedback_async(event, timeout)

    def interact_with_human(self, event: Event) -> InteractionFeedback:
        if self.is_top_level():
            raise WorkerRuntimeError(f"`interact_with_human` method can only be called by a worker inside an Automa")
        return self.parent.interact_with_human(event, self)
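The `_is_arun_overridden` check in the class above relies on a general Python idiom: a bound method's `__func__` attribute is the underlying function object, so an identity comparison against the base-class attribute distinguishes an overridden method from an inherited one. A small standalone demonstration (Base, Child, and Plain are illustrative names, not part of bridgic):

```python
# Override detection via __func__ identity, the same trick used by
# Worker._is_arun_overridden().
class Base:
    async def arun(self):
        pass

class Child(Base):
    async def arun(self):  # overrides Base.arun
        pass

class Plain(Base):
    pass                   # inherits Base.arun unchanged

overridden = Child().arun.__func__ is not Base.arun   # True
inherited = Plain().arun.__func__ is Base.arun        # True
```

This is much cheaper than source-based checks such as `inspect.getsource()`, since it is a single pointer comparison.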

arun

async
arun(
    *args: Tuple[Any, ...], **kwargs: Dict[str, Any]
) -> Any

The asynchronous method to run the worker.

Source code in bridgic/core/automa/worker/_worker.py
async def arun(self, *args: Tuple[Any, ...], **kwargs: Dict[str, Any]) -> Any:
    """
    The asynchronous method to run the worker.
    """
    loop = asyncio.get_running_loop()
    top_level_automa = self._get_top_level_automa()
    if top_level_automa:
        thread_pool = top_level_automa.thread_pool
        if thread_pool:
            rx_param_names_dict = self.get_input_param_names()
            rx_args, rx_kwargs = safely_map_args(args, kwargs, rx_param_names_dict)
            # kwargs can only be passed by functools.partial.
            return await loop.run_in_executor(thread_pool, partial(self.run, *rx_args, **rx_kwargs))

    # Unexpected: No thread pool is available.
    # Case 1: the worker is not inside an Automa (uncommon case).
    # Case 2: no thread pool is setup by the top-level automa.
    raise WorkerRuntimeError(f"No thread pool is available for the worker {type(self)}")

run

run(
    *args: Tuple[Any, ...], **kwargs: Dict[str, Any]
) -> Any

The synchronous method to run the worker.

Source code in bridgic/core/automa/worker/_worker.py
def run(self, *args: Tuple[Any, ...], **kwargs: Dict[str, Any]) -> Any:
    """
    The synchronous method to run the worker.
    """
    raise NotImplementedError(f"run() is not implemented in {type(self)}")

is_top_level

is_top_level() -> bool

Check if the current worker is the top-level worker.

Returns:

Type Description
bool

True if the current worker is the top-level worker (parent is self), False otherwise.

Source code in bridgic/core/automa/worker/_worker.py
def is_top_level(self) -> bool:
    """
    Check if the current worker is the top-level worker.

    Returns
    -------
    bool
        True if the current worker is the top-level worker (parent is self), False otherwise.
    """
    return self.parent is self

get_input_param_names

get_input_param_names() -> (
    Dict[_ParameterKind, List[Tuple[str, Any]]]
)

Get the names of input parameters of the worker. Use cached result if available in order to improve performance.

This method detects whether the subclass has overridden the arun method; if so, it returns the arun parameter signature, otherwise the run signature.

Returns:

Type Description
Dict[_ParameterKind, List[Tuple[str, Any]]]

A dictionary of input parameter names by the kind of the parameter. The key is the kind of the parameter, which is one of five possible values:

  • inspect.Parameter.POSITIONAL_ONLY
  • inspect.Parameter.POSITIONAL_OR_KEYWORD
  • inspect.Parameter.VAR_POSITIONAL
  • inspect.Parameter.KEYWORD_ONLY
  • inspect.Parameter.VAR_KEYWORD
Source code in bridgic/core/automa/worker/_worker.py
def get_input_param_names(self) -> Dict[_ParameterKind, List[Tuple[str, Any]]]:
    """
    Get the names of input parameters of the worker.
    Use cached result if available in order to improve performance.

    This method detects whether the subclass has overridden `arun`; if so, it
    returns the `arun` parameter signature, otherwise the `run` signature.

    Returns
    -------
    Dict[_ParameterKind, List[Tuple[str, Any]]]
        A dictionary of input parameter names by the kind of the parameter.
        The key is the kind of the parameter, which is one of five possible values:

        - inspect.Parameter.POSITIONAL_ONLY
        - inspect.Parameter.POSITIONAL_OR_KEYWORD
        - inspect.Parameter.VAR_POSITIONAL
        - inspect.Parameter.KEYWORD_ONLY
        - inspect.Parameter.VAR_KEYWORD
    """
    # Check if user has overridden the arun method
    if self._is_arun_overridden():
        # User overrode arun method, return arun method parameters
        if self.__cached_param_names_of_arun is None:
            self.__cached_param_names_of_arun = get_param_names_all_kinds(self.arun)
        return self.__cached_param_names_of_arun
    else:
        # User is using run method, return run method parameters
        if self.__cached_param_names_of_run is None:
            self.__cached_param_names_of_run = get_param_names_all_kinds(self.run)
        return self.__cached_param_names_of_run
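The grouping returned by get_input_param_names() can be reproduced with the standard inspect module. The helper below is a sketch, not bridgic's actual get_param_names_all_kinds (which is internal and not shown on this page):

```python
import inspect
from collections import defaultdict

# Group a callable's parameters by inspect's five parameter kinds,
# pairing each name with its default (inspect.Parameter.empty if none).
def group_params_by_kind(func):
    grouped = defaultdict(list)
    for name, param in inspect.signature(func).parameters.items():
        grouped[param.kind].append((name, param.default))
    return dict(grouped)

def run(self, x, *args, mode="fast", **kwargs):
    ...

kinds = group_params_by_kind(run)
# kinds[inspect.Parameter.KEYWORD_ONLY] == [("mode", "fast")]
```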

ferry_to

ferry_to(key: str, /, *args, **kwargs)

Hand off control flow to the specified worker, passing along any arguments as needed. The specified worker will always start to run asynchronously on the next event loop iteration, regardless of its dependencies.

Parameters:

Name Type Description Default
key str

The key of the worker to run.

required
args optional

Positional arguments to be passed.

()
kwargs optional

Keyword arguments to be passed.

{}
Source code in bridgic/core/automa/worker/_worker.py
def ferry_to(self, key: str, /, *args, **kwargs):
    """
    Hand off control flow to the specified worker, passing along any arguments as needed.
    The specified worker will always start to run asynchronously on the next event loop iteration, regardless of its dependencies.

    Parameters
    ----------
    key : str
        The key of the worker to run.
    args : optional
        Positional arguments to be passed.
    kwargs : optional
        Keyword arguments to be passed.
    """
    if self.is_top_level():
        raise WorkerRuntimeError(f"`ferry_to` method can only be called by a worker inside an Automa")
    self.parent.ferry_to(key, *args, **kwargs)

post_event

post_event(event: Event) -> None

Post an event to the application layer outside the Automa.

The event handler implemented by the application layer will be called in the same thread as the worker (maybe the main thread or a new thread from the thread pool).

Note that post_event can be called in a non-async method or an async method.

The event will be bubbled up to the top-level Automa, where it will be processed by the event handler registered with the event type.

Parameters:

Name Type Description Default
event Event

The event to be posted.

required
Source code in bridgic/core/automa/worker/_worker.py
def post_event(self, event: Event) -> None:
    """
    Post an event to the application layer outside the Automa.

    The event handler implemented by the application layer will be called in the same thread as the worker (maybe the main thread or a new thread from the thread pool).

    Note that `post_event` can be called in a non-async method or an async method.

    The event will be bubbled up to the top-level Automa, where it will be processed by the event handler registered with the event type.

    Parameters
    ----------
    event: Event
        The event to be posted.
    """
    if self.is_top_level():
        raise WorkerRuntimeError(f"`post_event` method can only be called by a worker inside an Automa")
    self.parent.post_event(event)

request_feedback

request_feedback(
    event: Event, timeout: Optional[float] = None
) -> Feedback

Request feedback for the specified event from the application layer outside the Automa. This method blocks the caller until the feedback is received.

Note that request_feedback should only be called from within a non-async method running in a thread of the Automa thread pool.

Parameters:

Name Type Description Default
event Event

The event to be posted to the event handler implemented by the application layer.

required
timeout Optional[float]

A float or int number of seconds to wait for if the feedback is not received. If None, then there is no limit on the wait time.

None

Returns:

Type Description
Feedback

The feedback received from the application layer.

Raises:

Type Description
TimeoutError

If the feedback is not received before the timeout. Note that the raised exception is the built-in TimeoutError exception, instead of asyncio.TimeoutError or concurrent.futures.TimeoutError!

Source code in bridgic/core/automa/worker/_worker.py
def request_feedback(
    self, 
    event: Event,
    timeout: Optional[float] = None
) -> Feedback:
    """
    Request feedback for the specified event from the application layer outside the Automa. This method blocks the caller until the feedback is received.

    Note that `request_feedback` should only be called from within a non-async method running in a thread of the Automa thread pool.

    Parameters
    ----------
    event: Event
        The event to be posted to the event handler implemented by the application layer.
    timeout: Optional[float]
        A float or int number of seconds to wait for if the feedback is not received. If None, then there is no limit on the wait time.

    Returns
    -------
    Feedback
        The feedback received from the application layer.

    Raises
    ------
    TimeoutError
        If the feedback is not received before the timeout. Note that the raised exception is the built-in `TimeoutError` exception, instead of asyncio.TimeoutError or concurrent.futures.TimeoutError!
    """
    if self.is_top_level():
        raise WorkerRuntimeError(f"`request_feedback` method can only be called by a worker inside an Automa")
    return self.parent.request_feedback(event, timeout)
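Because the raised exception is the built-in TimeoutError, callers can catch it directly. The sketch below substitutes a stub for request_feedback (the stub and the event name are hypothetical) purely to show the handling pattern:

```python
# Stub standing in for Worker.request_feedback: pretend no feedback
# arrives before the deadline, so the built-in TimeoutError is raised.
def request_feedback_stub(event, timeout=None):
    raise TimeoutError(f"no feedback for {event!r} within {timeout}s")

try:
    feedback = request_feedback_stub("approve_step", timeout=0.5)
except TimeoutError as exc:  # the BUILT-IN TimeoutError, per the docs
    feedback = None
    message = str(exc)
```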

request_feedback_async

async
request_feedback_async(
    event: Event, timeout: Optional[float] = None
) -> Feedback

Request feedback for the specified event from the application layer outside the Automa. This coroutine suspends the caller until the feedback is received, without blocking the event loop.

The event handler implemented by the application layer will be called on the next event loop iteration, in the main thread.

Note that request_feedback_async should only be called from within an asynchronous method running in the main event loop of the top-level Automa.

Parameters:

Name Type Description Default
event Event

The event to be posted to the event handler implemented by the application layer.

required
timeout Optional[float]

A float or int number of seconds to wait for if the feedback is not received. If None, then there is no limit on the wait time.

None

Returns:

Type Description
Feedback

The feedback received from the application layer.

Raises:

Type Description
TimeoutError

If the feedback is not received before the timeout. Note that the raised exception is the built-in TimeoutError exception, instead of asyncio.TimeoutError!

Source code in bridgic/core/automa/worker/_worker.py
async def request_feedback_async(
    self, 
    event: Event,
    timeout: Optional[float] = None
) -> Feedback:
    """
    Request feedback for the specified event from the application layer outside the Automa. This coroutine suspends the caller until the feedback is received, without blocking the event loop.

    The event handler implemented by the application layer will be called on the next event loop iteration, in the main thread.

    Note that `request_feedback_async` should only be called from within an asynchronous method running in the main event loop of the top-level Automa.

    Parameters
    ----------
    event: Event
        The event to be posted to the event handler implemented by the application layer.
    timeout: Optional[float]
        A float or int number of seconds to wait for if the feedback is not received. If None, then there is no limit on the wait time.

    Returns
    -------
    Feedback
        The feedback received from the application layer.

    Raises
    ------
    TimeoutError
        If the feedback is not received before the timeout. Note that the raised exception is the built-in `TimeoutError` exception, instead of asyncio.TimeoutError!
    """
    if self.is_top_level():
        raise WorkerRuntimeError(f"`request_feedback_async` method can only be called by a worker inside an Automa")
    return await self.parent.request_feedback_async(event, timeout)

CallableWorker

Bases: Worker

This class is a worker that wraps a callable object, such as functions or methods.

Parameters:

Name Type Description Default
func_or_method Optional[Callable]

The callable to be wrapped by the worker. If func_or_method is None, state_dict must be provided.

None
Source code in bridgic/core/automa/worker/_callable_worker.py
class CallableWorker(Worker):
    """
    This class is a worker that wraps a callable object, such as functions or methods.

    Parameters
    ----------
    func_or_method : Optional[Callable]
        The callable to be wrapped by the worker. If `func_or_method` is None, 
        `state_dict` must be provided.
    """
    _is_async: bool
    _callable: Callable
    # Used for deserialization.
    _expected_bound_parent: bool

    # Cached method signatures, with no need for serialization.
    __cached_param_names_of_callable: Dict[_ParameterKind, List[str]]

    def __init__(
        self, 
        func_or_method: Optional[Callable] = None,
    ):
        """
        Parameters
        ----------
        func_or_method : Optional[Callable]
            The callable to be wrapped by the worker. If `func_or_method` is None, 
            `state_dict` must be provided.
        """
        super().__init__()
        self._is_async = inspect.iscoroutinefunction(func_or_method)
        self._callable = func_or_method
        self._expected_bound_parent = False

        # Cached method signatures, with no need for serialization.
        self.__cached_param_names_of_callable = None

    async def arun(self, *args: Tuple[Any, ...], **kwargs: Dict[str, Any]) -> Any:
        if self._expected_bound_parent:
            raise WorkerRuntimeError(
                f"The callable is expected to be bound to the parent, "
                f"but not bounded yet: {self._callable}"
            )
        if self._is_async:
            return await self._callable(*args, **kwargs)
        return await super().arun(*args, **kwargs)

    def run(self, *args: Tuple[Any, ...], **kwargs: Dict[str, Any]) -> Any:
        assert self._is_async is False
        return self._callable(*args, **kwargs)

    @override
    def get_input_param_names(self) -> Dict[_ParameterKind, List[str]]:
        """
        Get the names of input parameters of this callable worker.
        Use cached result if available in order to improve performance.

        Returns
        -------
        Dict[_ParameterKind, List[str]]
            A dictionary of input parameter names by the kind of the parameter.
            The key is the kind of the parameter, which is one of five possible values:

            - inspect.Parameter.POSITIONAL_ONLY
            - inspect.Parameter.POSITIONAL_OR_KEYWORD
            - inspect.Parameter.VAR_POSITIONAL
            - inspect.Parameter.KEYWORD_ONLY
            - inspect.Parameter.VAR_KEYWORD
        """
        if self.__cached_param_names_of_callable is None:
            self.__cached_param_names_of_callable = get_param_names_all_kinds(self._callable)
        return self.__cached_param_names_of_callable

    @override
    def dump_to_dict(self) -> Dict[str, Any]:
        state_dict = super().dump_to_dict()
        state_dict["is_async"] = self._is_async
        # Note: Not to use pickle to serialize the callable here.
        # We customize the serialization of the callable to avoid creating instances multiple times and to minimize side effects.
        bounded = isinstance(self._callable, MethodType)
        state_dict["bounded"] = bounded
        if bounded:
            if self._callable.__self__ is self.parent:
                state_dict["callable_name"] = self._callable.__module__ + "." + self._callable.__qualname__
            else:
                state_dict["pickled_callable"] = pickle.dumps(self._callable)
        else:
            state_dict["callable_name"] = self._callable.__module__ + "." + self._callable.__qualname__
        return state_dict

    @override
    def load_from_dict(self, state_dict: Dict[str, Any]) -> None:
        super().load_from_dict(state_dict)
        # Deserialize from the state_dict.
        self._is_async = state_dict["is_async"]
        bounded = state_dict["bounded"]
        if bounded:
            pickled_callable = state_dict.get("pickled_callable", None)
            if pickled_callable is None:
                self._callable = load_qualified_class_or_func(state_dict["callable_name"])
                # Partially deserialized, need to be bound to the parent.
                self._expected_bound_parent = True
            else:
                self._callable = pickle.loads(pickled_callable)
                self._expected_bound_parent = False
        else:
            self._callable = load_qualified_class_or_func(state_dict["callable_name"])
            self._expected_bound_parent = False

        # Cached method signatures, with no need for serialization.
        self.__cached_param_names_of_callable = None

    @property
    def callable(self):
        return self._callable

    @property
    def parent(self) -> "Automa":
        return super().parent

    @parent.setter
    def parent(self, value: "Automa"):
        if self._expected_bound_parent:
            self._callable = MethodType(self._callable, value)
            self._expected_bound_parent = False
        Worker.parent.fset(self, value)

    @override
    def __str__(self) -> str:
        return f"CallableWorker(callable={self._callable.__name__})"

get_input_param_names

get_input_param_names() -> Dict[_ParameterKind, List[str]]

Get the names of input parameters of this callable worker. Use cached result if available in order to improve performance.

Returns:

Type Description
Dict[_ParameterKind, List[str]]

A dictionary of input parameter names by the kind of the parameter. The key is the kind of the parameter, which is one of five possible values:

  • inspect.Parameter.POSITIONAL_ONLY
  • inspect.Parameter.POSITIONAL_OR_KEYWORD
  • inspect.Parameter.VAR_POSITIONAL
  • inspect.Parameter.KEYWORD_ONLY
  • inspect.Parameter.VAR_KEYWORD
Source code in bridgic/core/automa/worker/_callable_worker.py
@override
def get_input_param_names(self) -> Dict[_ParameterKind, List[str]]:
    """
    Get the names of input parameters of this callable worker.
    Use cached result if available in order to improve performance.

    Returns
    -------
    Dict[_ParameterKind, List[str]]
        A dictionary of input parameter names by the kind of the parameter.
        The key is the kind of the parameter, which is one of five possible values:

        - inspect.Parameter.POSITIONAL_ONLY
        - inspect.Parameter.POSITIONAL_OR_KEYWORD
        - inspect.Parameter.VAR_POSITIONAL
        - inspect.Parameter.KEYWORD_ONLY
        - inspect.Parameter.VAR_KEYWORD
    """
    if self.__cached_param_names_of_callable is None:
        self.__cached_param_names_of_callable = get_param_names_all_kinds(self._callable)
    return self.__cached_param_names_of_callable
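The grouping performed by `get_param_names_all_kinds` (a framework helper) can be approximated with `inspect.signature`. The sketch below is an illustration of the idea, not the actual implementation:

```python
import inspect
from collections import defaultdict
from typing import Callable, Dict, List

def param_names_by_kind(func: Callable) -> Dict[inspect._ParameterKind, List[str]]:
    # Group parameter names by their kind, mirroring the five
    # inspect.Parameter kinds listed above.
    grouped: Dict[inspect._ParameterKind, List[str]] = defaultdict(list)
    for name, param in inspect.signature(func).parameters.items():
        grouped[param.kind].append(name)
    return grouped

def example(a, /, b, *args, c, **kwargs):
    pass

kinds = param_names_by_kind(example)
print(kinds[inspect.Parameter.POSITIONAL_ONLY])  # -> ['a']
print(kinds[inspect.Parameter.KEYWORD_ONLY])     # -> ['c']
```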

WorkerCallback

Bases: Serializable

Callback for the execution of a worker instance during the running of a prebuilt automa.

This class defines the interfaces that will be called before or after the execution of the corresponding worker. Callbacks are typically used for validating input, monitoring execution, collecting logs, and so on.

Methods:

Name Description
on_worker_start

Hook invoked before worker execution.

on_worker_end

Hook invoked after worker execution.

on_worker_error

Hook invoked when worker execution raises an exception.

Source code in bridgic/core/automa/worker/_worker_callback.py
class WorkerCallback(Serializable):
    """
    Callback for the execution of a worker instance during the running 
    of a prebuilt automa.

    This class defines the interfaces that will be called before or after 
    the execution of the corresponding worker. Callbacks are typically used 
    for validating input, monitoring execution, collecting logs, and so on.

    Methods
    -------
    on_worker_start(key, is_top_level, parent, arguments)
        Hook invoked before worker execution.
    on_worker_end(key, is_top_level, parent, arguments, result)
        Hook invoked after worker execution.
    on_worker_error(key, is_top_level, parent, arguments, error)
        Hook invoked when worker execution raises an exception.
    """
    async def on_worker_start(
        self, 
        key: str,
        is_top_level: bool = False,
        parent: Optional["Automa"] = None,
        arguments: Dict[str, Any] = None,
    ) -> None:
        """
        Hook invoked before worker execution.

        Called immediately before the worker runs. Use for arguments
        validation, logging, or monitoring. Cannot modify execution
        arguments or logic.

        Parameters
        ----------
        key : str
            Worker identifier.
        is_top_level: bool = False
            Whether the worker is the top-level automa. When True, parent will be the automa itself (parent is self).
        parent : Optional[Automa] = None
            Parent automa instance containing this worker. For top-level automa, parent is the automa itself.
        arguments : Dict[str, Any] = None
            Execution arguments with keys "args" and "kwargs".
        """
        pass

    async def on_worker_end(
        self,
        key: str,
        is_top_level: bool = False,
        parent: Optional["Automa"] = None,
        arguments: Dict[str, Any] = None,
        result: Any = None,
    ) -> None:
        """
        Hook invoked after worker execution.

        Called immediately after the worker completes. Use for result
        monitoring, logging, event publishing, or validation. Cannot
        modify execution results or logic.

        Parameters
        ----------
        key : str
            Worker identifier.
        is_top_level: bool = False
            Whether the worker is the top-level automa. When True, parent will be the automa itself (parent is self).
        parent : Optional[Automa] = None
            Parent automa instance containing this worker. For top-level automa, parent is the automa itself.
        arguments : Dict[str, Any] = None
            Execution arguments with keys "args" and "kwargs".
        result : Any = None
            Worker execution result.
        """
        pass

    async def on_worker_error(
        self,
        key: str,
        is_top_level: bool = False,
        parent: Optional["Automa"] = None,
        arguments: Dict[str, Any] = None,
        error: Exception = None,
    ) -> bool:
        """
        Hook invoked when worker execution raises an exception.

        Called when the worker execution raises an exception. Use for error handling, logging, 
        or event publishing. Cannot modify execution logic or arguments.

        **Exception Matching Mechanism: How to Handle a Specific Exception**

        The framework enables your callback to handle a given exception based on the 
        **type annotation** of the `error` parameter in your `on_worker_error` method.
        The matching follows these rules:

        - The parameter name MUST be `error` and the type annotation is critical for the 
          matching mechanism.
        - If you annotate `error: ValueError`, it will match `ValueError` and all its 
          subclasses (e.g., `UnicodeDecodeError`).
        - If you annotate `error: Exception`, it will match all exceptions (since all exceptions 
          inherit from Exception).
        - If you want to match multiple exception types, you can use `Union[Type1, Type2, ...]`.


        **Return Value: Whether to Suppress the Exception**

        - If `on_worker_error` returns `True`, the framework will suppress the exception. 
          The framework will then proceed as if there was no error, and the worker result 
          will be set to None.
        - If `on_worker_error` returns `False`, the framework will simply observe the error; 
          after all matching callbacks are called, the framework will re-raise the exception.

        **Special Case: Interaction Exceptions Cannot Be Suppressed**

        To ensure human-interaction mechanisms work correctly, exceptions of type
        `_InteractionEventException` or `InteractionException` (including their subclasses) 
        **CANNOT** be suppressed by any callback. Even if your callback returns `True`, the 
        framework will forcibly re-raise the exception. This ensures these exceptions always 
        propagate correctly through the automa hierarchy to trigger necessary human interactions.

        Parameters
        ----------
        key : str
            Worker identifier.
        is_top_level: bool = False
            Whether the worker is the top-level automa. When True, parent will be the automa itself (parent is self).
        parent : Optional[Automa] = None
            Parent automa instance containing this worker. For top-level automa, parent is the automa itself.
        arguments : Dict[str, Any] = None
            Execution arguments with keys "args" and "kwargs".
        error : Exception = None
            The exception raised during worker execution. The type annotation of this
            parameter determines which exceptions this callback will handle. The matching
            is based on inheritance relationship (using isinstance), so a callback with
            `error: ValueError` will match ValueError and all its subclasses.

        Returns
        -------
        bool
            True if the automa should suppress the exception (not re-raise it); False otherwise.
        """
        return False

    @override
    def dump_to_dict(self) -> Dict[str, Any]:
        return {
            "callback_cls": self.__class__.__module__ + "." + self.__class__.__qualname__,
        }

    @override
    def load_from_dict(self, state_dict: Dict[str, Any]) -> None:
        pass
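A typical use of these hooks is a logging callback. The sketch below mirrors the hook signatures without importing the framework: `LoggingCallback` is a hypothetical, self-contained class rather than a real `WorkerCallback` subclass:

```python
import asyncio
from typing import Any, Dict, Optional

class LoggingCallback:
    """Collects one log line per lifecycle event (sketch, not the real base class)."""

    def __init__(self):
        self.events = []

    async def on_worker_start(self, key: str, is_top_level: bool = False,
                              parent=None,
                              arguments: Optional[Dict[str, Any]] = None) -> None:
        self.events.append(f"start:{key}")

    async def on_worker_end(self, key: str, is_top_level: bool = False,
                            parent=None, arguments=None, result: Any = None) -> None:
        self.events.append(f"end:{key}={result!r}")

    async def on_worker_error(self, key: str, is_top_level: bool = False,
                              parent=None, arguments=None,
                              error: Exception = None) -> bool:
        self.events.append(f"error:{key}:{type(error).__name__}")
        return False  # observe only; let the framework re-raise

async def demo():
    cb = LoggingCallback()
    await cb.on_worker_start("my_worker", arguments={"args": (), "kwargs": {"x": 1}})
    await cb.on_worker_end("my_worker", result=2)
    return cb.events

print(asyncio.run(demo()))  # -> ['start:my_worker', 'end:my_worker=2']
```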

on_worker_start

async
on_worker_start(
    key: str,
    is_top_level: bool = False,
    parent: Optional[Automa] = None,
    arguments: Dict[str, Any] = None,
) -> None

Hook invoked before worker execution.

Called immediately before the worker runs. Use for arguments validation, logging, or monitoring. Cannot modify execution arguments or logic.

Parameters:

Name Type Description Default
key str

Worker identifier.

required
is_top_level bool

Whether the worker is the top-level automa. When True, parent will be the automa itself (parent is self).

False
parent Optional[Automa] = None

Parent automa instance containing this worker. For top-level automa, parent is the automa itself.

None
arguments Dict[str, Any] = None

Execution arguments with keys "args" and "kwargs".

None
Source code in bridgic/core/automa/worker/_worker_callback.py
async def on_worker_start(
    self, 
    key: str,
    is_top_level: bool = False,
    parent: Optional["Automa"] = None,
    arguments: Dict[str, Any] = None,
) -> None:
    """
    Hook invoked before worker execution.

    Called immediately before the worker runs. Use for arguments
    validation, logging, or monitoring. Cannot modify execution
    arguments or logic.

    Parameters
    ----------
    key : str
        Worker identifier.
    is_top_level: bool = False
        Whether the worker is the top-level automa. When True, parent will be the automa itself (parent is self).
    parent : Optional[Automa] = None
        Parent automa instance containing this worker. For top-level automa, parent is the automa itself.
    arguments : Dict[str, Any] = None
        Execution arguments with keys "args" and "kwargs".
    """
    pass

on_worker_end

async
on_worker_end(
    key: str,
    is_top_level: bool = False,
    parent: Optional[Automa] = None,
    arguments: Dict[str, Any] = None,
    result: Any = None,
) -> None

Hook invoked after worker execution.

Called immediately after the worker completes. Use for result monitoring, logging, event publishing, or validation. Cannot modify execution results or logic.

Parameters:

Name Type Description Default
key str

Worker identifier.

required
is_top_level bool

Whether the worker is the top-level automa. When True, parent will be the automa itself (parent is self).

False
parent Optional[Automa] = None

Parent automa instance containing this worker. For top-level automa, parent is the automa itself.

None
arguments Dict[str, Any] = None

Execution arguments with keys "args" and "kwargs".

None
result Any = None

Worker execution result.

None
Source code in bridgic/core/automa/worker/_worker_callback.py
async def on_worker_end(
    self,
    key: str,
    is_top_level: bool = False,
    parent: Optional["Automa"] = None,
    arguments: Dict[str, Any] = None,
    result: Any = None,
) -> None:
    """
    Hook invoked after worker execution.

    Called immediately after the worker completes. Use for result
    monitoring, logging, event publishing, or validation. Cannot
    modify execution results or logic.

    Parameters
    ----------
    key : str
        Worker identifier.
    is_top_level: bool = False
        Whether the worker is the top-level automa. When True, parent will be the automa itself (parent is self).
    parent : Optional[Automa] = None
        Parent automa instance containing this worker. For top-level automa, parent is the automa itself.
    arguments : Dict[str, Any] = None
        Execution arguments with keys "args" and "kwargs".
    result : Any = None
        Worker execution result.
    """
    pass

on_worker_error

async
on_worker_error(
    key: str,
    is_top_level: bool = False,
    parent: Optional[Automa] = None,
    arguments: Dict[str, Any] = None,
    error: Exception = None,
) -> bool

Hook invoked when worker execution raises an exception.

Called when the worker execution raises an exception. Use for error handling, logging, or event publishing. Cannot modify execution logic or arguments.

Exception Matching Mechanism: How to Handle a Specific Exception

The framework enables your callback to handle a given exception based on the type annotation of the error parameter in your on_worker_error method. The matching follows these rules:

  • The parameter name MUST be error and the type annotation is critical for the matching mechanism.
  • If you annotate error: ValueError, it will match ValueError and all its subclasses (e.g., UnicodeDecodeError).
  • If you annotate error: Exception, it will match all exceptions (since all exceptions inherit from Exception).
  • If you want to match multiple exception types, you can use Union[Type1, Type2, ...].

Return Value: Whether to Suppress the Exception

  • If on_worker_error returns True, the framework will suppress the exception. The framework will then proceed as if there was no error, and the worker result will be set to None.
  • If on_worker_error returns False, the framework will simply observe the error; after all matching callbacks are called, the framework will re-raise the exception.

Special Case: Interaction Exceptions Cannot Be Suppressed

To ensure human-interaction mechanisms work correctly, exceptions of type _InteractionEventException or InteractionException (including their subclasses) CANNOT be suppressed by any callback. Even if your callback returns True, the framework will forcibly re-raise the exception. This ensures these exceptions always propagate correctly through the automa hierarchy to trigger necessary human interactions.

Parameters:

Name Type Description Default
key str

Worker identifier.

required
is_top_level bool

Whether the worker is the top-level automa. When True, parent will be the automa itself (parent is self).

False
parent Optional[Automa] = None

Parent automa instance containing this worker. For top-level automa, parent is the automa itself.

None
arguments Dict[str, Any] = None

Execution arguments with keys "args" and "kwargs".

None
error Exception = None

The exception raised during worker execution. The type annotation of this parameter determines which exceptions this callback will handle. The matching is based on inheritance relationship (using isinstance), so a callback with error: ValueError will match ValueError and all its subclasses.

None

Returns:

Type Description
bool

True if the automa should suppress the exception (not re-raise it); False otherwise.

Source code in bridgic/core/automa/worker/_worker_callback.py
async def on_worker_error(
    self,
    key: str,
    is_top_level: bool = False,
    parent: Optional["Automa"] = None,
    arguments: Dict[str, Any] = None,
    error: Exception = None,
) -> bool:
    """
    Hook invoked when worker execution raises an exception.

    Called when the worker execution raises an exception. Use for error handling, logging, 
    or event publishing. Cannot modify execution logic or arguments.

    **Exception Matching Mechanism: How to Handle a Specific Exception**

    The framework enables your callback to handle a given exception based on the 
    **type annotation** of the `error` parameter in your `on_worker_error` method.
    The matching follows these rules:

    - The parameter name MUST be `error` and the type annotation is critical for the 
      matching mechanism.
    - If you annotate `error: ValueError`, it will match `ValueError` and all its 
      subclasses (e.g., `UnicodeDecodeError`).
    - If you annotate `error: Exception`, it will match all exceptions (since all exceptions 
      inherit from Exception).
    - If you want to match multiple exception types, you can use `Union[Type1, Type2, ...]`.


    **Return Value: Whether to Suppress the Exception**

    - If `on_worker_error` returns `True`, the framework will suppress the exception. 
      The framework will then proceed as if there was no error, and the worker result 
      will be set to None.
    - If `on_worker_error` returns `False`, the framework will simply observe the error; 
      after all matching callbacks are called, the framework will re-raise the exception.

    **Special Case: Interaction Exceptions Cannot Be Suppressed**

    To ensure human-interaction mechanisms work correctly, exceptions of type
    `_InteractionEventException` or `InteractionException` (including their subclasses) 
    **CANNOT** be suppressed by any callback. Even if your callback returns `True`, the 
    framework will forcibly re-raise the exception. This ensures these exceptions always 
    propagate correctly through the automa hierarchy to trigger necessary human interactions.

    Parameters
    ----------
    key : str
        Worker identifier.
    is_top_level: bool = False
        Whether the worker is the top-level automa. When True, parent will be the automa itself (parent is self).
    parent : Optional[Automa] = None
        Parent automa instance containing this worker. For top-level automa, parent is the automa itself.
    arguments : Dict[str, Any] = None
        Execution arguments with keys "args" and "kwargs".
    error : Exception = None
        The exception raised during worker execution. The type annotation of this
        parameter determines which exceptions this callback will handle. The matching
        is based on inheritance relationship (using isinstance), so a callback with
        `error: ValueError` will match ValueError and all its subclasses.

    Returns
    -------
    bool
        True if the automa should suppress the exception (not re-raise it); False otherwise.
    """
    return False
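The annotation-based matching described above can be illustrated with `typing.get_type_hints`. This is a sketch of the idea only, not the framework's actual dispatcher; the `matches` helper and the sample handler are hypothetical:

```python
from typing import Union, get_args, get_origin, get_type_hints

# A hypothetical callback method annotating `error` with a Union of types.
async def on_worker_error(key, is_top_level=False, parent=None,
                          arguments=None, *,
                          error: Union[ValueError, KeyError]) -> bool:
    return True

def matches(handler, exc: Exception) -> bool:
    # Read the declared type of the `error` parameter (fall back to Exception).
    annotation = get_type_hints(handler).get("error", Exception)
    # Unpack Union[...] into a tuple of types; keep a single type as-is.
    types = get_args(annotation) if get_origin(annotation) is Union else (annotation,)
    # isinstance-based matching: subclasses are covered automatically.
    return isinstance(exc, types)

class MyValueError(ValueError):
    pass

print(matches(on_worker_error, MyValueError()))  # True: subclass of ValueError
print(matches(on_worker_error, TypeError()))     # False: not in the Union
```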

WorkerCallbackBuilder

Bases: Generic[T_WorkerCallback]

Builder class for creating instances of WorkerCallback subclasses.

This builder is designed to construct instances of subclasses of WorkerCallback. The _callback_type parameter should be a subclass of WorkerCallback, and build() will return an instance of that specific subclass. There is no need to call build() directly. Instead, the framework calls the build method automatically to create its own WorkerCallback instance for each worker instance.

Notes

Register a Callback in Different Scope

There are three ways to register a callback for three levels of customization:

  • Case 1: Use in worker decorator to register the callback for a specific worker.
  • Case 2: Use in RunningOptions to register the callback for a specific Automa instance.
  • Case 3: Use in GlobalSetting to register the callback for all workers.
Notes

Shared Instance Mode

  • When is_shared=True (default), all workers within the same scope share the same callback instance. This is useful when a single callback instance needs to maintain state across workers within the same scope, such as a connection to an external service. The scope is determined by where the builder is declared:
      • If declared in GlobalSetting: shared across all workers globally
      • If declared in RunningOptions: shared across all workers within that Automa instance
  • When is_shared=False, each worker gets its own callback instance. This is useful when an independent callback instance is needed for each worker.

Examples:

There are three ways to use the builder, for different levels of customization:

>>> # Define a custom callback class:
>>> class MyEmptyCallback(WorkerCallback):
...     pass
...
>>> # Case 1: Use in worker decorator to register the callback for a specific worker:
>>> class MyGraphAutoma(GraphAutoma):
...     @worker(callback_builders=[WorkerCallbackBuilder(MyEmptyCallback)])
...     async def my_worker(self, x: int) -> int:
...         return x + 1
...
>>> # Case 2: Use in RunningOptions to register the callback for a specific Automa instance:
...     running_options = RunningOptions(callback_builders=[WorkerCallbackBuilder(MyEmptyCallback)])
...     graph = MyGraphAutoma(running_options=running_options)
...
>>> # Case 3: Use in GlobalSetting to register the callback for all workers:
>>> GlobalSetting.set(callback_builders=[WorkerCallbackBuilder(MyEmptyCallback)])
Source code in bridgic/core/automa/worker/_worker_callback.py
class WorkerCallbackBuilder(Generic[T_WorkerCallback]):
    """
    Builder class for creating instances of `WorkerCallback` subclasses.

    This builder is designed to construct instances of subclasses of `WorkerCallback`.
    The `_callback_type` parameter should be a subclass of `WorkerCallback`, and `build()` 
    will return an instance of that specific subclass. There is no need to call `build()` 
    directly. Instead, the framework calls the `build` method automatically to create 
    its own `WorkerCallback` instance for each worker instance.

    Notes
    -----
    **Register a Callback in Different Scope**

    There are three ways to register a callback for three levels of customization:

    - Case 1: Use in worker decorator to register the callback for a specific worker.
    - Case 2: Use in RunningOptions to register the callback for a specific Automa instance.
    - Case 3: Use in GlobalSetting to register the callback for all workers.

    Notes
    -----
    **Shared Instance Mode**

    - When `is_shared=True` (default), all workers within the same scope will share the same 
      callback instance. This is useful for scenarios where a single callback instance is needed 
      to maintain some state across workers within the same scope, such as the connection to 
      an external service. The scope is determined by where the builder is declared:
      - If declared in GlobalSetting: shared across all workers globally
      - If declared in RunningOptions: shared across all workers within that Automa instance
    - When `is_shared=False`, each worker will get its own callback instance. This is useful for 
      scenarios where an independent callback instance is needed for each worker.

    Examples
    --------
    There are three ways to use the builder, for different levels of customization:

    >>> # Define a custom callback class:
    >>> class MyEmptyCallback(WorkerCallback):
    ...     pass
    ...
    >>> # Case 1: Use in worker decorator to register the callback for a specific worker:
    >>> class MyGraphAutoma(GraphAutoma):
    ...     @worker(callback_builders=[WorkerCallbackBuilder(MyEmptyCallback)])
    ...     async def my_worker(self, x: int) -> int:
    ...         return x + 1
    ...
    >>> # Case 2: Use in RunningOptions to register the callback for a specific Automa instance:
    ...     running_options = RunningOptions(callback_builders=[WorkerCallbackBuilder(MyEmptyCallback)])
    ...     graph = MyGraphAutoma(running_options=running_options)
    ...
    >>> # Case 3: Use in GlobalSetting to register the callback for all workers:
    >>> GlobalSetting.set(callback_builders=[WorkerCallbackBuilder(MyEmptyCallback)])
    """
    _callback_type: Type[T_WorkerCallback]
    """The specific subclass of `WorkerCallback` to instantiate."""
    _init_kwargs: Dict[str, Any]
    """The initialization arguments for the instance."""
    _is_shared: bool
    """Whether to use shared instance mode (reuse the same instance within the declaration scope)."""

    _shared_instance: Optional[T_WorkerCallback] = None
    """Shared instance of the callback within the declaration scope."""
    _shared_lock: Lock = Lock()
    """Lock for thread-safe shared instance creation."""

    def __init__(
        self,
        callback_type: Type[T_WorkerCallback],
        init_kwargs: Optional[Dict[str, Any]] = None,
        is_shared: bool = True,
    ):
        """
        Initialize the builder with a `WorkerCallback` subclass type.

        Parameters
        ----------
        callback_type : Type[T_WorkerCallback]
            A subclass of `WorkerCallback` to be instantiated.
        init_kwargs : Optional[Dict[str, Any]]
            Keyword arguments to pass to the subclass constructor.
        is_shared : bool, default True
            If True, the callback instance will be shared within the declaration scope;
            if False, each worker will get its own callback instance.
        """
        self._callback_type = callback_type
        self._init_kwargs = init_kwargs or {}
        self._is_shared = is_shared

    def build(self) -> T_WorkerCallback:
        """
        Build and return an instance of the specified `WorkerCallback` subclass.

        Returns
        -------
        T_WorkerCallback
            An instance of the `WorkerCallback` subclass specified during initialization.
        """
        if self._is_shared:
            if self._shared_instance is None:
                with self._shared_lock:
                    if self._shared_instance is None:
                        self._shared_instance = self._callback_type(**self._init_kwargs)
            return self._shared_instance
        else:
            return self._callback_type(**self._init_kwargs)

    @override
    def dump_to_dict(self) -> Dict[str, Any]:
        return {
            "callback_type": self._callback_type.__module__ + "." + self._callback_type.__qualname__,
            "init_kwargs": self._init_kwargs,
            "is_shared": self._is_shared,
        }

    @override
    def load_from_dict(self, state_dict: Dict[str, Any]) -> None:
        # Load the callback type from its fully qualified name
        callback_type_name = state_dict["callback_type"]
        self._callback_type = load_qualified_class_or_func(callback_type_name)

        # Load init_kwargs (default to empty dict if not present or None)
        init_kwargs = state_dict.get("init_kwargs")
        self._init_kwargs = init_kwargs if init_kwargs is not None else {}

        # Load is_shared (default to True if not present for backward compatibility)
        self._is_shared = state_dict.get("is_shared", True)

        # Reset shared instance and lock (they will be recreated when needed)
        self._shared_instance = None
        self._shared_lock = Lock()

build

build() -> T_WorkerCallback

Build and return an instance of the specified WorkerCallback subclass.

Returns:

Type Description
T_WorkerCallback

An instance of the WorkerCallback subclass specified during initialization.

Source code in bridgic/core/automa/worker/_worker_callback.py
def build(self) -> T_WorkerCallback:
    """
    Build and return an instance of the specified `WorkerCallback` subclass.

    Returns
    -------
    T_WorkerCallback
        An instance of the `WorkerCallback` subclass specified during initialization.
    """
    if self._is_shared:
        if self._shared_instance is None:
            with self._shared_lock:
                if self._shared_instance is None:
                    self._shared_instance = self._callback_type(**self._init_kwargs)
        return self._shared_instance
    else:
        return self._callback_type(**self._init_kwargs)
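The shared-instance mode in `build()` is a double-checked locking pattern. The standalone sketch below reproduces it with hypothetical `MiniBuilder` and `MyCallback` classes, since the framework types are not imported here (a per-builder lock is used for simplicity):

```python
from threading import Lock
from typing import Any, Dict, Optional, Type

class MyCallback:
    """Stand-in for a WorkerCallback subclass."""
    def __init__(self, tag: str = ""):
        self.tag = tag

class MiniBuilder:
    def __init__(self, callback_type: Type,
                 init_kwargs: Optional[Dict[str, Any]] = None,
                 is_shared: bool = True):
        self._callback_type = callback_type
        self._init_kwargs = init_kwargs or {}
        self._is_shared = is_shared
        self._shared_instance = None
        self._shared_lock = Lock()

    def build(self):
        if not self._is_shared:
            return self._callback_type(**self._init_kwargs)
        # Double-checked locking: the outer check skips the lock on the
        # fast path; the inner check prevents creating two instances.
        if self._shared_instance is None:
            with self._shared_lock:
                if self._shared_instance is None:
                    self._shared_instance = self._callback_type(**self._init_kwargs)
        return self._shared_instance

shared = MiniBuilder(MyCallback, {"tag": "a"})
assert shared.build() is shared.build()              # same instance reused

per_worker = MiniBuilder(MyCallback, is_shared=False)
assert per_worker.build() is not per_worker.build()  # fresh instance each time
```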