Context & Exposure¶
An agent's decisions are only as good as the information it receives. Context is the agent's global shared state — it holds everything the agent knows. Exposure strategies control how that state is presented to the LLM: all at once, or progressively revealed on demand.
In this tutorial, we'll build a document analysis agent that manages multiple skills and a growing execution history. We'll see how Exposure strategies prevent information overload while keeping details available when needed.
Initialize¶
First, let's set up the LLM and define the tools our document analysis agent will use.
import os
model_name = os.environ.get("MODEL_NAME")
api_key = os.environ.get("API_KEY")
api_base = os.environ.get("BASE_URL")
from bridgic.llms.openai import OpenAILlm, OpenAIConfiguration
llm = OpenAILlm(
    api_key=api_key,
    api_base=api_base,
    timeout=120,
    configuration=OpenAIConfiguration(
        model=model_name,
        temperature=0.0,
        max_tokens=16384,
    ),
)
from bridgic.core.agentic.tool_specs import FunctionToolSpec
async def read_document(doc_name: str) -> str:
    """Read the content of a document by its name."""
    docs = {
        "quarterly_report": "Q3 Revenue: $2.5M, Growth: 15%, New customers: 120...",
        "competitor_analysis": "Main competitor launched new product, market share shift...",
        "team_updates": "Engineering: shipped v2.0, Marketing: campaign launched...",
    }
    return docs.get(doc_name, f"Document '{doc_name}' not found")

async def summarize(text: str) -> str:
    """Summarize a text into key bullet points."""
    return f"Summary: Key points extracted from the given text ({len(text)} chars)"

async def extract_actions(text: str) -> str:
    """Extract action items from text."""
    return "Action items: [Review budget, Schedule follow-up, Update roadmap]"

read_document_tool = FunctionToolSpec.from_raw(read_document)
summarize_tool = FunctionToolSpec.from_raw(summarize)
extract_actions_tool = FunctionToolSpec.from_raw(extract_actions)
We have three tools:
- `read_document` — retrieves the content of a document by name from a mock document store.
- `summarize` — takes a text and returns key bullet points.
- `extract_actions` — extracts action items from a given text.
Together, these tools let our agent read, summarize, and extract actionable insights from documents.
Part 1: Exposure — How Data Is Disclosed to the LLM¶
Exposure strategies determine how context data is presented to the LLM. Bridgic Amphibious provides two strategies:
EntireExposure¶
All data is visible at once. The LLM sees everything in a single prompt. This is used for tools in CognitiveContext — the agent always sees all available tool specifications.
LayeredExposure¶
Progressive disclosure: the LLM first sees summaries, then can request details through the Acquiring cognitive policy. This is used for skills and cognitive_history — it prevents token waste on information the LLM might not need.
For cognitive_history, LayeredExposure creates a tiered memory system:
| Tier | What the LLM sees |
|---|---|
| Working memory | Recent steps shown in full detail |
| Short-term memory | Older steps shown as summaries |
| Long-term memory | Oldest steps compressed into a paragraph |
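The tiering idea can be sketched in plain Python. This is not Bridgic's actual `LayeredExposure` implementation, just a minimal illustration of how rendering history in three tiers keeps the prompt small while recent steps stay fully visible:

```python
# Minimal sketch of the three-tier rendering idea (illustrative only,
# not Bridgic's real LayeredExposure): recent steps stay verbatim,
# older steps shrink to truncated one-liners, and the oldest collapse
# into a single compressed line.

def tiered_history(steps, working=2, short_term=3):
    """Render a list of step strings as working / short-term / long-term tiers."""
    recent = steps[-working:]                              # full detail
    summarized = steps[-(working + short_term):-working]   # one-liners
    ancient = steps[:-(working + short_term)]              # compressed blob

    lines = []
    if ancient:
        # Keep only the first sentence of each ancient step, joined together.
        lines.append("Long-term: " + "; ".join(s.split(".")[0] for s in ancient))
    for s in summarized:
        lines.append("Short-term: " + s[:40])
    for s in recent:
        lines.append("Working: " + s)
    return "\n".join(lines)

steps = [f"Step {i}: did thing {i}. Extra detail here." for i in range(8)]
print(tiered_history(steps))
```

With eight steps, only the last two appear in full; the prompt cost of old steps stays roughly constant no matter how long the run gets.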
from bridgic.amphibious import (
    EntireExposure, LayeredExposure,
    CognitiveContext, Context,
)
# EntireExposure: everything is visible
# CognitiveContext.tools uses EntireExposure
# → LLM sees all tool specs in every prompt
# LayeredExposure: summary first, details on demand
# CognitiveContext.skills uses LayeredExposure
# → LLM sees skill names/summaries initially
# → Can request full content via Acquiring policy
# CognitiveContext.cognitive_history uses LayeredExposure
# → Recent steps shown in full (working memory)
# → Older steps shown as summaries (short-term memory)
# → Oldest steps compressed into a paragraph (long-term memory)
print("EntireExposure:", EntireExposure)
print("LayeredExposure:", LayeredExposure)
EntireExposure: <class 'bridgic.amphibious._context.EntireExposure'>
LayeredExposure: <class 'bridgic.amphibious._context.LayeredExposure'>
The key insight is that EntireExposure is simple and fast — great when the data is small (like a list of tool specs). But as data grows (like execution history), LayeredExposure becomes essential to avoid flooding the LLM's context window with irrelevant details.
Part 2: Context — The Agent's Shared State¶
Context is a Pydantic BaseModel that automatically detects Exposure fields and provides summary(), get_details(), and format_summary() methods.
CognitiveContext is the default implementation that ships with Bridgic Amphibious. It includes the following fields:
| Field | Type | Exposure | Purpose |
|---|---|---|---|
| goal | str | — | What the agent is trying to achieve |
| tools | CognitiveTools | EntireExposure | Available tools |
| skills | CognitiveSkills | LayeredExposure | Available skills |
| cognitive_history | CognitiveHistory | LayeredExposure | Execution history |
| observation | Optional[str] | — | Current observation |
from bridgic.amphibious import CognitiveContext
# Create a context and inspect its summary
ctx = CognitiveContext(goal="Analyze quarterly documents")
print(ctx.summary())
{'goal': 'Goal: Analyze quarterly documents', 'cognitive_history': 'Execution History: (none)'}
# format_summary() creates the text that the LLM actually sees
formatted = ctx.format_summary()
print(formatted)
Goal: Analyze quarterly documents
Execution History: (none)
summary() returns a dictionary of field summaries, while format_summary() produces the final text string that gets injected into the LLM prompt. This is the agent's window into its own state — anything not in this output is invisible to the LLM.
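The pattern is easy to picture without Bridgic. The `MiniContext` class below is a hypothetical stand-in, not the real `CognitiveContext`, showing how per-field summaries from `summary()` feed a single `format_summary()` string:

```python
# Illustrative sketch (plain Python, not Bridgic's API): a context object
# whose summary() returns per-field summaries and whose format_summary()
# joins them into the text the LLM would actually see.

class MiniContext:
    def __init__(self, goal, history=None):
        self.goal = goal
        self.history = history or []

    def summary(self):
        hist = ", ".join(self.history) if self.history else "(none)"
        return {
            "goal": f"Goal: {self.goal}",
            "history": f"Execution History: {hist}",
        }

    def format_summary(self):
        # Anything omitted here is invisible to the LLM.
        return "\n".join(self.summary().values())

ctx = MiniContext(goal="Analyze quarterly documents")
print(ctx.format_summary())
# Goal: Analyze quarterly documents
# Execution History: (none)
```

The design choice to split the two methods matters: `summary()` keeps field summaries addressable (useful for debugging one field at a time), while `format_summary()` is the single serialization point for the prompt.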
Part 3: Custom Context — Carrying Business State¶
When your agent needs domain-specific state beyond goal/tools/skills/history, you can extend CognitiveContext with your own fields. These custom fields automatically appear in the summary shown to the LLM.
from pydantic import Field, ConfigDict
from bridgic.amphibious import CognitiveContext
class DocumentAnalysisContext(CognitiveContext):
    model_config = ConfigDict(arbitrary_types_allowed=True)

    current_document: str = Field(
        default="",
        description="Name of the document currently being analyzed"
    )
    analysis_results: dict = Field(
        default_factory=dict,
        description="Accumulated analysis results keyed by document name"
    )
    priority_level: str = Field(
        default="normal",
        description="Priority level for the current analysis task"
    )
Now let's build an agent that uses our custom context and the after_action hook to keep custom fields updated after each tool execution.
from bridgic.amphibious import AmphibiousAutoma, CognitiveWorker, think_unit
from bridgic.amphibious._type import ActionResult
class DocumentAnalyzer(AmphibiousAutoma[DocumentAnalysisContext]):
    analyzer = think_unit(
        CognitiveWorker.inline(
            "Read and analyze the current document. "
            "Extract key findings and action items."
        ),
        max_attempts=4,
    )

    async def after_action(self, step_result, ctx: DocumentAnalysisContext):
        """Update custom context fields based on tool results."""
        action_result = step_result.result
        if not isinstance(action_result, ActionResult):
            return
        for step in action_result.results:
            if not step.success:
                continue
            if step.tool_name == "read_document":
                doc_name = step.tool_arguments.get("doc_name", "")
                ctx.current_document = doc_name
                ctx.analysis_results[doc_name] = step.tool_result
            elif step.tool_name == "extract_actions":
                if ctx.current_document:
                    ctx.analysis_results[f"{ctx.current_document}_actions"] = step.tool_result

    async def on_agent(self, ctx: DocumentAnalysisContext):
        await self.analyzer

agent = DocumentAnalyzer(llm=llm, verbose=True)
result = await agent.arun(
    goal="Analyze the quarterly report and competitor analysis, then extract action items from both",
    tools=[read_document_tool, summarize_tool, extract_actions_tool],
)
print(agent.context)
print(agent.context)
[16:46:44.826] [Router] (_amphibious_automa.py:1573) Auto-detecting execution mode [16:46:44.827] [Router] (_amphibious_automa.py:1579) Detected AGENT mode [16:46:44.828] [Observe] (_amphibious_automa.py:861) _PromptWorker: None [16:46:50.277] [Think] (_amphibious_automa.py:867) _PromptWorker: finish=False, step=Starting the analysis task. I need to read both the quarterly report and competitor analysis documents. Since no document is currently loaded, I'll begin by reading the quarterly report first, then proceed to the competitor analysis. After reading both, I'll summarize and extract action items from each. [16:46:50.279] [Act] (_amphibious_automa.py:873) _PromptWorker: { "content": "Starting the analysis task. I need to read both the quarterly report and competitor analysis documents. Since no document is currently loaded, I'll begin by reading the quarterly report first, then proceed to the competitor analysis. After reading both, I'll summarize and extract action items from each.", "result": { "results": [ { "tool_id": "call_0", "tool_name": "read_document", "tool_arguments": { "doc_name": "quarterly_report" }, "tool_result": "Q3 Revenue: $2.5M, Growth: 15%, New customers: 120...", "success": true, "error": null } ] }, "metadata": {}, "status": null } [16:46:50.280] [Observe] (_amphibious_automa.py:861) _PromptWorker: None [16:47:00.190] [Think] (_amphibious_automa.py:867) _PromptWorker: finish=False, step=The quarterly report has been successfully read and stored in analysis_results. The next step is to read the competitor analysis document to gather all necessary data before summarizing and extracting action items from both documents. [16:47:00.192] [Act] (_amphibious_automa.py:873) _PromptWorker: { "content": "The quarterly report has been successfully read and stored in analysis_results. 
The next step is to read the competitor analysis document to gather all necessary data before summarizing and extracting action items from both documents.", "result": { "results": [ { "tool_id": "call_0", "tool_name": "read_document", "tool_arguments": { "doc_name": "competitor_analysis" }, "tool_result": "Main competitor launched new product, market share shift...", "success": true, "error": null } ] }, "metadata": {}, "status": null } [16:47:00.193] [Observe] (_amphibious_automa.py:861) _PromptWorker: None [16:48:41.736] [Think] (_amphibious_automa.py:867) _PromptWorker: finish=False, step=Both documents have been successfully read. The next step is to summarize each document to extract key findings and then extract action items from their content. This will provide the necessary insights and actionable steps from both the quarterly report and competitor analysis. [16:48:41.738] [Act] (_amphibious_automa.py:873) _PromptWorker: { "content": "Both documents have been successfully read. The next step is to summarize each document to extract key findings and then extract action items from their content. This will provide the necessary insights and actionable steps from both the quarterly report and competitor analysis.", "result": { "results": [ { "tool_id": "call_0", "tool_name": "summarize", "tool_arguments": { "text": "Q3 Revenue: $2.5M, Growth: 15%, New customers: 120..." }, "tool_result": "Summary: Key points extracted from the given text (53 chars)", "success": true, "error": null }, { "tool_id": "call_1", "tool_name": "extract_actions", "tool_arguments": { "text": "Q3 Revenue: $2.5M, Growth: 15%, New customers: 120..." }, "tool_result": "Action items: [Review budget, Schedule follow-up, Update roadmap]", "success": true, "error": null }, { "tool_id": "call_2", "tool_name": "summarize", "tool_arguments": { "text": "Main competitor launched new product, market share shift..." 
}, "tool_result": "Summary: Key points extracted from the given text (59 chars)", "success": true, "error": null }, { "tool_id": "call_3", "tool_name": "extract_actions", "tool_arguments": { "text": "Main competitor launched new product, market share shift..." }, "tool_result": "Action items: [Review budget, Schedule follow-up, Update roadmap]", "success": true, "error": null } ] }, "metadata": {}, "status": null } [16:48:41.739] [Observe] (_amphibious_automa.py:861) _PromptWorker: None [16:49:08.550] [Think] (_amphibious_automa.py:867) _PromptWorker: finish=False, step=The competitor_analysis document has been read, but its content hasn't been summarized or had action items extracted yet. The quarterly report's analysis is complete. Next, I need to summarize the competitor_analysis text and extract its action items to fulfill the goal of analyzing both documents. [16:49:08.553] [Act] (_amphibious_automa.py:873) _PromptWorker: { "content": "The competitor_analysis document has been read, but its content hasn't been summarized or had action items extracted yet. The quarterly report's analysis is complete. Next, I need to summarize the competitor_analysis text and extract its action items to fulfill the goal of analyzing both documents.", "result": { "results": [ { "tool_id": "call_0", "tool_name": "summarize", "tool_arguments": { "text": "Main competitor launched new product, market share shift..." }, "tool_result": "Summary: Key points extracted from the given text (59 chars)", "success": true, "error": null }, { "tool_id": "call_1", "tool_name": "extract_actions", "tool_arguments": { "text": "Main competitor launched new product, market share shift..." 
}, "tool_result": "Action items: [Review budget, Schedule follow-up, Update roadmap]", "success": true, "error": null } ] }, "metadata": {}, "status": null } ================================================== DocumentAnalyzer-826e32d9 | Completed Tokens: 2688 | Time: 143.73s ================================================== ================================================== DocumentAnalysisContext ================================================== [goal] Analyze the quarterly report and competitor analysis, then extract action items from both [current_document] competitor_analysis [analysis_results] {'quarterly_report': 'Q3 Revenue: $2.5M, Growth: 15%, New customers: 120...', 'competitor_analysis': 'Main competitor launched new product, market share shift...', 'competitor_analysis_actions': 'Action items: [Review budget, Schedule follow-up, Update roadmap]'} [priority_level] normal [tools] (entire) [0] • read_document: Read the content of a document by its name [1] • summarize: Summarize a text into key bullet points [2] • extract_actions: Extract action items from text [skills] (layered) (empty) [cognitive_history] (layered) [0] [Working Memory (0-3)] [1] [0] Starting the analysis task. I need to read both the quarterly report and competitor analysis documents. Since no document is currently loaded, I'll begin by reading the quarterly report first, then proceed to the competitor analysis. After reading both, I'll summarize and extract action items from each. Result: results=[ActionStepResult(tool_id='call_0', tool_name='read_document', tool_arguments={'doc_name': 'quarterly_report'}, tool_result='Q3 Revenue: $2.5M, Growth: 15%, New customers: 120...', success=True, error=None)] [2] [1] The quarterly report has been successfully read and stored in analysis_results. The next step is to read the competitor analysis document to gather all necessary data before summarizing and extracting action items from both documents. 
Result: results=[ActionStepResult(tool_id='call_0', tool_name='read_document', tool_arguments={'doc_name': 'competitor_analysis'}, tool_result='Main competitor launched new product, market share shift...', success=True, error=None)] [3] [2] Both documents have been successfully read. The next step is to summarize each document to extract key findings and then extract action items from their content. This will provide the necessary insights and actionable steps from both the quarterly report and competitor analysis. Result: results=[ActionStepResult(tool_id='call_0', tool_name='summarize', tool_arguments={'text': 'Q3 Revenue: $2.5M, Growth: 15%, New customers: 120...'}, tool_result='Summary: Key points extracted from the given text (53 chars)', success=True, error=None), ActionStepResult(tool_id='call_1', tool_name='extract_actions', tool_arguments={'text': 'Q3 Revenue: $2.5M, Growth: 15%, New customers: 120...'}, tool_result='Action items: [Review budget, Schedule follow-up, Update roadmap]', success=True, error=N... [4] [3] The competitor_analysis document has been read, but its content hasn't been summarized or had action items extracted yet. The quarterly report's analysis is complete. Next, I need to summarize the competitor_analysis text and extract its action items to fulfill the goal of analyzing both documents. Result: results=[ActionStepResult(tool_id='call_0', tool_name='summarize', tool_arguments={'text': 'Main competitor launched new product, market share shift...'}, tool_result='Summary: Key points extracted from the given text (59 chars)', success=True, error=None), ActionStepResult(tool_id='call_1', tool_name='extract_actions', tool_arguments={'text': 'Main competitor launched new product, market share shift...'}, tool_result='Action items: [Review budget, Schedule follow-up, Update roadmap]', success=T... ==================================================
The after_action hook fires after each tool execution and updates current_document and analysis_results based on the tool's return value. This way, the LLM sees the updated custom fields in subsequent steps — it knows which documents have been analyzed and can adjust its plan accordingly.
Tip:
`after_action` follows the same worker → agent delegation pattern as `before_action`. If a worker's `after_action` returns `_DELEGATE`, the framework falls through to the agent-level hook. See the Customizing the OTC Cycle tutorial for the full hook reference.
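The fall-through behavior can be sketched in plain Python. The names below (`DELEGATE`, `worker_after_action`, `run_after_action`) are illustrative stand-ins for the pattern, not Bridgic's real internals:

```python
# Hypothetical sketch of the worker → agent fall-through pattern:
# a worker-level hook either handles a step itself or returns a
# sentinel asking the agent-level hook to take over.

DELEGATE = object()  # sentinel meaning "let the agent-level hook handle it"

def worker_after_action(step_result, ctx):
    if step_result.get("tool_name") == "read_document":
        # Handled at the worker level: record the active document.
        ctx["current_document"] = step_result["args"]["doc_name"]
        return None
    return DELEGATE  # everything else falls through

def agent_after_action(step_result, ctx):
    # Agent-level fallback: keep a simple log of unhandled tools.
    ctx.setdefault("log", []).append(step_result.get("tool_name"))

def run_after_action(step_result, ctx):
    if worker_after_action(step_result, ctx) is DELEGATE:
        agent_after_action(step_result, ctx)

ctx = {}
run_after_action({"tool_name": "read_document", "args": {"doc_name": "quarterly_report"}}, ctx)
run_after_action({"tool_name": "summarize", "args": {}}, ctx)
print(ctx)
# {'current_document': 'quarterly_report', 'log': ['summarize']}
```

The sentinel keeps the two hook levels decoupled: a worker opts out per step rather than being forced to handle every tool.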
# Custom fields appear in the context summary shown to the LLM
ctx = DocumentAnalysisContext(
    goal="Analyze documents",
    current_document="quarterly_report",
    priority_level="high",
)
print(ctx.format_summary())
Goal: Analyze documents
current_document (Name of the document currently being analyzed):
quarterly_report
analysis_results (Accumulated analysis results keyed by document name):
{}
priority_level (Priority level for the current analysis task):
high
Execution History: (none)
Notice how current_document and priority_level appear in the formatted summary. The LLM now has access to this domain-specific state when making decisions, without any extra prompt engineering on your part.
What have we learnt?¶
In this tutorial, we explored how Context and Exposure control the information flow to the LLM:
- EntireExposure shows all data at once — good for short lists like tool specs.
- LayeredExposure reveals summaries first and provides details on demand — essential for managing large skill sets and execution history without overwhelming the LLM.
- CognitiveContext is the default context with `goal`, `tools` (EntireExposure), `skills` (LayeredExposure), and `cognitive_history` (LayeredExposure).
- Custom Context: extend `CognitiveContext` with domain-specific fields using Pydantic `Field`. These fields automatically appear in the summary shown to the LLM.
- `after_action` hook: use it to keep custom context fields in sync with tool results — the LLM sees updated state in subsequent steps.
- Use `summary()` and `format_summary()` to inspect what the LLM actually sees.
Next in the advanced tutorials, we'll see how to customize the OTC cycle hooks for fine-grained control over agent behavior.