sherma is built on the A2A (Agent-to-Agent) protocol as its agent communication layer. Every sherma agent speaks A2A natively, making it interoperable with any A2A-compatible agent regardless of the framework it was built with.
```mermaid
graph TD
    Client["A2A Client<br/>(any agent)"] <-->|A2A Messages| Server["A2A Server<br/>(sherma)"]
    Server --> Executor["ShermaAgentExecutor"]
    Executor --> Agent["Agent<br/>(LangGraph)"]
```
ShermaAgentExecutor is the bridge between the A2A server protocol and sherma’s Agent interface. It implements the A2A SDK’s AgentExecutor interface.
```python
from sherma.a2a import ShermaAgentExecutor

executor = ShermaAgentExecutor(agent)
```
The executor:

- Uses a TaskUpdater to manage task state transitions
- Calls agent.send_message() and processes the response stream
- If the agent declares input_schema or output_schema, validates incoming/outgoing DataPart messages against those schemas

It dispatches on each event type in the stream:

- Message – completes the task with the response
- Task – logs the initial task event
- TaskArtifactUpdateEvent – forwards artifacts to the task updater
- TaskStatusUpdateEvent – updates task status (including input_required for interrupts)

The overall lifecycle:

New message → create Task → start_work → send_message → process events → complete/cancel/failed
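The lifecycle above can be modeled as a small dispatch loop. This is an illustrative stdlib sketch, not sherma's implementation: event names mirror the list above, but the `(event_type, payload)` tuples and state strings are stand-ins for the real A2A event and task-state objects.

```python
def dispatch_events(events):
    """Fold a stream of (event_type, payload) pairs into a final task state.

    A simplified model of ShermaAgentExecutor's event handling; the real
    executor forwards to a TaskUpdater rather than returning a tuple.
    """
    state = "working"            # start_work
    response = None
    for kind, payload in events:
        if kind == "Task":
            continue             # initial task event: just logged
        if kind == "TaskArtifactUpdateEvent":
            continue             # artifact forwarded to the task updater
        if kind == "TaskStatusUpdateEvent":
            state = payload      # e.g. "input_required" for interrupts
        elif kind == "Message":
            response = payload   # a Message completes the task
            state = "completed"
    if state == "working":
        state = "completed"      # no events: complete with no message
    return state, response
```

Note that an interrupt run ends in a non-terminal `input_required` state, while an empty stream still reaches `completed` — matching the behaviors described in this section.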
If no events are received from the agent, the task completes with no message.
If agent.send_message() raises an exception during execution, ShermaAgentExecutor catches it and transitions the task to a failed state via task_updater.failed(). The error message is sent as an A2A Message with role agent containing the exception text.
send_message raises → log error → task_updater.failed(message=error_message)
This ensures the A2A client always receives a terminal task state, even when the agent encounters an unexpected error. The error is also logged at the ERROR level for server-side observability.
Errors can also be intercepted earlier using the on_error and on_node_error hooks (see Hooks – Error Handling).
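The failure path can be sketched with a small stand-in for the task updater. Everything here is illustrative: `RecordingTaskUpdater` and `run_with_failure_handling` are hypothetical names, modeling only the catch-and-fail behavior described above.

```python
class RecordingTaskUpdater:
    """Stand-in for the real TaskUpdater; records the terminal state."""

    def __init__(self):
        self.state = None
        self.error = None

    def failed(self, message):
        self.state = "failed"
        self.error = message


def run_with_failure_handling(event_stream, task_updater):
    """Consume an event stream; on any exception, fail the task with the text."""
    try:
        return list(event_stream())
    except Exception as exc:
        # The client always sees a terminal state, even on unexpected errors
        task_updater.failed(message=str(exc))
        return []
```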
To expose a sherma agent as an A2A HTTP server:
```python
from a2a.server.apps import A2AStarletteApplication
from a2a.server.request_handlers import DefaultRequestHandler
from a2a.server.tasks import InMemoryTaskStore
from a2a.types import AgentCard, AgentCapabilities

from sherma import DeclarativeAgent
from sherma.a2a import ShermaAgentExecutor

# Create the agent
agent = DeclarativeAgent(
    id="my-agent",
    version="1.0.0",
    yaml_path="agent.yaml",
)

# Wrap in executor
executor = ShermaAgentExecutor(agent)

# Build A2A handler and app; the handler needs a task store to track tasks
handler = DefaultRequestHandler(
    agent_executor=executor,
    task_store=InMemoryTaskStore(),
)

card = AgentCard(
    name="My Agent",
    description="Does useful things",
    url="http://localhost:8000",
    version="1.0.0",
    capabilities=AgentCapabilities(streaming=False),
)

app = A2AStarletteApplication(agent_card=card, http_handler=handler).build()

# Serve with uvicorn: uvicorn main:app
```
Use RemoteAgent to call any A2A-compatible agent:
```python
from sherma import RemoteAgent

remote = RemoteAgent(
    id="external-agent",
    version="1.0.0",
    url="https://agent.example.com",
)
```
```python
# Register in agent registry for use in declarative agents
from sherma import AgentRegistry
from sherma.registry.base import RegistryEntry
from sherma.types import Protocol

registry = AgentRegistry()
await registry.add(RegistryEntry(
    id="external-agent",
    version="1.0.0",
    remote=True,
    url="https://agent.example.com",
    protocol=Protocol.A2A,
))
```
The remote agent uses the A2A Python SDK’s client under the hood. It doesn’t matter what framework the remote agent was built with – any A2A-compatible agent works.
sherma provides lossless bidirectional conversion between A2A and LangGraph message formats.
```python
from sherma.messages.converter import a2a_to_langgraph

lg_messages = a2a_to_langgraph(a2a_message)
# Returns list[BaseMessage] (HumanMessage or AIMessage)
```
Conversion rules:

- TextPart becomes string content
- DataPart becomes a structured content block with type, data, and metadata
- Role.user maps to HumanMessage, Role.agent maps to AIMessage
- A2A message metadata is preserved in additional_kwargs["a2a_metadata"]

The reverse conversion:

```python
from sherma.messages.converter import langgraph_to_a2a

a2a_message = langgraph_to_a2a(lg_message)
# Returns an A2A Message
```
- String content becomes a TextPart
- Structured content blocks map back to their original Part type
- Metadata in additional_kwargs is restored on the A2A message

Agents can declare typed input and output schemas using Pydantic models:
```python
from pydantic import BaseModel

from sherma.langgraph.agent import LangGraphAgent

class WeatherInput(BaseModel):
    city: str
    units: str = "metric"

class WeatherOutput(BaseModel):
    temperature: float
    description: str

class MyAgent(LangGraphAgent):
    input_schema = WeatherInput
    output_schema = WeatherOutput
```
With schemas declared:

- get_card() automatically injects the JSON schemas as A2A extensions with URIs urn:sherma:schema:input and urn:sherma:schema:output
- ShermaAgentExecutor validates incoming DataPart messages marked with agent_input: true against input_schema, and outgoing messages marked with agent_output: true against output_schema

Helper functions for working with typed message parts:

```python
from sherma import (
    create_agent_input_as_message_part,
    get_agent_input_from_message_part,
    create_agent_output_as_message_part,
    get_agent_output_from_message_part,
    SCHEMA_INPUT_URI,
    SCHEMA_OUTPUT_URI,
)

# Create an input message
msg = create_agent_input_as_message_part(
    WeatherInput(city="Tokyo"),
    SCHEMA_INPUT_URI,
)

# Extract typed input from a message
weather_input = get_agent_input_from_message_part(msg, WeatherInput)
```
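The schema-injection step performed by get_card() can be pictured at the dict level. The extensions structure below is an assumption for illustration (get_card() produces a real AgentCard); only the two URIs come from the source.

```python
SCHEMA_INPUT_URI = "urn:sherma:schema:input"
SCHEMA_OUTPUT_URI = "urn:sherma:schema:output"

def inject_schemas(card: dict, input_schema: dict, output_schema: dict) -> dict:
    """Attach input/output JSON Schemas to a card as extensions (sketch)."""
    extensions = card.setdefault("capabilities", {}).setdefault("extensions", [])
    extensions.append({"uri": SCHEMA_INPUT_URI, "params": {"schema": input_schema}})
    extensions.append({"uri": SCHEMA_OUTPUT_URI, "params": {"schema": output_schema}})
    return card
```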
When a LangGraph agent enters an interrupted state (e.g., via an interrupt node or a tool calling interrupt()), send_message handles it as follows:
1. Detects the __interrupt__ key in the graph result
2. Extracts an AIMessage from each interrupt value (every interrupt must yield an AIMessage – see the interrupt contract)
3. Combines the AIMessages into a single AIMessage using combine_ai_messages
4. Converts it to an A2A Message and wraps it in a TaskStatusUpdateEvent with state input_required
5. Yields only the TaskStatusUpdateEvent – no Message event is yielded, so the task stays in a non-terminal state

This design avoids a race condition where yielding both a Message (which triggers task_updater.complete()) and a TaskStatusUpdateEvent would cause a “task already in terminal state” error.
When the client sends a follow-up message, send_message detects the pending interrupt via aget_state().tasks and resumes execution with Command(resume=messages).
combine_ai_messages is a utility for merging multiple AIMessage instances into one:
```python
from langchain_core.messages import AIMessage

from sherma.langgraph.agent import combine_ai_messages

msgs = [
    AIMessage(content="Here's the weather."),
    AIMessage(content="Anything else?"),
]

combined = combine_ai_messages(msgs)
# AIMessage(content=["Here's the weather.", "Anything else?"])
```
Content from each message is collected into list-form content. If the result is a single string block, it collapses to a plain string for simplicity. This is used internally for interrupt handling but is a general-purpose utility.
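The merging rule can be sketched on bare content values, leaving the AIMessage wrapper aside. This is an illustrative stand-in, not sherma's implementation:

```python
def combine_contents(contents):
    """Merge message contents into one list, collapsing a lone string.

    Mirrors the rule described above: list-form content is concatenated,
    and a result containing a single string block collapses to that string.
    """
    merged = []
    for content in contents:
        merged.extend(content if isinstance(content, list) else [content])
    if len(merged) == 1 and isinstance(merged[0], str):
        return merged[0]
    return merged
```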