sherma implements the Agent Skills specification for progressive skill disclosure. Skills are packaged capabilities – documentation, tools, references, and assets – that agents discover and load on demand.
The skill lifecycle follows the progressive disclosure pattern:
1. `list_skills` to see what skills are available (names and descriptions only)
2. `load_skill_md` to read the full skill documentation and activate its tools

This lets agents start with a lightweight catalog and load only what they need, keeping context windows efficient.
A skill card (skill-card.json) is the discovery manifest for a skill, analogous to an A2A agent card. It declares metadata, file listings, and tool definitions.
```json
{
  "id": "weather",
  "version": "1.0.0",
  "name": "Weather Lookup",
  "description": "Get current weather conditions for any city worldwide.",
  "base_uri": ".",
  "files": [
    "SKILL.md",
    "references/open-meteo-api.md",
    "assets/weather-codes.md"
  ],
  "mcps": {},
  "local_tools": {
    "get_weather": {
      "id": "get_weather",
      "version": "1.0.0",
      "import_path": "my_tools.get_weather"
    }
  }
}
```
| Field | Description |
|---|---|
| `id`, `version` | Unique identifier and semver version |
| `name`, `description` | Human-readable metadata (shown in `list_skills`) |
| `base_uri` | Base path or URL for resolving files |
| `files` | List of files accessible under `base_uri` |
| `mcps` | MCP server definitions for remote tool execution |
| `local_tools` | Python tool references loaded via `import_path` |
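As a minimal sketch of consuming a skill card, assuming the fields in the table above are required (the helper name and validation logic here are illustrative, not sherma's actual loader):

```python
import json

# Assumed-required fields, taken from the table above.
REQUIRED_FIELDS = ("id", "version", "name", "description", "base_uri", "files")

def load_skill_card(path: str) -> dict:
    """Read a skill-card.json and check the basic field set.
    Hypothetical helper -- sherma's own loader may validate differently."""
    with open(path) as f:
        card = json.load(f)
    missing = [field for field in REQUIRED_FIELDS if field not in card]
    if missing:
        raise ValueError(f"skill card missing fields: {missing}")
    return card
```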
The main skill documentation file, SKILL.md, uses Markdown with YAML frontmatter:
```markdown
---
name: Weather Lookup
description: Get current weather conditions for any city worldwide.
license: MIT
---

# Weather Lookup Skill

Use the `get_weather` tool to retrieve current weather for a given city.

## Usage

Call `get_weather(city="<city name>")`. The tool returns a JSON string with:

- Location name and country
- Temperature
- Wind speed and direction
```
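For illustration, the frontmatter can be separated from the Markdown body with a few lines of Python (a sketch; sherma's actual SKILL.md parsing may differ):

```python
def split_frontmatter(md_text: str) -> tuple[str, str]:
    """Split a SKILL.md-style document into (yaml_frontmatter, body).
    Assumes the file starts with a '---' fence, as in the example above."""
    if md_text.startswith("---\n"):
        # maxsplit=2 yields: leading empty chunk, frontmatter, body
        _, frontmatter, body = md_text.split("---\n", 2)
        return frontmatter, body.lstrip("\n")
    return "", md_text
```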
When `load_skill_md` is called, sherma loads the SKILL.md content, registers the skill's tools (local and MCP), and records the skill as loaded in the SkillRegistry so those tools can be bound on later nodes.

Local tools are Python functions referenced by import path:
```json
"local_tools": {
  "get_weather": {
    "id": "get_weather",
    "version": "1.0.0",
    "import_path": "my_package.tools.get_weather"
  }
}
```
The import path should point to a `@tool`-decorated LangChain function.
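How an `import_path` string maps to a callable can be sketched with `importlib` (a hypothetical helper; sherma's internal loading may differ):

```python
import importlib

def resolve_import_path(import_path: str):
    """Resolve a dotted path like 'my_package.tools.get_weather' to the
    object it names. Illustrative only -- not sherma's actual loader."""
    module_path, _, attr_name = import_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, attr_name)

# Demonstrated with a stdlib function standing in for a @tool function:
dumps = resolve_import_path("json.dumps")
```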
MCP (Model Context Protocol) tools connect to remote tool servers:
```json
"mcps": {
  "my-mcp-server": {
    "id": "my-mcp-server",
    "version": "1.0.0",
    "url": "https://mcp.example.com",
    "transport": "streamable-http"
  }
}
```

Supported transports: `stdio`, `sse`, and `streamable-http`. sherma uses the `langchain-mcp-adapters` library to convert MCP tools to LangChain tools.
When skills are declared in a YAML config, sherma creates six LangGraph tools for the LLM:
| Tool | Description |
|---|---|
| `list_skills()` | List all available skills with id, version, name, and description |
| `load_skill_md(skill_id, version)` | Load SKILL.md and register the skill's tools |
| `list_skill_resources(skill_id, version)` | List reference files in the skill |
| `load_skill_resource(skill_id, resource_path, version)` | Load a specific reference file |
| `list_skill_assets(skill_id, version)` | List asset files in the skill |
| `load_skill_asset(skill_id, asset_path, version)` | Load a specific asset file |
These tools are created by `create_skill_tools()` and registered automatically when skills are present in the declarative config.
```yaml
skills:
  - id: weather
    version: "1.0.0"
    skill_card_path: ../skills/weather/skill-card.json  # Relative to YAML file
```
Relative `skill_card_path` values are resolved against the YAML file's directory (i.e., `base_path`). When using `yaml_content` instead of `yaml_path`, set `base_path` explicitly on the `DeclarativeAgent`. Absolute paths work regardless. See Path Resolution.
Use `list_skills` and `load_skill_md` as tools on a `call_llm` node:

```yaml
nodes:
  - name: discover
    type: call_llm
    args:
      llm: { id: my-llm, version: "1.0.0" }
      prompt: 'prompts["discover"]["instructions"]'
      tools:
        - id: list_skills
        - id: load_skill_md
```
After discovery, use `use_tools_from_loaded_skills` to bind whatever tools were loaded:

```yaml
nodes:
  - name: execute
    type: call_llm
    args:
      llm: { id: my-llm, version: "1.0.0" }
      prompt: 'prompts["execute"]["instructions"]'
      use_tools_from_loaded_skills: true
```
sherma tracks which tools were loaded by which skills in an internal state key (`__sherma__`). When `use_tools_from_loaded_skills` is `true`, only tools associated with loaded skills are bound to the LLM.
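A simplified model of that filtering (the state layout and names below are assumptions for illustration, not sherma's internals):

```python
def tools_for_loaded_skills(state: dict, all_tools: dict) -> list:
    """Return only the tools recorded against loaded skills in the
    internal state key. The '__sherma__' layout here is illustrative."""
    loaded = state.get("__sherma__", {}).get("skill_tools", {})
    wanted = {name for names in loaded.values() for name in names}
    return [tool for name, tool in all_tools.items() if name in wanted]

state = {"__sherma__": {"skill_tools": {"weather:1.0.0": ["get_weather"]}}}
all_tools = {"get_weather": object(), "unrelated_tool": object()}
bound = tools_for_loaded_skills(state, all_tools)  # only get_weather is bound
```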
Skills can be local (files on disk) or remote (served over HTTP):
**Local** – Set `base_uri` to a relative or absolute path. Files are read from the filesystem.

```json
{
  "base_uri": "./skills/weather",
  "files": ["SKILL.md", "references/api.md"]
}
```
**Remote** – Set `base_uri` to a URL. Files are fetched via HTTP GET from `base_uri + "/" + file_path`.

```json
{
  "base_uri": "https://skills.example.com/weather",
  "files": ["SKILL.md", "references/api.md"]
}
```
You can also use skill tools outside of declarative agents:
```python
from sherma import create_skill_tools
from sherma.registry.skill import SkillRegistry
from sherma.registry.tool import ToolRegistry

skill_registry = SkillRegistry()
tool_registry = ToolRegistry()

# Register skills (with skill_card attribute)...

# Then create tools:
tools = create_skill_tools(
    skill_registry=skill_registry,
    tool_registry=tool_registry,
    hook_manager=hook_manager,  # Optional
)
```
The returned tools can be bound to any LangChain/LangGraph model.