# 🧰 Skills

Skills are one of the two sources Agent Codemode uses to generate programmatic tools (the other being MCP Servers).

Skills are reusable code patterns that compose MCP tools to accomplish specific tasks. They allow agents to build up a toolbox of higher-level operations. Agent Codemode integrates with the `agent-skills` package for comprehensive skill management.

Skills APIs live in `agent-skills`. Import skill managers and helpers from `agent_skills`.
## How It Works

Skills can be used in two complementary ways:

### Pattern A: Direct Skill Toolset

The `AgentSkillsToolset` from `agent-skills` provides skill tools (`list_skills`, `load_skill`, `run_skill_script`, etc.) as a separate Pydantic AI toolset. This is the standalone pattern.

```
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────────────────┐
│   Skill Files   │ ──▶  │  Skills Manager │ ──▶  │  Callable Python Functions  │
│  (skills/*.py)  │      │   (discovers    │      │  from skills.batch_process  │
│  or saved via   │      │   and loads)    │      │  import batch_process       │
│     manager     │      │                 │      │                             │
└─────────────────┘      └─────────────────┘      └─────────────────────────────┘
```
### Pattern B: Skills as Generated Bindings (Codemode Integration)

When skills are wired into Codemode, Agent Codemode generates typed Python bindings for skill operations, the same way it generates bindings for MCP server tools. This allows agents to call skill tools from within `execute_code`, using the same import pattern:

```
┌─────────────────┐      ┌─────────────────────┐      ┌─────────────────────────────────────┐
│   AgentSkills   │ ──▶  │   generate_skill_   │ ──▶  │      Generated Python Bindings      │
│     Toolset     │      │     bindings()      │      │        from generated.skills        │
│  (list, load,   │      │    (codegen.py)     │      │    import list_skills, run_skill    │
│   run, read)    │      │                     │      │                                     │
└─────────────────┘      └─────────────────────┘      └─────────────────────────────────────┘
```
This pattern is used by agent-runtimes when both Codemode and Skills are enabled. The skill tools are automatically wired into Codemode via `wire_skills_into_codemode()`, so agents can discover and call skill operations inside Python code alongside MCP tools.
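The exact code emitted by `generate_skill_bindings()` is not shown here, but conceptually each binding is a thin async wrapper that forwards its arguments to the executor's tool-call channel. A minimal sketch, in which `call_tool` is a stand-in for the executor hook rather than the real API:

```python
import asyncio


# Stand-in for the executor's tool-call channel; in Agent Codemode the
# real executor routes "skills__*" calls to the AgentSkillsToolset.
async def call_tool(tool_name: str, args: dict) -> dict:
    return {"tool": tool_name, "args": args}


# Shape of a generated binding: a stable name wrapping a routed call.
async def list_skills(args: dict) -> dict:
    """List available skills (generated wrapper sketch)."""
    return await call_tool("skills__list_skills", args)


async def run_skill(args: dict) -> dict:
    """Run a skill script (generated wrapper sketch)."""
    return await call_tool("skills__run_skill", args)


result = asyncio.run(run_skill({"skill_name": "pdf-extractor"}))
print(result["tool"])  # skills__run_skill
```

The agent-facing surface stays plain Python: the sandbox code only ever imports and awaits these wrappers, never the executor directly.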
Generated structure:

```
generated/
├── mcp/
│   ├── filesystem.py            # MCP server bindings
│   ├── web.py                   # MCP server bindings
│   └── skills/                  # Skill bindings (auto-generated)
│       ├── __init__.py
│       ├── list_skills.py
│       ├── load_skill.py
│       ├── read_skill_resource.py
│       └── run_skill.py
```
Usage in `execute_code`:

```python
from generated.skills import list_skills, load_skill, run_skill

# Discover available skills
skills = await list_skills({})

# Load a skill's instructions
instructions = await load_skill({"skill_name": "pdf-extractor"})

# Run a skill script
result = await run_skill({
    "skill_name": "pdf-extractor",
    "script_name": "extract.py",
    "args": {"path": "/data/report.pdf"}
})
```
## Lifecycle

1. **Create** - Write skill files in the `skills/` directory or save via `agent_skills.SimpleSkillsManager`
2. **Discover** - Agent Codemode discovers all available skills
3. **Load** - Skills are loaded as callable Python functions
4. **Use** - Import and call skills like any Python module (or via generated bindings in Codemode)
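The file-based flow above can be illustrated with a self-contained sketch using only the standard library; the real discovery and loading logic lives in `agent-skills`, so this is only an approximation of the lifecycle:

```python
import asyncio
import importlib.util
import pathlib
import tempfile

skills_dir = pathlib.Path(tempfile.mkdtemp()) / "skills"
skills_dir.mkdir()

# 1. Create: write a skill file to disk
(skills_dir / "shout.py").write_text(
    'async def shout(text: str) -> str:\n    return text.upper() + "!"\n'
)

# 2. Discover: find skill modules in the directory
discovered = sorted(p.stem for p in skills_dir.glob("*.py"))

# 3. Load: import the module so the skill becomes a callable function
spec = importlib.util.spec_from_file_location("shout", skills_dir / "shout.py")
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

# 4. Use: call the skill like any async Python function
result = asyncio.run(module.shout("hello"))
print(discovered, result)  # ['shout'] HELLO!
```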
## Overview

Skills enable AI agents to:

- **Save useful patterns**: Store code that works well for reuse
- **Build a toolbox**: Accumulate capabilities over time
- **Compose operations**: Chain multiple tools into higher-level functions
- **Share knowledge**: Export skills for use by other agents
## Skills as Code Files

The primary pattern is organizing skills as Python files in a `skills/` directory:

```python
# skills/batch_process.py
"""Process all files in a directory."""


async def batch_process(input_dir: str, output_dir: str) -> dict:
    """Process all files in a directory.

    Args:
        input_dir: Input directory path.
        output_dir: Output directory path.

    Returns:
        Processing statistics.
    """
    from generated.mcp.filesystem import list_directory, read_file, write_file

    entries = await list_directory({"path": input_dir})
    processed = 0
    for entry in entries.get("entries", []):
        content = await read_file({"path": f"{input_dir}/{entry}"})
        # Process content...
        await write_file({"path": f"{output_dir}/{entry}", "content": content.upper()})
        processed += 1
    return {"processed": processed}
```
## Skills as Standalone CLI Tools

Skills can also be designed as standalone command-line tools that can be run independently:

```python
#!/usr/bin/env python
# skills/analyze_file.py
"""Standalone CLI tool that can be run with: python skills/analyze_file.py <file-path>

This demonstrates a skill that can be invoked directly from the command line.
"""
import asyncio
import json
import sys


async def analyze_file(file_path: str) -> dict:
    """Analyze a file and return statistics."""
    from generated.mcp.filesystem import read_file

    content = await read_file({"path": file_path})
    lines = content.split("\n")
    word_count = len(content.split())
    char_count = len(content)
    return {
        "lines": len(lines),
        "words": word_count,
        "chars": char_count,
        "blank_lines": sum(1 for line in lines if line.strip() == ""),
    }


# Run if called directly
if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python analyze_file.py <file-path>")
        sys.exit(1)
    file_path = sys.argv[1]
    result = asyncio.run(analyze_file(file_path))
    print(json.dumps(result, indent=2))
```
Run it directly:

```bash
python skills/analyze_file.py /data/myfile.txt
# Output: {"lines": 42, "words": 256, "chars": 1024, "blank_lines": 5}
```
Or import and use it in executed code:

```python
from skills.analyze_file import analyze_file

stats = await analyze_file("/data/myfile.txt")
print(f"File has {stats['lines']} lines")
```
## Helper Utilities as Skills

Skills can also provide reusable helper utilities for other skills:

```python
# skills/helpers.py
"""Helper utilities for skill composition."""
import asyncio
import time


async def wait_for(
    condition,
    interval_seconds: float = 1.0,
    timeout_seconds: float = float("inf"),
) -> None:
    """Wait for an async condition to become true."""
    start_time = time.time()
    while True:
        result = await condition()
        if result:
            return
        if time.time() - start_time > timeout_seconds:
            raise TimeoutError(f"Timeout waiting for condition after {timeout_seconds}s")
        await asyncio.sleep(interval_seconds)


async def retry(fn, max_attempts: int = 3, delay_seconds: float = 1.0):
    """Retry a function until it succeeds or max attempts are reached."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return await fn()
        except Exception as e:
            last_error = e
            if attempt < max_attempts:
                await asyncio.sleep(delay_seconds)
    raise RuntimeError(f"Failed after {max_attempts} attempts: {last_error}")
```
Use these helpers in other skills:

```python
from skills.helpers import retry, wait_for
from generated.mcp.web import fetch_url

# Retry a flaky API call
data = await retry(lambda: fetch_url({"url": "https://api.example.com/data"}))

# Wait for a condition
await wait_for(lambda: check_file_exists("/output/result.json"))
```
## Using Skills in Executed Code

Skills are imported and called like any Python module within `execute_code`:

```python
# In executed code
from skills.batch_process import batch_process

result = await batch_process("/data/input", "/data/output")
print(f"Processed {result['processed']} files")
```
## SimpleSkillsManager

For programmatic skill management, use the `SimpleSkillsManager` from `agent-skills`:

```python
from agent_skills import SimpleSkillsManager, SimpleSkill

# Create a skills manager
manager = SimpleSkillsManager("./skills")

# Save a skill
skill = SimpleSkill(
    name="backup_to_cloud",
    description="Backup files to cloud storage",
    code='''
async def backup_to_cloud(source: str, bucket: str) -> dict:
    files = await bash__ls({"path": source})
    uploaded = 0
    for f in files:
        await cloud__upload({"file": f, "bucket": bucket})
        uploaded += 1
    return {"uploaded": uploaded}
''',
    tools_used=["bash__ls", "cloud__upload"],
    tags=["backup", "cloud"],
)
manager.save_skill(skill)

# Load a skill later
loaded = manager.load_skill("backup_to_cloud")
print(loaded.description)  # "Backup files to cloud storage"
print(loaded.code)
```
### SimpleSkill Attributes

| Attribute | Type | Description |
|---|---|---|
| `name` | `str` | Unique skill identifier |
| `description` | `str` | Human-readable description |
| `code` | `str` | Python code implementing the skill |
| `tools_used` | `list[str]` | List of tool names used |
| `tags` | `list[str]` | Optional categorization tags |
| `parameters` | `dict` | Optional JSON schema for parameters |
| `created_at` | `float` | Unix timestamp when created |
| `updated_at` | `float` | Unix timestamp when last updated |
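The real `SimpleSkill` class lives in `agent-skills`; a plain-Python dataclass that only mirrors the documented attributes looks roughly like this (a sketch, not the package's actual definition):

```python
import time
from dataclasses import dataclass, field


@dataclass
class SkillRecord:
    """Dataclass mirroring the documented SimpleSkill attributes (sketch)."""
    name: str
    description: str
    code: str
    tools_used: list = field(default_factory=list)
    tags: list = field(default_factory=list)
    parameters: dict = field(default_factory=dict)
    created_at: float = field(default_factory=time.time)
    updated_at: float = field(default_factory=time.time)


skill = SkillRecord(
    name="backup_to_cloud",
    description="Backup files to cloud storage",
    code="async def backup_to_cloud(...): ...",
    tools_used=["bash__ls", "cloud__upload"],
)
print(skill.name, len(skill.tools_used))  # backup_to_cloud 2
```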
## SkillDirectory Pattern

For file-based skill organization with automatic discovery:

```python
from agent_skills import SkillDirectory, setup_skills_directory

# Initialize skills directory
skills = setup_skills_directory("./workspace/skills")

# List available skills
for skill in skills.list():
    print(f"{skill.name}: {skill.description}")

# Get a specific skill
skill = skills.get("batch_process")

# Search for relevant skills
matches = skills.search("data processing")

# Create a new skill programmatically
skills.create(
    name="my_skill",
    code='async def my_skill(x: str) -> str: return x.upper()',
    description="Transform text to uppercase",
)
```
## Composing Skills

Skills can import and use other skills to build higher-level operations:

```python
# skills/analyze_and_report.py
"""Analyze data and generate a report."""


async def analyze_and_report(data_dir: str) -> dict:
    from skills.batch_process import batch_process
    from skills.generate_report import generate_report

    # First process the files
    process_result = await batch_process(data_dir, f"{data_dir}/processed")

    # Then generate a report
    report = await generate_report(f"{data_dir}/processed")

    return {"processed": process_result["processed"], "report": report}
```
## MCP Server Integration

When running Agent Codemode as an MCP server, skills are exposed through dedicated tools:

- `save_skill`: Save a new skill or update an existing one
- `run_skill`: Execute a saved skill by name

```python
from agent_codemode import codemode_server, configure_server
from agent_codemode import CodeModeConfig

config = CodeModeConfig(
    skills_path="./skills",
    # ... other config
)
configure_server(config=config)
codemode_server.run()
```
## Pydantic AI Integration

For Pydantic AI agents, use the `AgentSkillsToolset`:

```python
from pydantic_ai import Agent
from agent_skills import AgentSkillsToolset, SandboxExecutor
from code_sandboxes import LocalEvalSandbox

# Create toolset with sandbox execution
sandbox = LocalEvalSandbox()
toolset = AgentSkillsToolset(
    directories=["./skills"],
    executor=SandboxExecutor(sandbox),
)

# Use with a pydantic-ai agent
agent = Agent(
    model='openai:gpt-4o',
    toolsets=[toolset],
)
# Agent gets: list_skills, load_skill, read_skill_resource, run_skill_script
```
## Helper Utilities

The `agent-skills` package provides helper utilities for skill composition:

```python
from agent_skills import wait_for, retry, run_with_timeout, parallel, RateLimiter

# Retry an operation on failure
result = await retry(my_async_function, max_attempts=3, delay=1.0)

# Run with a timeout
result = await run_with_timeout(slow_operation(), timeout=30.0)

# Run multiple operations in parallel
results = await parallel([task1(), task2(), task3()])

# Rate limiting
limiter = RateLimiter(max_calls=10, period=60)  # 10 calls per minute
async with limiter:
    await api_call()
```
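The package's actual implementations are not reproduced here, but helpers like these are typically thin wrappers over asyncio primitives. A hedged sketch, assuming only the standard library (the names mirror the helpers above but this is not the package's source):

```python
import asyncio


async def run_with_timeout(coro, timeout: float):
    """Cancel the awaited coroutine if it exceeds the timeout (sketch)."""
    return await asyncio.wait_for(coro, timeout=timeout)


async def parallel(coros):
    """Run coroutines concurrently, collecting results in order (sketch)."""
    return await asyncio.gather(*coros)


async def demo():
    async def double(x):
        await asyncio.sleep(0.01)
        return x * 2

    fast = await run_with_timeout(double(21), timeout=1.0)
    batch = await parallel([double(1), double(2), double(3)])
    return fast, batch


print(asyncio.run(demo()))  # (42, [2, 4, 6])
```

`asyncio.wait_for` raises `asyncio.TimeoutError` on expiry and `asyncio.gather` preserves input order, which is why these two primitives map so directly onto the helper semantics.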
## Best Practices

- **Clear naming**: Use descriptive names that indicate what the skill does
- **Documentation**: Include docstrings with Args, Returns, and Examples
- **Single responsibility**: Each skill should do one thing well
- **Error handling**: Include try/except for robust execution
- **Composability**: Design skills to be easily combined with others
- **Testing**: Test skills independently before composing them
- **Codemode integration**: When both are enabled, prefer using skill bindings via `generated.skills` for consistency with MCP tool patterns
## Generated Skill Bindings (Approach D)

When Codemode and Skills are both enabled in agent-runtimes, skills are automatically wired into Codemode through a process called Approach D: Skills as Generated Bindings.

### How It Works

1. **Binding generation**: `generate_skill_bindings()` in `codegen.py` produces async Python wrapper functions for each skill tool (`list_skills`, `load_skill`, `read_skill_resource`, `run_skill`).
2. **Executor routing**: When code in the sandbox calls `await list_skills({})`, the call is routed through the executor's `call_tool("skills__list_skills", args)`, which delegates to a skill tool caller callback connected to the real `AgentSkillsToolset`.
3. **Post-init callback**: Since Codemode lazily initializes its executor, skill wiring uses a post-init callback (`add_post_init_callback`) to defer binding generation and caller registration until the executor is ready.
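The routing step can be pictured with a toy dispatcher: the executor recognizes the `skills__` prefix and forwards the call to whichever caller callback was registered during wiring. This is a simplification with stand-in functions, not the real executor's internals:

```python
import asyncio


# Registered by the wiring step; in agent-runtimes this callback
# talks to the real AgentSkillsToolset.
async def skill_tool_caller(tool: str, args: dict) -> dict:
    return {"handled_by": "skills-toolset", "tool": tool, "args": args}


async def call_tool(name: str, args: dict) -> dict:
    """Toy executor dispatch: route skills__* calls to the skill caller."""
    prefix = "skills__"
    if name.startswith(prefix):
        return await skill_tool_caller(name[len(prefix):], args)
    raise KeyError(f"unknown tool: {name}")


result = asyncio.run(call_tool("skills__run_skill", {"skill_name": "pdf-extractor"}))
print(result["tool"])  # run_skill
```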
### Call Flow

```
Sandbox Code                     Executor                              AgentSkillsToolset
     │                               │                                         │
     │ from generated.mcp.           │                                         │
     │ skills import run_skill       │                                         │
     │                               │                                         │
     │ await run_skill({...})        │                                         │
     │ ────────────────────────────▶ │                                         │
     │                               │ call_tool("skills__run_skill")          │
     │                               │ ──────────────────────────────────────▶ │
     │                               │                                         │ execute skill
     │                               │ ◀────────────────────────────────────── │
     │ ◀──────────────────────────── │                                         │
     │ result                        │                                         │
```
### Remote Sandbox Support

When using a remote sandbox (e.g., Datalayer Runtime or Jupyter kernel), skill bindings are generated inline inside `_generate_tools_in_sandbox()` alongside MCP tool bindings. The sandbox calls back to agent-runtimes via HTTP, where a skills proxy routes the request to the real `AgentSkillsToolset`:

```
Remote Sandbox                   agent-runtimes                        AgentSkillsToolset
     │                               │                                         │
     │ await run_skill({...})        │                                         │
     │ ──── HTTP POST ─────────────▶ │                                         │
     │   /mcp-proxy/skills/          │                                         │
     │   run_skill                   │ skills_proxy_caller(...)                │
     │                               │ ──────────────────────────────────────▶ │
     │                               │                                         │ execute skill
     │                               │ ◀────────────────────────────────────── │
     │ ◀──────────────────────────── │                                         │
     │ result                        │                                         │
```
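The proxy's job reduces to mapping a URL path onto a skill tool name. A toy router under the path format shown in the diagram (`/mcp-proxy/skills/<tool>`), with the HTTP plumbing and the real proxy caller replaced by stand-ins:

```python
import asyncio


async def skills_proxy_caller(tool: str, args: dict) -> dict:
    # Stand-in for the real proxy caller, which would invoke the
    # AgentSkillsToolset on the agent-runtimes side.
    return {"tool": tool, "args": args}


async def handle_request(path: str, body: dict) -> dict:
    """Toy router: map /mcp-proxy/skills/<tool> to the skills proxy caller."""
    prefix = "/mcp-proxy/skills/"
    if not path.startswith(prefix):
        raise ValueError(f"not a skills proxy path: {path}")
    tool = path[len(prefix):]
    return await skills_proxy_caller(tool, body)


result = asyncio.run(
    handle_request("/mcp-proxy/skills/run_skill", {"skill_name": "pdf-extractor"})
)
print(result["tool"])  # run_skill
```

Because the sandbox only speaks HTTP to this endpoint, the same generated bindings work unchanged whether the sandbox is local or remote.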
### Post-Init Callbacks

The `CodemodeToolset` supports deferred initialization callbacks via `add_post_init_callback()`. This is used by agent-runtimes to wire skills into Codemode at the right time:

```python
from agent_codemode import CodemodeToolset

toolset = CodemodeToolset(registry)

# Register a callback that fires once after lazy initialization
toolset.add_post_init_callback(lambda ts: wire_skills(ts))

# Callbacks fire automatically when the executor is first initialized
# (e.g., on the first call to execute_code)
```