🧰 Skills

Skills are one of the two sources Agent Codemode uses to generate programmatic tools (the other being MCP Tools).

Skills are reusable code patterns that compose MCP tools to accomplish specific tasks. They allow agents to build up a toolbox of higher-level operations. Agent Codemode integrates with the agent-skills package for comprehensive skill management.

Skills APIs live in agent-skills. Import skill managers and helpers from agent_skills.

How It Works

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────────────────┐
│ Skill Files     │ ──▶ │ Skills Manager  │ ──▶ │ Callable Python Functions   │
│ (skills/*.py)   │     │ (discovers      │     │ from skills.batch_process   │
│ or saved via    │     │  and loads)     │     │ import batch_process        │
│ manager         │     │                 │     │                             │
└─────────────────┘     └─────────────────┘     └─────────────────────────────┘
  1. Create - Write skill files in the skills/ directory or save via agent_skills.SimpleSkillsManager
  2. Discover - Agent Codemode discovers all available skills
  3. Load - Skills are loaded as callable Python functions
  4. Use - Import and call skills like any Python module
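
A minimal end-to-end sketch of this lifecycle (the skill name and contents are illustrative):

# skills/shout.py  (step 1: create it in the skills/ directory)
"""Uppercase a string."""

async def shout(text: str) -> str:
    """Return the input text in uppercase."""
    return text.upper()

Once discovered and loaded (steps 2 and 3), the skill is importable in executed code:

# In executed code (step 4)
from skills.shout import shout

result = await shout("hello")
print(result)  # HELLO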

Overview

Skills enable AI agents to:

  • Save useful patterns: Store code that works well for reuse
  • Build a toolbox: Accumulate capabilities over time
  • Compose operations: Chain multiple tools into higher-level functions
  • Share knowledge: Export skills for use by other agents

Skills as Code Files

The primary pattern is organizing skills as Python files in a skills/ directory:

# skills/batch_process.py
"""Process all files in a directory."""

async def batch_process(input_dir: str, output_dir: str) -> dict:
    """Process all files in a directory.

    Args:
        input_dir: Input directory path.
        output_dir: Output directory path.

    Returns:
        Processing statistics.
    """
    from generated.servers.filesystem import list_directory, read_file, write_file

    entries = await list_directory({"path": input_dir})
    processed = 0

    for entry in entries.get("entries", []):
        content = await read_file({"path": f"{input_dir}/{entry}"})
        # Process content...
        await write_file({"path": f"{output_dir}/{entry}", "content": content.upper()})
        processed += 1

    return {"processed": processed}

Skills as Standalone CLI Tools

Skills can also be written as standalone command-line tools that run independently of the agent:

#!/usr/bin/env python
# skills/analyze_file.py
"""
Standalone CLI tool that can be run with: python skills/analyze_file.py <file-path>
This demonstrates a skill that can be invoked directly from the command line.
"""
import asyncio
import sys
import json

async def analyze_file(file_path: str) -> dict:
    """Analyze a file and return statistics."""
    from generated.servers.filesystem import read_file

    content = await read_file({"path": file_path})
    lines = content.split('\n')
    word_count = len(content.split())
    char_count = len(content)

    return {
        "lines": len(lines),
        "words": word_count,
        "chars": char_count,
        "blank_lines": sum(1 for line in lines if line.strip() == '')
    }

# Run if called directly
if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python analyze_file.py <file-path>")
        sys.exit(1)

    file_path = sys.argv[1]
    result = asyncio.run(analyze_file(file_path))
    print(json.dumps(result, indent=2))

Run it directly:

python skills/analyze_file.py /data/myfile.txt
# Output: {"lines": 42, "words": 256, "chars": 1024, "blank_lines": 5}

Or import and use it in executed code:

from skills.analyze_file import analyze_file
stats = await analyze_file("/data/myfile.txt")
print(f"File has {stats['lines']} lines")

Helper Utilities as Skills

Skills can also provide reusable helper utilities for other skills:

# skills/helpers.py
"""Helper utilities for skill composition."""

async def wait_for(
condition,
interval_seconds: float = 1.0,
timeout_seconds: float = float('inf')
) -> None:
"""Wait for an async condition to become true."""
import time
start_time = time.time()

while True:
result = await condition()
if result:
return

if time.time() - start_time > timeout_seconds:
raise TimeoutError(f"Timeout waiting for condition after {timeout_seconds}s")

await asyncio.sleep(interval_seconds)


async def retry(fn, max_attempts: int = 3, delay_seconds: float = 1.0):
"""Retry a function until it succeeds or max attempts reached."""
last_error = None

for attempt in range(1, max_attempts + 1):
try:
return await fn()
except Exception as e:
last_error = e
if attempt < max_attempts:
await asyncio.sleep(delay_seconds)

raise RuntimeError(f"Failed after {max_attempts} attempts: {last_error}")

Use these helpers in other skills:

from skills.helpers import retry, wait_for
from generated.servers.web import fetch_url

# Retry a flaky API call
data = await retry(lambda: fetch_url({"url": "https://api.example.com/data"}))

# Wait for a condition (check_file_exists is a user-defined async predicate)
await wait_for(lambda: check_file_exists("/output/result.json"))

Using Skills in Executed Code

Skills are imported and called like any Python module within execute_code:

# In executed code
from skills.batch_process import batch_process

result = await batch_process("/data/input", "/data/output")
print(f"Processed {result['processed']} files")

SimpleSkillsManager

For programmatic skill management, use the SimpleSkillsManager from agent-skills:

from agent_skills import SimpleSkillsManager, SimpleSkill

# Create a skills manager
manager = SimpleSkillsManager("./skills")

# Save a skill
skill = SimpleSkill(
    name="backup_to_cloud",
    description="Backup files to cloud storage",
    code='''
async def backup_to_cloud(source: str, bucket: str) -> dict:
    files = await bash__ls({"path": source})
    uploaded = 0
    for f in files:
        await cloud__upload({"file": f, "bucket": bucket})
        uploaded += 1
    return {"uploaded": uploaded}
''',
    tools_used=["bash__ls", "cloud__upload"],
    tags=["backup", "cloud"],
)
manager.save_skill(skill)

# Load a skill later
loaded = manager.load_skill("backup_to_cloud")
print(loaded.description)  # "Backup files to cloud storage"
print(loaded.code)

SimpleSkill Attributes

Attribute     Type        Description
name          str         Unique skill identifier
description   str         Human-readable description
code          str         Python code implementing the skill
tools_used    list[str]   List of tool names used
tags          list[str]   Optional categorization tags
parameters    dict        Optional JSON schema for parameters
created_at    float       Unix timestamp when created
updated_at    float       Unix timestamp when last updated
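
A skill's optional parameters attribute can advertise its expected inputs. A hypothetical example (assuming the schema follows standard JSON Schema conventions):

from agent_skills import SimpleSkill

# Illustrative skill showing the optional parameters attribute;
# the JSON Schema shape is an assumption, not a documented contract.
skill = SimpleSkill(
    name="resize_image",
    description="Resize an image to the given dimensions",
    code='async def resize_image(path: str, width: int, height: int) -> str: ...',
    tools_used=["filesystem__read_file", "filesystem__write_file"],
    parameters={
        "type": "object",
        "properties": {
            "path": {"type": "string"},
            "width": {"type": "integer"},
            "height": {"type": "integer"},
        },
        "required": ["path", "width", "height"],
    },
)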

SkillDirectory Pattern

For file-based skill organization with automatic discovery:

from agent_skills import SkillDirectory, setup_skills_directory

# Initialize skills directory
skills = setup_skills_directory("./workspace/skills")

# List available skills
for skill in skills.list():
    print(f"{skill.name}: {skill.description}")

# Get a specific skill
skill = skills.get("batch_process")

# Search for relevant skills
matches = skills.search("data processing")

# Create a new skill programmatically
skills.create(
    name="my_skill",
    code='async def my_skill(x: str) -> str: return x.upper()',
    description="Transform text to uppercase",
)

Composing Skills

Skills can import and use other skills to build higher-level operations:

# skills/analyze_and_report.py
"""Analyze data and generate a report."""

async def analyze_and_report(data_dir: str) -> dict:
    from skills.batch_process import batch_process
    from skills.generate_report import generate_report

    # First process the files
    process_result = await batch_process(data_dir, f"{data_dir}/processed")

    # Then generate a report
    report = await generate_report(f"{data_dir}/processed")

    return {"processed": process_result["processed"], "report": report}

MCP Server Integration

When running Agent Codemode as an MCP server, skills are exposed through dedicated tools:

  • save_skill: Save a new skill or update an existing one
  • run_skill: Execute a saved skill by name

from agent_codemode import codemode_server, configure_server, CodeModeConfig

config = CodeModeConfig(
    skills_path="./skills",
    # ... other config
)

configure_server(config=config)
codemode_server.run()
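
From an MCP client, these tools are invoked like any other MCP tool. A sketch using the official Python mcp client SDK (the run_skill argument keys are assumptions, not a documented schema):

from mcp import ClientSession

async def run_saved_skill(session: ClientSession) -> None:
    # Invoke the run_skill tool exposed by the Codemode MCP server.
    # The argument keys ("name", "args") are illustrative assumptions.
    result = await session.call_tool(
        "run_skill",
        {"name": "backup_to_cloud", "args": {"source": "/data", "bucket": "my-bucket"}},
    )
    print(result.content)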

Pydantic AI Integration

For Pydantic AI agents, use the DatalayerSkillsToolset:

from pydantic_ai import Agent
from agent_skills import DatalayerSkillsToolset, SandboxExecutor
from code_sandboxes import LocalEvalSandbox

# Create toolset with sandbox execution
sandbox = LocalEvalSandbox()
toolset = DatalayerSkillsToolset(
    directories=["./skills"],
    executor=SandboxExecutor(sandbox),
)

# Use with pydantic-ai agent
agent = Agent(
    model='openai:gpt-4o',
    toolsets=[toolset],
)

# Agent gets: list_skills, load_skill, read_skill_resource, run_skill_script
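
A quick usage sketch (the prompt is illustrative; recent pydantic-ai versions expose the final model response on result.output):

# Ask the agent to exercise its skills tools
result = agent.run_sync("List the available skills and summarize what each does")
print(result.output)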

Helper Utilities

The agent-skills package provides helper utilities for skill composition:

from agent_skills import wait_for, retry, run_with_timeout, parallel, RateLimiter

# Retry an operation on failure
result = await retry(my_async_function, max_attempts=3, delay=1.0)

# Run with timeout
result = await run_with_timeout(slow_operation(), timeout=30.0)

# Run multiple operations in parallel
results = await parallel([task1(), task2(), task3()])

# Rate limiting
limiter = RateLimiter(max_calls=10, period=60)  # 10 calls per minute
async with limiter:
    await api_call()

Best Practices

  1. Clear naming: Use descriptive names that indicate what the skill does
  2. Documentation: Include docstrings with Args, Returns, and Examples
  3. Single responsibility: Each skill should do one thing well
  4. Error handling: Include try/except for robust execution (see the sketch after this list)
  5. Composability: Design skills to be easily combined with others
  6. Testing: Test skills independently before composing them
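
As an example of practice 4, a defensive variant of the earlier file-processing skill records per-file failures instead of aborting (the skill name is illustrative; the filesystem tools are the same ones used above):

# skills/safe_batch_process.py  (hypothetical)
"""Process files, recording failures instead of aborting."""

async def safe_batch_process(input_dir: str, output_dir: str) -> dict:
    from generated.servers.filesystem import list_directory, read_file, write_file

    entries = await list_directory({"path": input_dir})
    processed, failed = 0, []

    for entry in entries.get("entries", []):
        try:
            content = await read_file({"path": f"{input_dir}/{entry}"})
            await write_file({"path": f"{output_dir}/{entry}", "content": content.upper()})
            processed += 1
        except Exception as e:
            # Record the failure and continue with the next file
            failed.append({"entry": entry, "error": str(e)})

    return {"processed": processed, "failed": failed}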

See Also