Skill Documentation: Writing SKILL.md Files
Learn the best practices for writing SKILL.md files that clearly communicate what your skill does, when to use it, and how to get the best results.
What You Will Get
After reading this guide, you will be able to write SKILL.md files that make your skills easy to understand and use. Good documentation is the difference between a skill that gets installed and used successfully, and one that gets installed, tried once, and abandoned.
The SKILL.md file is the primary interface between your skill and the agent. The agent reads it to understand when to activate the skill, what steps to follow, and how to handle edge cases. Clear, well-structured documentation leads to reliable skill behavior. Vague or incomplete documentation leads to unpredictable results.
This guide covers the essential sections, writing style, common mistakes to avoid, and examples from popular skills. Whether you are documenting a simple skill or a complex multi-step workflow, the principles are the same: be specific, be actionable, and be thorough.
How to Write a SKILL.md
Structure and content for effective documentation
Write the Skill Summary
Start with a one-paragraph summary that answers two questions: what does this skill do, and when should the agent use it? Be specific about activation triggers. Instead of "helps with GitHub," write "monitors GitHub repositories for new pull requests and posts automated code review comments with inline suggestions." The agent uses this summary to decide when to activate the skill.
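As a sketch, the opening summary for a hypothetical PR-review skill might look like this (the skill name and wording are illustrative, not a required format):

```markdown
# PR Review Assistant

Monitors GitHub repositories for new pull requests and posts automated
code review comments with inline suggestions. Activates when a pull
request is opened or when the user explicitly asks for a code review.
```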
Define When to Use This Skill
Add an explicit list of activation triggers. Use phrases like: "Use when the user asks about PR reviews," "Use when a pull_request webhook event is received," "Use when the user mentions code review or review my code." These triggers tell the agent exactly when this skill is relevant, reducing false activations and missed activations.
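A trigger list for the same hypothetical PR-review skill could be written as a simple bulleted section; the final negative trigger is an optional addition that can further reduce false activations:

```markdown
## When to Use This Skill

- Use when the user asks about PR reviews.
- Use when a `pull_request` webhook event is received.
- Use when the user mentions "code review" or "review my code".

Do NOT use for issues, discussions, or commits outside a pull request.
```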
Write Step-by-Step Instructions
Provide detailed instructions the agent follows when the skill is activated. Write them as clear, numbered steps. Each step should describe one action with enough detail that the agent knows exactly what to do. Include decision points: "If the PR has more than 500 changed lines, summarize by file instead of reviewing line by line."
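Numbered steps for the hypothetical PR-review skill might read as follows; note how the decision point from above becomes its own step:

```markdown
## Instructions

1. Fetch the PR metadata and the list of changed files.
2. If the PR has more than 500 changed lines, summarize by file
   instead of reviewing line by line.
3. Review each remaining file and draft inline comments.
4. Post the review as a single comment with severity labels.
```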
Include Input and Output Specifications
Document what inputs the skill expects and what outputs it produces. Specify the format, required fields, and optional fields. For example: "Input: GitHub PR URL or PR number. Output: Structured review comment posted on the PR with severity levels (blocker, warning, suggestion)." Clear specs prevent format mismatches.
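Expanded into its own section, that spec might look like this (the optional target-branch input is an illustrative addition):

```markdown
## Input

- GitHub PR URL or PR number (required)
- Target branch name (optional; defaults to the PR's base branch)

## Output

- A structured review comment posted on the PR, with each finding
  labeled by severity: blocker, warning, or suggestion.
```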
Add Configuration Documentation
If the skill has configurable parameters, document each one. Include the parameter name, description, type (string, number, boolean), default value, and valid range. Mark which parameters are required versus optional. Provide example configurations for common use cases.
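A table works well for this. The parameters below are hypothetical examples for the PR-review skill, not a required set:

```markdown
## Configuration

| Parameter      | Type    | Required | Default | Description                         |
|----------------|---------|----------|---------|-------------------------------------|
| max_lines      | number  | no       | 500     | Line count above which to summarize |
| severity_floor | string  | no       | warning | Lowest severity level to report     |
| post_as_draft  | boolean | no       | false   | Post the review as a draft          |
```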
Write Error Handling Instructions
Tell the agent what to do when things go wrong. Document common failure scenarios and the appropriate response for each. For example: "If the GitHub API returns a 403 error, inform the user that the token may lack sufficient permissions and suggest checking the connection settings." Without error handling instructions, the agent may fail silently or produce confusing error messages.
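An error handling section might collect those scenarios into one list; the not-found and rate-limit cases here are illustrative additions beyond the 403 example:

```markdown
## Error Handling

- If the GitHub API returns a 403 error, inform the user that the
  token may lack sufficient permissions and suggest checking the
  connection settings.
- If the PR cannot be found, ask the user to confirm the repository
  and PR number before retrying.
- On a rate-limit (429) response, wait and retry once, then report
  the failure if it persists.
```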
Add Examples and Edge Cases
Include at least three example interactions showing the skill in use. Show the user input, the expected agent behavior, and the output. Also document edge cases: what happens with empty input, very large input, or unusual formats. Examples are the most-read part of skill documentation.
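One way to format an example interaction plus an edge case, again using the hypothetical PR-review skill (repository and PR number are invented for illustration):

```markdown
## Examples

**User:** Review PR #42 in acme/webapp.
**Agent:** Fetches the PR, reviews the 3 changed files, and posts a
review with one warning and two suggestions.

**Edge case (empty diff):** The PR has no changed lines. The agent
posts a note that there is nothing to review instead of posting an
empty review.
```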
Tips and Best Practices
Be Specific, Not Vague
Replace vague instructions with specific ones. Instead of "process the data appropriately," write "parse the JSON response, extract the title and body fields, and format them as a markdown summary with the title as an H2 heading." Specificity produces reliable behavior.
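Side by side, the difference is easy to see:

```markdown
<!-- Vague -->
Process the data appropriately.

<!-- Specific -->
Parse the JSON response, extract the `title` and `body` fields, and
format them as a markdown summary with the title as an H2 heading.
```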
Use Conditional Logic Explicitly
When the agent needs to make decisions, spell out the conditions: "If the file is a test file (ends in .test.ts or .spec.ts), skip the review and note it was skipped." Do not assume the agent will infer conditions from context.
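Written out in a SKILL.md, an explicit condition pairs each branch with its action; the "otherwise" branch here is an illustrative addition:

```markdown
- If the file is a test file (ends in `.test.ts` or `.spec.ts`),
  skip the review and note that it was skipped.
- Otherwise, review the file normally.
```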
Test Your Documentation
After writing the SKILL.md, test it by using the skill and checking whether the agent follows the instructions correctly. Have someone unfamiliar with the skill read the documentation and provide feedback. Clear documentation should be understandable without additional explanation.
Keep It Updated
When you change the skill's behavior, update the documentation immediately. Stale documentation causes the agent to follow outdated instructions, which produces incorrect results. Include a last updated date so users know the documentation is current.