# How to Create Your First Agent Skill
A step-by-step guide to writing an agent skill from scratch: directory structure, SKILL.md format, effective descriptions, common patterns, and a complete working example.
Every time you explain your commit message format, your deployment steps, or your component structure to an AI agent, you’re repeating yourself. Agent skills let you write those instructions once in a markdown file, and any compatible agent will load them automatically when the task matches. The SKILL.md format works across Cursor, Claude Code, Codex, and many other tools, with no build step or SDK required.
## Folder Structure
A skill is a folder containing a SKILL.md file. The folder name should be lowercase with hyphens and no spaces:
```
commit-messages/
  SKILL.md
```
For more complex skills, you can include reference docs, scripts, and static assets alongside the SKILL.md. The agent reads SKILL.md first and only loads supporting files when the instructions explicitly reference them:
```
deploy-checklist/
  SKILL.md
  references/
    REFERENCE.md
  scripts/
    validate.sh
  assets/
    config-template.json
```
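If you create skills often, a few lines of Python can scaffold the minimal layout. The `scaffold_skill` helper below is a hypothetical convenience, not part of any tool; the name and description passed in are placeholders:

```python
from pathlib import Path

def scaffold_skill(root: str, name: str, description: str) -> Path:
    """Create a skill folder with a minimal SKILL.md stub and return its path."""
    skill_dir = Path(root) / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    skill_md = skill_dir / "SKILL.md"
    skill_md.write_text(
        f"---\nname: {name}\ndescription: {description}\n---\n\n# {name}\n",
        encoding="utf-8",
    )
    return skill_md

# Scaffold a project-level skill for Cursor
path = scaffold_skill(".cursor/skills", "commit-messages",
                      "Generate commit messages following the project's format.")
```

From there, you fill in the markdown body by hand; the stub only gets the frontmatter right.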
## Where Each Tool Stores Skills
The SKILL.md format is the same everywhere, so only the directory path changes:
| Tool | Project-level | User-level (all projects) |
|---|---|---|
| Cursor | `.cursor/skills/` | `~/.cursor/skills/` |
| Claude Code | `.claude/skills/` | `~/.claude/skills/` |
| Codex | `.agents/skills/` | `~/.agents/skills/` |

Cursor also cross-reads `.claude/skills/`, `.codex/skills/`, and `.agents/skills/`, so skills written for other tools work in Cursor automatically.
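Because the format is identical everywhere, a skill can be mirrored into each tool's project-level directory when you work with agents that don't cross-read. A sketch using the paths from the table above (`sync_skill` is a hypothetical helper, not a built-in command of any of these tools):

```python
import shutil
from pathlib import Path

# Project-level skills directories from the table above
TOOL_DIRS = [".cursor/skills", ".claude/skills", ".agents/skills"]

def sync_skill(skill_dir: str) -> list[Path]:
    """Copy one skill folder into each tool's project-level skills directory."""
    src = Path(skill_dir)
    copies = []
    for tool_dir in TOOL_DIRS:
        dest = Path(tool_dir) / src.name
        if dest.resolve() == src.resolve():
            continue  # the skill already lives here
        shutil.copytree(src, dest, dirs_exist_ok=True)
        copies.append(dest)
    return copies
```

Symlinks would avoid the duplication, but copies are the more portable choice across platforms.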
## The SKILL.md File
Every SKILL.md starts with YAML frontmatter followed by a markdown body with the actual instructions.
### Required Frontmatter
The frontmatter needs two fields: `name` (lowercase letters, numbers, and hyphens, max 64 characters, matching the folder name) and `description` (what the skill does and when to use it, max 1024 characters):
```yaml
---
name: commit-messages
description: Generate commit messages following the project's conventional commits format. Use when the user asks for help with commit messages or is committing code.
---
```
The description is the most important field because it’s what the agent reads to decide whether this skill is relevant to the current task. A vague description means the skill never gets loaded, while a specific one triggers reliably.
There are also optional fields like `license`, `compatibility`, `allowed-tools` (pre-approved tools the agent can use without asking), `disable-model-invocation` (prevents automatic loading so the skill only activates via an explicit `/skill-name` command), and `metadata` for arbitrary key-value pairs. Most skills only need `name` and `description`.
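These constraints are easy to check mechanically before an agent ever sees the file. A minimal validation sketch (hand-rolled frontmatter parsing rather than a YAML library, so it only handles simple single-line fields):

```python
import re

# Lowercase letters, numbers, and hyphens only
NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def validate_skill(skill_md_text: str, folder_name: str) -> list[str]:
    """Return a list of frontmatter problems (empty list means valid)."""
    problems = []
    match = re.match(r"^---\n(.*?)\n---\n", skill_md_text, re.DOTALL)
    if not match:
        return ["missing YAML frontmatter"]
    fields = {}
    for line in match.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    name = fields.get("name", "")
    description = fields.get("description", "")
    if not NAME_RE.fullmatch(name) or len(name) > 64:
        problems.append("name must be lowercase letters, numbers, and hyphens, max 64 chars")
    if name != folder_name:
        problems.append("name must match the folder name")
    if not description or len(description) > 1024:
        problems.append("description is required, max 1024 chars")
    return problems
```

Running a check like this in CI keeps a shared skills directory from silently accumulating skills that never load.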
### Writing Good Descriptions
The best descriptions state what the skill does and when the agent should use it:
```yaml
# Bad: vague, no trigger terms
description: Helps with git stuff

# Good: specific capabilities, clear triggers
description: Generate commit messages using conventional commits format (feat, fix, chore, refactor). Use when the user asks to commit, write a commit message, or review staged changes.
```
Write the description in third person because it gets injected into the agent’s system prompt, where “I can help you” would read strangely.
### The Instruction Body
The markdown body after the frontmatter is where the actual skill lives. You’re not teaching the agent general knowledge; it already knows how git works, how React components are structured, and how deployments typically run. You’re giving it the specifics it doesn’t have: your team’s conventions, your project’s preferences, and your particular workflows.
````markdown
# Commit Messages

## Format

Use conventional commits:

```
type(scope): description

Optional body explaining why, not what.
```

## Types

- `feat`: new feature
- `fix`: bug fix
- `refactor`: code change that doesn't fix a bug or add a feature
- `chore`: maintenance tasks
- `docs`: documentation only

## Rules

- Keep the subject line under 72 characters
- Use imperative mood: "add feature" not "added feature"
- Don't end the subject with a period
- Include scope when the change is limited to a specific module
````
## How Skills Load
Skills don’t dump everything into the context window at once. They use a three-phase model:
- **Discovery** (~100 tokens): the agent reads only the name and description to decide relevance.
- **Activation** (<5,000 tokens): if the task matches, the full SKILL.md is loaded into context.
- **Execution**: the agent follows the instructions and loads referenced files only as needed.
This is why you should keep SKILL.md under 500 lines and move extensive reference material into separate files that the agent pulls in on demand:
```markdown
## API Conventions

Follow the patterns in [API_STANDARDS.md](references/API_STANDARDS.md) for endpoint naming,
error responses, and pagination.
```
Keep references one level deep and avoid chaining references to other references.
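One way to keep an eye on the discovery budget is to estimate the cost of every skill you have installed. Assuming the common rough heuristic of ~4 characters per token (`discovery_cost` is a hypothetical helper, and the estimate is deliberately crude):

```python
import re
from pathlib import Path

def discovery_cost(skills_root: str) -> int:
    """Rough token estimate for the name + description of every installed skill."""
    total_chars = 0
    for skill_md in Path(skills_root).glob("*/SKILL.md"):
        text = skill_md.read_text(encoding="utf-8")
        for field in ("name", "description"):
            m = re.search(rf"^{field}:(.*)$", text, re.MULTILINE)
            if m:
                total_chars += len(m.group(1).strip())
    return total_chars // 4  # ~4 characters per token, a rough heuristic
```

Even with dozens of skills installed, the discovery phase should stay in the low thousands of tokens; if it doesn't, your descriptions are doing too much work.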
## Instruction Patterns
These are recurring structures that work well inside the markdown body.
Templates give the agent an output format to follow:
```markdown
## PR Description Template

## Summary
[1-3 bullet points describing the change]
## Test Plan
[How to verify this works]
## Breaking Changes
[List any breaking changes, or "None"]
```
Workflows break multi-step processes into sequences:
```markdown
## Deployment Process

1. Run the test suite: `npm test`
2. Build the production bundle: `npm run build`
3. Run the staging smoke tests: `npm run test:staging`
4. Deploy to production: `npm run deploy`
5. Verify the health check: `curl https://api.example.com/health`
```
Conditionals guide the agent through decision points:
```markdown
## Database Changes

**Adding a new column?**
-> Create a migration file in `db/migrations/`
-> Column must have a default value
-> Run `npm run db:migrate` to apply

**Modifying an existing column?**
-> Never rename directly; create a new column, migrate data, drop the old one
-> This requires two separate deployments
```
Feedback loops build validation into quality-critical tasks:
```markdown
## After generating tests

1. Run the tests: `npm test`
2. If any test fails, fix the test (not the source code)
3. Run coverage: `npm run test:coverage`
4. If coverage dropped, add missing test cases
5. Only proceed when all tests pass and coverage is stable
```
## Invoking Skills
Once the file is in place, the agent can pick it up in three ways.
**Automatic discovery** is the default. Start a new chat and work on a relevant task, and the agent matches the task against skill descriptions and loads the right one without any explicit invocation.

**Explicit invocation** lets you trigger a skill on demand by typing `/skill-name` in Cursor or Claude Code, or `$skill-name` in Codex. In Cursor you can also type `@skill-name` to attach the skill as context without invoking it.

**Quick creation** scaffolds a new skill for you. Type `/create-skill` in Cursor or `$skill-creator` in Codex, describe what you want, and the agent generates the folder structure and SKILL.md.
## Full Example

Here’s a skill for scaffolding React components in a project that uses a specific file structure. It lives in `.cursor/skills/react-components/SKILL.md`:
````markdown
---
name: react-components
description: Scaffold React components following project conventions. Use when creating new components, refactoring existing ones, or when the user asks about component structure.
---

# React Components

## File Structure

Every component gets its own directory:

```
src/components/
  Button/
    Button.tsx       # Component
    Button.test.tsx  # Tests
    index.ts         # Re-export
```

## Component Template

```tsx
import styles from './Button.module.css';

interface ButtonProps {
  children: React.ReactNode;
  variant?: 'primary' | 'secondary';
  onClick?: () => void;
}

export function Button({ children, variant = 'primary', onClick }: ButtonProps) {
  return (
    <button className={styles[variant]} onClick={onClick}>
      {children}
    </button>
  );
}
```

## Conventions

- Named exports only, no default exports
- Props interface named `ComponentNameProps`
- Functional components only
- Co-locate tests with the component
- Re-export from index.ts for clean imports
````
## Vetting Community Skills
Community skill registries are growing, and you should treat installing a skill the same way you’d treat installing an npm package from an unknown publisher. Before adding one, read the SKILL.md in full, inspect any scripts in the `scripts/` directory, and check whether the skill requests tools, network access, or file system writes that don’t match its stated purpose. Pay attention to the `allowed-tools` field; a “commit-messages” skill shouldn’t need `Bash(rm:*)` or network access.
The safest skills are the ones you write yourself. Start there, and expand to community skills once you’re comfortable reading and auditing them.
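Some of this vetting can be partially automated. The sketch below flags only the obvious patterns mentioned above (destructive or network commands in `allowed-tools`, network calls in bundled scripts); treat it as a first pass, never a substitute for reading the files yourself:

```python
import re
from pathlib import Path

# Patterns loosely based on the red flags above; extend to taste
SUSPICIOUS_TOOLS = re.compile(r"Bash\((rm|curl|wget|sudo)", re.IGNORECASE)
NETWORK_HINTS = re.compile(r"\b(curl|wget|nc)\b|https?://")

def audit_skill(skill_dir: str) -> list[str]:
    """Flag obvious red flags in a skill folder; an empty list means none found."""
    flags = []
    folder = Path(skill_dir)
    skill_md = (folder / "SKILL.md").read_text(encoding="utf-8")
    if SUSPICIOUS_TOOLS.search(skill_md):
        flags.append("SKILL.md requests destructive or network commands in allowed-tools")
    for script in folder.glob("scripts/*"):
        if script.is_file() and NETWORK_HINTS.search(
            script.read_text(encoding="utf-8", errors="ignore")
        ):
            flags.append(f"{script.name}: appears to make network calls")
    return flags
```

A malicious skill can trivially evade keyword matching, which is exactly why the manual read-through stays mandatory.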