
Claude Code Skills: Turning Team Workflows into Reusable Playbooks

Sergey Kaplich

Claude Code Skills let teams turn repeatable workflows—code review, onboarding, deployments, scaffolding—into reusable SKILL.md playbooks that Claude can run via slash commands or load automatically based on task context. They reduce tribal knowledge, enforce consistency, and speed up onboarding by pairing skill instructions with stable repo context in CLAUDE.md. Start with one high-friction task, commit the skill to your repo, and iterate on it like code.
Every team has that developer. The one who knows the exact deploy sequence. Who remembers the fourteen-step PR review checklist. Who set up the project scaffolding three years ago and is the only person who can reliably reproduce it.
Now imagine that developer leaves. Or goes on vacation. Or just has a bad Monday.
The knowledge walks out the door. The rest of the team improvises. Inconsistencies creep in. Reviews miss things. New hires take weeks to get productive because the "real" workflow lives in someone's head, not in any document anyone actually reads.
Claude Code Skills fix this: they turn your team's workflows into reusable, executable instructions that run the same way every time.
The Claude overview describes Anthropic's AI coding assistant as a CLI tool and VS Code extension that works directly in your codebase. The workflow docs explain how it reads files, runs commands, creates commits, and executes multi-step tasks.
Skills are the reusable part. Each skill is a directory containing a SKILL.md file: markdown with YAML frontmatter that teaches Claude how to complete a specific, repeatable task. Think of them as playbooks: structured instructions that encode your team's best practices into something Claude can execute consistently.
Anthropic uses a couple of terms here:
- Skill: the directory that packages the instructions
- SKILL.md: the instruction file inside a skill directory

They overlap in practice, but when you're building them, you're writing SKILL.md files.
The problems this addresses are concrete: tribal knowledge that leaves with people, inconsistent reviews, and slow onboarding.
Skills are distinct from CLAUDE.md files (persistent project configuration), Custom Instructions (personal preferences), and MCP connections (external service integrations). As Anthropic puts it: "MCP connections give Claude access to tools, while Skills teach Claude how to use those tools effectively."
The core idea behind Skills is progressive disclosure, a three-level loading architecture that keeps Claude's context window from filling up with instructions it doesn't need yet.
| Level | What Loads | When |
|---|---|---|
| Level 1 (Discovery) | YAML frontmatter: minimal metadata | Always, at startup |
| Level 2 (Context) | The SKILL.md body: detailed instructions | When Claude determines the skill is relevant |
| Level 3 (Implementation) | Linked files: templates, examples, scripts | Only during active execution |
Multi-tool setups can have serious token overhead before a conversation even starts. Progressive disclosure keeps your library from consuming the whole budget up front. Claude loads metadata first, then pulls in details only when it needs to execute.
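To see the shape of this, here is a toy sketch in Python. The parsing and the relevance check are stand-ins (the real mechanism is internal to Claude Code); the point is that only frontmatter sits in context up front, and the body joins only when the task matches.

```python
# Toy model of progressive disclosure. Illustrative only -- the actual
# loading logic lives inside Claude Code, not in user code.

def parse_frontmatter(text):
    """Split a SKILL.md-style document into frontmatter fields and body."""
    _, meta, body = text.split("---", 2)
    fields = dict(line.split(":", 1) for line in meta.strip().splitlines())
    return {k.strip(): v.strip() for k, v in fields.items()}, body.strip()

SKILL = """---
name: code-review
description: Reviews code changes for bugs and style violations
---
Review the current diff against main. Check security, performance, style.
"""

meta, body = parse_frontmatter(SKILL)

# Level 1: always in context -- just the metadata.
context = [f"skill {meta['name']}: {meta['description']}"]

# Level 2: the body loads only when the task matches the description.
task = "please review my changes"
if "review" in task:  # crude stand-in for Claude's relevance check
    context.append(body)

print(len(context))  # 2 -> metadata plus the body, loaded on demand
```

The same idea extends to Level 3: linked templates and scripts stay on disk until the skill is actually executing.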
Two invocation modes:
- Manual: type /skill-name [arguments] as a slash command
- Automatic: Claude reads the description field from your YAML frontmatter and loads the skill when it recognizes a matching task context

Automatic loading lives or dies on the description. Make it specific and Claude reaches for the right playbook; make it vague and you will be typing slash commands.
The SKILL.md needs YAML frontmatter with, at minimum, a name (which becomes the slash command) and a description (which triggers automatic loading) (frontmatter rules). The markdown body is natural language. Write instructions the way you would explain a task to a smart colleague.
One critical rule: all supporting files must be explicitly referenced in SKILL.md so Claude knows they exist (file references). Orphaned files in the directory are invisible.
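That rule is easy to enforce mechanically. Here is a hypothetical lint script (not an official tool) that flags files in a skill directory never mentioned in SKILL.md:

```python
import os

def find_orphans(skill_dir):
    """Return supporting files that SKILL.md never references by name.

    Crude check: a file counts as referenced if its filename appears
    anywhere in the SKILL.md text.
    """
    with open(os.path.join(skill_dir, "SKILL.md")) as f:
        text = f.read()
    orphans = []
    for root, _dirs, files in os.walk(skill_dir):
        for name in files:
            if name == "SKILL.md":
                continue
            if name not in text:
                rel = os.path.relpath(os.path.join(root, name), skill_dir)
                orphans.append(rel)
    return sorted(orphans)
```

Run it over each directory under .claude/skills/ in CI, and orphaned files get caught at review time instead of silently ignored at runtime.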
Skills live at four scope levels with clear precedence (skill scopes):
| Scope | Path | Applies To |
|---|---|---|
| Enterprise | Managed settings (IT-deployed) | All users in org |
| Personal | ~/.claude/skills/<skill-name>/SKILL.md | All your projects |
| Project | .claude/skills/<skill-name>/SKILL.md | This repo only |
| Plugin | <plugin>/skills/<skill-name>/SKILL.md | Where the plugin is enabled |
Enterprise overrides personal; personal overrides project (precedence rules). Plugin skills use a plugin-name:skill-name namespace so they cannot collide (plugin namespace).
For teams, commit skills to your repository (commit skills). Everyone who clones gets the same skills automatically.
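The scope table above can be read as a lookup order. A sketch of that resolution follows; the enterprise path is an assumption (real managed-settings locations are IT-configured), while the personal and project paths come from the table.

```python
import os

# Highest priority first, per the precedence described above.
SCOPE_DIRS = [
    ("enterprise", "/etc/claude/skills"),  # assumed path; actually IT-managed
    ("personal",   os.path.expanduser("~/.claude/skills")),
    ("project",    ".claude/skills"),
]

def resolve_skill(name, exists=os.path.isdir):
    """Return (scope, path) of the winning copy of a skill, or None.

    The first scope whose directory contains the skill wins; the
    `exists` predicate is injectable so the logic is easy to test.
    """
    for scope, base in SCOPE_DIRS:
        path = os.path.join(base, name)
        if exists(path):
            return scope, path
    return None
```

With a skill committed at both personal and project scope, the personal copy wins; plugin skills sidestep the contest entirely via their namespaced names.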
Skills are not rigid templates. The markdown body supports natural language instructions that Claude interprets contextually, so the same skill can adapt to different inputs without you hardcoding every variation.
Most flexibility comes from composing skills with the broader Claude Code environment:
- Share .mcp.json with placeholders like ${API_KEY}, and each person sets their own credentials (MCP env vars)
- Use @file imports in CLAUDE.md to break large instruction sets into maintainable chunks (memory docs)
- Connect hooks to event context via $CLAUDE_HOOK_INPUT (hooks guide)

A reliable pattern is to keep each skill focused on one task, use CLAUDE.md for shared context, and let Claude compose skills when workflows get bigger.
Claude Code has two memory systems. Skills work best when you design with both in mind:
- CLAUDE.md (you write): loaded at session start. You can have a user-level file (~/.claude/CLAUDE.md) and a project-level file (<project-root>/CLAUDE.md) (memory docs).
- Auto memory (Claude writes): stored under ~/.claude/projects/<encoded-git-root>/memory/. Only the first 200 lines of MEMORY.md load at startup; topic files load on demand (memory limits).

Long sessions fill the context window: run /compact before you hit limits, and start fresh sessions for distinct tasks (compaction details). Each project maintains a separate memory space, so repos do not bleed into each other (project memory).
Keep CLAUDE.md tight, and use @imports for subsystem details.
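For example, a trimmed CLAUDE.md that delegates subsystem detail to imports (the file names here are illustrative):

```markdown
# CLAUDE.md
## Project Overview
One-paragraph summary of the app and its architecture.

## Commands
See @docs/claude/commands.md

## Subsystems
- Backend conventions: @docs/claude/backend.md
- Frontend conventions: @docs/claude/frontend.md
```

The root file stays small enough to load every session; the imported files carry the detail and can be edited independently.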
Skills do not exist in isolation. They connect to your development infrastructure through three common patterns.
Git + VS Code: The VS Code extension provides a native interface: review and edit plans, auto-accept edits, and @-mention files with line ranges. Git operations (commits, PRs, worktrees) work directly from the IDE.
GitHub Actions (v1.0): The official GitHub Action can respond to @claude mentions in PRs and issues, or run automatically on workflow events, as in this PR-review example:
```yaml
name: Claude Code PR Review
on:
  pull_request:
    types: [opened, synchronize, reopened]
jobs:
  claude_review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      issues: write
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Claude Code review
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: "/review"
          claude_args: "--max-turns 5"
```

Lifecycle hooks: Shell commands, HTTP endpoints, or LLM prompts that fire at hook events: SessionStart, PreToolUse, PostToolUse, SubagentStart/SubagentStop, SessionEnd, and more. Configure them at project scope for team-wide standardization.
Note: GitLab CI/CD integration exists but is in beta, maintained by GitLab rather than Anthropic. Do not treat it as mission-critical without a fallback.
Prerequisites: Claude Code CLI installed (see the install docs), a project repository, and an eligible plan (see plan pricing).
Step 1: Initialize your project context
Run /init inside Claude Code to generate a base CLAUDE.md (init command). Then customize it:
```markdown
# CLAUDE.md
## Project Overview
RunCover is a full-stack web application that generates video summaries of GPX activities.
## Tech Stack
- Backend: .NET 10.0, ASP.NET Core Web API, PostgreSQL
- Frontend: React 19, TypeScript 5.9, Tailwind CSS
## Common Commands
- dotnet build src/RunCover.API --configuration Release
- dotnet test tests/RunCover.Domain.Tests
- dotnet run --project src/RunCover.API
```

(Example adapted from a production .NET/React project.)
Step 2: Create your first skill directory
```bash
mkdir -p .claude/skills/code-review
```

Step 3: Write the SKILL.md
```markdown
---
name: code-review
description: Reviews code changes for bugs, security issues, and style violations
---
# Code Review
Review the current diff against main branch. Check for:
1. Security vulnerabilities (SQL injection, XSS, auth bypasses)
2. Performance issues (N+1 queries, unnecessary re-renders)
3. Style violations per our CLAUDE.md coding standards
4. Missing or inadequate test coverage
Format output as:
- **Critical:** must fix before merge
- **Warning:** should fix, can defer with justification
- **Suggestion:** nice to have
If no issues found, say so clearly. Don't invent problems.
```

Step 4: Commit and use
```bash
git add .claude/skills/code-review/
git commit -m "Add code review skill"
```

Now type /code-review in Claude Code. Or just ask Claude to review your changes. If the description is sharp, it will load the skill automatically.
Every team member who pulls this commit gets the same review process.
The shift from single skills to orchestrated workflows is where the real power lives (advanced tool use). Three patterns from Anthropic:
| Pattern | When to Use |
|---|---|
| Chaining | Sequential skills where one's output feeds the next |
| Orchestration | A coordinator agent directs multiple sub-skills |
| Dynamic invocation | Skills selected at runtime based on context |
The Sentry bug-fix workflow in the Complete Guide shows this in practice: MCP fetches the Sentry issue, an analysis skill identifies root cause and proposes a fix, and a code-fix skill updates the pull request.
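Stripped of the AI, chaining is just function composition over step outputs. A toy sketch follows, with each skill stubbed as a plain function; in a real workflow each step would be a Claude invocation and its output becomes context for the next.

```python
# Each "skill" is a stub; the chaining pattern is the part that matters.

def fetch_issue(issue_id):
    # stand-in for an MCP call that pulls the Sentry issue
    return {"id": issue_id, "error": "NullReferenceException in Parser.Parse"}

def analyze(issue):
    # stand-in for an analysis skill proposing a root cause and fix
    return {"cause": "unchecked null input", "fix": "guard clause in Parse"}

def apply_fix(analysis):
    # stand-in for a code-fix skill updating the pull request
    return f"PR updated: {analysis['fix']}"

def run_chain(issue_id, steps=(fetch_issue, analyze, apply_fix)):
    """Feed each step's output into the next step -- the chaining pattern."""
    result = issue_id
    for step in steps:
        result = step(result)
    return result

print(run_chain("SENTRY-123"))  # PR updated: guard clause in Parse
```

Orchestration and dynamic invocation differ only in who picks the steps: a coordinator agent, or a runtime match against the task context.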
One of the most immediately useful patterns is auto-formatting after every Claude edit. In .claude/settings.json:
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "prettier --write $(jq -r '.files[0].path' < $CLAUDE_HOOK_INPUT)"
          }
        ]
      }
    ]
  }
}
```

The matcher field accepts regex patterns. The hooks guide shows that $CLAUDE_HOOK_INPUT provides JSON context about the triggering event.
For skills that call external services, build retry logic with exponential backoff and jitter to prevent thundering herd problems on recovery:
```python
import random

from celery import Celery

app = Celery("tasks")

@app.task(bind=True, max_retries=4, default_retry_delay=10)
def call_external_api(self):
    # make_api_call and ExternalAPIException are placeholders for your
    # integration code.
    try:
        return make_api_call()
    except ExternalAPIException as exc:
        # Exponential backoff (2^retries) with jitter (1.0-1.3x) so clients
        # don't all hit the recovered service at the same instant.
        countdown = int(
            self.default_retry_delay
            * (2 ** self.request.retries)
            * random.uniform(1.0, 1.3)
        )
        raise self.retry(exc=exc, countdown=countdown)
```

And use fully qualified tool names in skills to prevent resolution errors as your library scales (tool naming).
What goes in CLAUDE.md, and what does not, matters more at scale.
Include: special build commands, code style rules, testing instructions, common pitfalls.
Exclude: redundant conventions, detailed API docs (link instead), frequently changing information, long tutorials.
For monorepos: root CLAUDE.md for shared tooling, plus product-specific markdown linked via @imports.
PR automation from the terminal: a one-liner that generates PR descriptions:
```bash
git diff main...HEAD | claude -p "Write a PR description for these changes. Include summary, testing, screenshots, breaking changes. Format as GitHub markdown."
```

Team onboarding via CLAUDE.md: structure your project config as dual-purpose documentation that works for both humans and AI. New hires can use Claude to navigate the codebase, execute tasks, and ramp up without extensive manual onboarding.
McKinsey benchmarks for AI-assisted development vary widely by team and workflow. The more reliable predictor of durable gains is whether you treat CLAUDE.md and skills as a feedback loop: when Claude gets something wrong, you fix the instruction file in a PR so the next run improves.
Every major AI coding tool has converged on the same basic pattern: git-committed, markdown-based instruction files.
The differences are in the details.
| Capability | Claude Code | GitHub Copilot | Bash/Python |
|---|---|---|---|
| Config format | CLAUDE.md + SKILL.md | copilot-instructions.md | bash/Python files |
| Team sharing | Git-committed | Git-committed | Git-committed |
| Scoping | Personal/Project/Enterprise | Repo rules | Explicit code logic |
| Determinism | Non-deterministic | Non-deterministic | ✅ Deterministic |
| Token costs | Yes | Yes | None |
A simple way to choose: if the task needs contextual judgment, reach for skills; if the output must be byte-for-byte reproducible (compliance, release pipelines), write a script.
Whichever tool you use, treat instruction files like code: review them, iterate on them, and let them improve with the team.
Week 1: Foundation. Set up project-level CLAUDE.md with architecture, build commands, and coding standards. Commit .claude/ configuration to git. Start with one skill for your highest-friction repeated task.
Weeks 2 to 4: Expand. Add skills for code review, PR descriptions, and project scaffolding. Use numbered folder prefixes (ln-001-, ln-010-) for predictable ordering as the library grows.
Weeks 5 to 6: Stabilize. Iterate on skills based on team feedback. Treat corrections as PRs to the skill files.
There is no separate "skill marketplace." The sharing mechanism is git, and that's a feature.
Commit skills. Review them in PRs. Branch them for experiments. Merge what works.
For MCP configurations with credentials, use environment variable expansion (${API_KEY} placeholders) so config is shared but secrets stay local (MCP env vars).
Two details trip people up:
- The filename SKILL.md is case-sensitive, and folder names must be kebab-case: no spaces, underscores, or capitals (naming rules).
- Auto-memory lives at ~/.claude/projects/<encoded-git-root>/memory/ with no configuration option. Issue #28276 tracks a feature request.

The default security model is solid: access defaults to read-only, write operations require explicit approval, and bash execution runs in sandboxed environments using OS-level primitives. Sandboxing can mean 84% fewer prompts, which helps with approval fatigue.
The gap is that Anthropic does not audit third-party MCP servers (MCP audits). Your team must vet them or build your own. Commit MCP allowlists to source control for peer review.
Expect an adjustment period as developers adapt their mental models. The shift from writing code to supervising AI-generated code is a real cognitive change.
Programming knowledge is still a prerequisite. Claude Code amplifies skilled developers; it does not replace the skill.
How long until my team sees productivity gains? Immediate wins tend to show up first on well-scoped tasks. Consistent improvements usually require sustained iteration on CLAUDE.md, skills, and team habits.
Can skills run across Claude.ai, Claude Code CLI, and the API? Yes. Skills defined with this architecture work across platforms without modification.
What IDE support exists? VS Code is the only officially supported IDE integration. Community implementations for other editors exist but lack official support.
How do I prevent skills from bloating the context window? Progressive disclosure handles this automatically: Level 1 metadata loads at startup; full instructions load only when relevant. For large libraries (50+ skills), implement the Tool Search Tool pattern for on-demand discovery.
Are skills deterministic? Will they produce the same output every time? No. AI inference is inherently non-deterministic. If you need guaranteed identical outputs for compliance or regulatory requirements, traditional scripting is the right choice.
What happens to my skills during long sessions? Claude auto-compacts when context fills. Skill instructions loaded early can be lost. Use /compact proactively and start fresh sessions for new tasks.
Can I use this with GitLab instead of GitHub? GitLab CI/CD integration exists but is in beta, maintained by GitLab. Budget for additional validation effort.
What's the most impactful first step? Set up your CLAUDE.md so Claude starts every session with stable, repo-specific context (memory docs).
Here's what this comes down to.
Your team's best practices already exist: in people's heads, in Slack threads, in that one Google Doc from 2023 that half the team has never seen. Skills turn that institutional knowledge into something executable, version-controlled, and improveable.
Start with your highest-friction repeated task. Build one skill. Commit it. Let the team use it, break it, improve it. Treat SKILL.md the way you'd treat any other code: review it, iterate on it, keep it honest.
The tools are here. The pattern, git-committed markdown that directly shapes AI behavior, has emerged across every major AI coding assistant (Copilot instructions). It's not going away.
The question is whether your workflows stay locked in people's heads or start living in your repository, where everyone can use them and build on them.
One SKILL.md at a time.
