Lesson 6 - Platform-Specific Enhancements

Respect the open standard first, then layer optional features for Claude Code, GitHub Copilot, Cursor, and other tools — without breaking portability.

Duration: 1.5-2 hours

Learning Objectives

By the end of this lesson, you will be able to:

  • Use Claude Code extensions: $ARGUMENTS, paths-based auto-activation, context forking, and invocation controls.
  • Understand how GitHub Copilot, Cursor, and Gemini CLI detect and load skills.
  • Design skills with a portability-first mindset so platform features degrade gracefully.
  • Document compatibility directly in SKILL.md frontmatter.

Videos

Claude Code: The Most Feature-Rich Platform

A tour of Claude Code's extensions and how to add them incrementally to the deploy-checklist skill.

Duration: 12 minutes

Video coming soon

GitHub Copilot, Cursor, and the Broader Ecosystem

How other platforms handle skills, where they store them, and which extensions they support.

Duration: 7 minutes

Video coming soon

The Portability-First Playbook

A disciplined approach: build on the standard first, add platform features as opt-in enhancements.

Duration: 6 minutes

Video coming soon

Key Concepts

Claude Code Extensions Reference

**Frontmatter extensions (Claude Code only):**

```yaml
---
name: deploy-checklist
description: Run pre-deploy validations before shipping backend services
paths:                            # Auto-activate for matching files
  - "scripts/deploy/**"
  - "infrastructure/**"
argument-hint: <environment>      # Help text shown in / menu
user-invocable: true              # Show in / menu (default: true)
disable-model-invocation: false   # Allow AI to auto-trigger (default: false)
context: fork                     # Run in isolated subprocess
model: sonnet                     # Override model for this skill
---
```

**String substitutions:**

| Variable | Meaning | Example |
|----------|---------|---------|
| `$ARGUMENTS` | All user input after `/skill-name` | `/deploy-checklist staging` → `staging` |
| `$ARGUMENTS[0]` | First positional argument | Same → `staging` |
| `$1` | Shorthand for first argument | Same → `staging` |
| `${CLAUDE_SKILL_DIR}` | Path to the skill's directory | For referencing bundled files |

**Dynamic context injection:**

```markdown
Current staged changes:
!`git diff --staged`

Current branch:
!`git branch --show-current`
```

The `` !`command` `` syntax runs the command and injects its output into the prompt when the skill loads.
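To make the substitution table concrete, here is a hypothetical instruction body that combines a positional argument with a bundled-script reference (it reuses the `pre-deploy-check.sh` helper from the running example; the exact wording is illustrative):

```markdown
## Workflow
Deploy to the $1 environment (default to 'staging' if $1 is empty).

Run the bundled helper:
`${CLAUDE_SKILL_DIR}/scripts/pre-deploy-check.sh $1`

Current staged changes:
!`git diff --staged`
```

Note that the fallback wording ("default to 'staging' if $1 is empty") is what keeps this usable on platforms that never substitute `$1`.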

Platform Compatibility Matrix

| Feature | Open Standard | Claude Code | Copilot | Cursor | Gemini CLI |
|---------|:---:|:---:|:---:|:---:|:---:|
| SKILL.md + frontmatter | Yes | Yes | Yes | Yes | Yes |
| scripts/ references/ assets/ | Yes | Yes | Yes | Yes | Yes |
| /skill-name invocation | — | Yes | — | — | — |
| $ARGUMENTS | — | Yes | — | — | — |
| paths auto-activation | — | Yes | — | — | — |
| context: fork | — | Yes | — | — | — |
| Dynamic context injection | — | Yes | — | — | — |
| Multiple skill directories | — | — | Yes | Yes | — |

**Key insight:** The standard features (top two rows) work everywhere. The platform features (remaining rows) are optional extras.

Running Example: deploy-checklist with Platform Enhancements

Here's the deploy-checklist frontmatter after adding Claude Code features:

```markdown
---
name: deploy-checklist
description: Run pre-deploy validations before shipping backend services to production
paths:
  - "scripts/deploy/**"
  - "infrastructure/**"
  - "**/Dockerfile"
argument-hint: <environment>
compatibility: Tested on Claude Code 1.x, GitHub Copilot (VS Code)
---

# Deploy Checklist

## Trigger
Activate when deploying backend services.
Target environment: $ARGUMENTS (default to 'staging' if not specified).

## Workflow
1. Run `scripts/pre-deploy-check.sh`
2. Read `assets/required-env-vars.json` — verify vars for the target environment
...
```

**On Claude Code:** `/deploy-checklist production` auto-activates near deploy files and accepts the environment as an argument.

**On Copilot/Cursor:** Works identically, just activated manually or by the AI matching the description; $ARGUMENTS is ignored gracefully.
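The lesson doesn't show the contents of `scripts/pre-deploy-check.sh`. A minimal sketch might look like the following, with the required-variable names (`DEPLOY_DB_URL`, `DEPLOY_API_TOKEN`) hard-coded as illustrative assumptions rather than read from `assets/required-env-vars.json`:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of scripts/pre-deploy-check.sh (not from the lesson).
set -u

pre_deploy_check() {
  # Default to staging: the graceful-degradation rule from the skill body,
  # so the script still works when no argument is substituted or passed.
  env_name="${1:-staging}"
  echo "checking environment: $env_name"

  missing=""
  # A real script would read these from assets/required-env-vars.json;
  # the variable names here are illustrative assumptions.
  for var in DEPLOY_DB_URL DEPLOY_API_TOKEN; do
    eval "val=\${$var:-}"
    [ -n "$val" ] || missing="$missing $var"
  done

  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "all required vars present"
}

# Demo run: with no argument and no vars set, it falls back to staging
# and reports what's missing instead of crashing.
pre_deploy_check "$@" || true
```

Reporting the missing variables (rather than failing silently) gives the AI tool something concrete to surface to the user, whichever platform invoked the skill.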

Troubleshooting Cross-Platform Issues

When a skill works on one platform but not another:

1. **Check the directory path:** is the skill in the right location for that tool?
2. **Validate frontmatter:** YAML typos silently break activation.
3. **Test without platform features:** strip Claude Code extensions and test the baseline.
4. **Check script compatibility:** bash scripts may behave differently on macOS vs. Linux vs. Windows.
5. **Review logs:** many tools show which skills they discovered at startup.

Add findings to your compatibility notes. Future you (and your team) will thank you.
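Step 4 is the one that bites most often. A classic example: GNU sed (Linux) and BSD sed (macOS) disagree on the `-i` flag. A small wrapper, sketched below (not from the lesson), keeps a bundled script portable across both:

```shell
#!/usr/bin/env sh
# Sketch: portable in-place editing across GNU sed (Linux) and BSD sed (macOS).
set -eu

sed_inplace() {
  # GNU sed supports --version; BSD sed does not, so this distinguishes them.
  if sed --version >/dev/null 2>&1; then
    sed -i "$@"        # GNU sed: -i takes no mandatory suffix
  else
    sed -i '' "$@"     # BSD sed: -i requires an explicit (here empty) suffix
  fi
}

# Demo: flip an environment name in a throwaway config file.
tmp=$(mktemp)
printf 'env=dev\n' > "$tmp"
sed_inplace 's/dev/staging/' "$tmp"
cat "$tmp"
rm -f "$tmp"
```

The same feature-detection idea (probe, then branch) applies to other GNU-vs-BSD differences you hit while testing bundled scripts cross-platform.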

Common Mistakes & Pitfalls

Building platform-specific skills first

Start with the standard. If your skill relies on $ARGUMENTS to function, it won't work in Copilot or Cursor. Standard first, enhancements second.

Assuming all features work everywhere

$ARGUMENTS, context: fork, and paths are Claude Code only. Other platforms silently ignore them (no errors), but your skill must work without them.

Not providing defaults for platform features

If $ARGUMENTS is empty (non-Claude Code tools), your skill should still work. Add a default: 'Target environment: $ARGUMENTS (default to staging if not specified).'

Over-using dynamic context injection

!`command` runs on every skill load. Keep commands fast and outputs small. !`git log --oneline -1000` will bloat the context.

Claiming cross-platform without testing

If you say 'works on Claude Code and Copilot', test on both. Use the compatibility field to document what you've actually verified.

Exercises

Exercise 1: Add One Platform Enhancement

20 minutes

Choose a platform feature (e.g., $ARGUMENTS, paths, context: fork) and add it to your skill. Test it end-to-end.

Expected Output:

Updated SKILL.md frontmatter plus test evidence showing the feature working.

Success Criteria:

  • Platform feature added to frontmatter or instruction body.
  • Skill still works WITHOUT the platform feature (graceful degradation tested).
  • Test log shows the feature working on your primary tool.
  • Usage documented inside SKILL.md (e.g., how to pass arguments).

Exercise 2: Cross-Platform Smoke Test

20 minutes

Test your skill on a second platform or mode (e.g., Copilot if you built in Claude Code, or vice versa). Log any differences.

Expected Output:

Compatibility notes documenting what worked and what differed.

Success Criteria:

  • Tested on at least one additional platform.
  • Core workflow verified working without platform extensions.
  • Documented any behavioral differences.
  • Updated the compatibility field in frontmatter.

Exercise 3: Write Compatibility Documentation

10 minutes

Add the compatibility field to frontmatter and document platform-specific usage in SKILL.md.

Expected Output:

Updated SKILL.md with compatibility info and platform-specific usage notes.

Success Criteria:

  • Compatibility field lists platforms and versions tested.
  • Any platform-specific features documented with usage examples.
  • Fallback behavior described for tools that don't support extensions.
  • Changes committed to git.

Lesson Reflection

Take a moment to reflect on what you've learned:

  1. Which platform feature would benefit your skill the most? Why?
  2. Does your skill still work if ALL platform-specific features are removed?
  3. If a teammate uses a different AI tool than you, can they still use your skill?
  4. How would you test your skill on a platform you don't normally use?