Lesson 5 - Power Features: References, Scripts, and Assets

Go beyond plain instructions by bundling templates, automation scripts, and static data with your skill. Learn when to externalize, how to reference files safely, and how to keep everything maintainable.

Duration: 2-2.5 hours

Learning Objectives

By the end of this lesson, you will be able to:

  • Design a reference library that keeps SKILL.md focused while providing battle-tested templates and examples.
  • Author lightweight scripts the AI can invoke for validation or automation, with clear safeguards.
  • Package assets (JSON, CSV, configs) and reference them consistently from the instruction body.
  • Document every supporting file so maintainers understand why it exists.

Videos

When to Externalize: SKILL.md Isn't Enough

Recognize the thresholds for moving templates, checklists, and data out of SKILL.md and into supporting directories.

Duration: 6 minutes

Video coming soon

References in Action: Templates and Examples

Build a reference library for the deploy-checklist skill with a template and a good/bad example pair.

Duration: 10 minutes

Video coming soon

Scripts and Assets: Automation and Data

Add a validation script and a configuration asset to the deploy-checklist skill.

Duration: 8 minutes

Video coming soon

Key Concepts

Running Example: deploy-checklist with Supporting Files

Here's our deploy-checklist skill after adding all three directories:

```
deploy-checklist/
├── SKILL.md                     # Instructions (now references supporting files)
├── references/
│   ├── report-template.md       # The exact format for the deploy report
│   └── example-good.md          # A well-formatted example report
├── scripts/
│   └── pre-deploy-check.sh      # Validates git state and migrations
└── assets/
    └── required-env-vars.json   # Env vars to verify per environment
```

**Updated SKILL.md workflow (excerpt):**

```markdown
## Workflow

1. Run `scripts/pre-deploy-check.sh` — verify clean git state and no pending migrations.
2. Read `assets/required-env-vars.json` — verify all listed variables are set for the target environment.
3. Run the test suite and capture pass/fail counts.
4. Generate the deploy report following `references/report-template.md` exactly.
5. See `references/example-good.md` for tone and detail level.
```

The skill went from "tell me what to check" to "check it for me and report the results."
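If you are starting from scratch, one way to scaffold this layout in a single command is sketched below (the directory names match the tree above; adjust to your skill's name):

```shell
# Create the three supporting directories plus an empty SKILL.md.
mkdir -p deploy-checklist/references deploy-checklist/scripts deploy-checklist/assets
touch deploy-checklist/SKILL.md

# Confirm the structure before adding files.
find deploy-checklist -type d | sort
```

`mkdir -p` is idempotent, so re-running the scaffold on an existing skill is harmless.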

Reference Library Patterns

Organize references/ for clarity:

| File Pattern | Purpose | Example |
|--------------|---------|---------|
| `template-*.md` | Output structure the AI must follow | `template-report.md` |
| `example-good.md` | What great output looks like | A completed deploy report |
| `example-bad.md` | What to avoid (anti-patterns) | A sloppy, incomplete report |
| `guide-*.md` | Decision guides, style references | `guide-severity-levels.md` |

Keep each file focused on ONE thing. When references/ grows beyond 5-6 files, consider whether the skill is trying to do too much.

Script Guardrails

Keep scripts safe and predictable:

```bash
#!/bin/bash
# pre-deploy-check.sh
# Purpose: Verify clean git state and no pending migrations before deploy
# Inputs:  None
# Outputs: Status report to stdout, exit code 0 (pass) or 1 (fail)

set -euo pipefail  # Exit on error, undefined vars, pipe failures

echo "Checking for uncommitted changes..."
if ! git diff --quiet; then
  echo "FAIL: Uncommitted changes found"
  exit 1
fi

echo "Checking for pending migrations..."
# Add your migration check here

echo "PASS: All pre-deploy checks passed"
exit 0
```

**Rules:**

- Start with a comment: purpose, inputs, outputs.
- Use `set -euo pipefail` for safety.
- Echo progress so the AI can report what happened.
- Exit 0 for success, non-zero for failure.
- Never pipe user arguments into destructive commands without validation.
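The exit-code contract is easy to verify before wiring a script into the skill. A minimal sketch, using a throwaway stub in place of the real `pre-deploy-check.sh`:

```shell
# Stand-in script: passes by default, fails when asked to.
tmp=$(mktemp -d)
cat > "$tmp/check.sh" <<'EOF'
#!/bin/bash
set -euo pipefail
if [ "${1:-pass}" = "fail" ]; then
  echo "FAIL: simulated failure"
  exit 1
fi
echo "PASS: all checks passed"
EOF
chmod +x "$tmp/check.sh"

# Exercise both paths and capture the failure exit code.
pass_out=$("$tmp/check.sh")
fail_out=$("$tmp/check.sh" fail) || fail_code=$?
echo "$pass_out"
echo "$fail_out (exit ${fail_code})"
rm -rf "$tmp"
```

Testing both the pass and fail paths manually, before the AI ever runs the script, catches most wiring mistakes.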

Referencing Files from SKILL.md

Always use **relative paths** from the skill directory:

```markdown
# Good — relative paths
Read `references/report-template.md` for the output format.
Run `scripts/pre-deploy-check.sh` to validate pre-conditions.
Load environment requirements from `assets/required-env-vars.json`.

# Bad — absolute paths (break when shared)
Read `/Users/jane/templates/report.md`

# Bad — external URLs (create dependency on network)
Fetch the template from https://example.com/template.md
```

**Why relative paths matter:**

- They work on any machine (your teammate's paths are different from yours).
- They keep the skill self-contained — everything it needs is in the folder.
- They work with progressive disclosure — the AI knows exactly where to look.
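A quick grep can catch absolute paths before the skill is shared. A minimal lint sketch (the pattern only covers common home-directory prefixes, so treat it as a starting point):

```shell
# Write a sample SKILL.md with one good and one bad reference, then lint it.
tmp=$(mktemp -d)
cat > "$tmp/SKILL.md" <<'EOF'
Read `references/report-template.md` for the output format.
Read `/Users/jane/templates/report.md` for legacy notes.
EOF

# Flag back-ticked absolute paths rooted at a home directory.
if grep -nE '`/(Users|home)/' "$tmp/SKILL.md"; then
  lint=warn
  echo "WARN: absolute paths found above; use relative paths instead"
else
  lint=ok
  echo "OK: no absolute paths referenced"
fi
rm -rf "$tmp"
```

Running a check like this before committing keeps portability problems from reaching teammates.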

Supporting Files Documentation Table

Add this to the bottom of your SKILL.md so humans and AI can discover all files:

```markdown
## Supporting Files

| File | Purpose | When to Read |
|------|---------|--------------|
| references/report-template.md | Output format template | Step 4 of workflow |
| references/example-good.md | Tone and detail reference | Step 5 of workflow |
| scripts/pre-deploy-check.sh | Git + migration validation | Step 1 of workflow |
| assets/required-env-vars.json | Required env vars per environment | Step 2 of workflow |
```

This table doubles as a maintenance checklist — if a file is missing from this table, it's either undocumented or unnecessary.
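The reverse check (is every listed file actually present?) is easy to automate. A sketch, assuming paths always start with `references/`, `scripts/`, or `assets/`; here one listed file is deliberately missing to show the failure output:

```shell
# Build a tiny skill dir where one listed file does not exist.
tmp=$(mktemp -d); cd "$tmp"
mkdir -p references scripts assets
cat > SKILL.md <<'EOF'
| references/report-template.md | Output format template | Step 4 |
| scripts/pre-deploy-check.sh | Git + migration validation | Step 1 |
EOF
touch references/report-template.md   # scripts/pre-deploy-check.sh left missing

# Extract every supporting-file path and check that it exists.
grep -oE '(references|scripts|assets)/[A-Za-z0-9._-]+' SKILL.md | sort -u > files.txt
while read -r f; do
  if [ -e "$f" ]; then printf 'OK      %s\n' "$f"; else printf 'MISSING %s\n' "$f"; fi
done < files.txt | tee report.txt
```

Any `MISSING` line means the table and the directory have drifted apart.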

Common Mistakes & Pitfalls

Adding directories before you need them

An empty scripts/ folder adds complexity without value. Wait until you have an actual script to write. Most skills start with just SKILL.md.

Using absolute paths to reference files

`references/template.md` works everywhere. `/Users/jane/skills/template.md` breaks the moment someone else uses the skill.

Putting everything in one massive reference file

Cramming one template, one style guide, and three examples into a single file makes it hard to maintain. Keep each file focused on one thing.

Forgetting to make scripts executable

Scripts need execute permissions: `chmod +x scripts/check.sh`. Without this, the AI can't run them and you'll get a confusing error.
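The failure mode and the fix are easy to see side by side with a throwaway script:

```shell
tmp=$(mktemp -d)
printf '#!/bin/bash\necho "check ran"\n' > "$tmp/check.sh"

# Without the execute bit, invoking the script fails (exit code 126).
"$tmp/check.sh" 2>/dev/null || echo "cannot run yet (exit $?)"

chmod +x "$tmp/check.sh"
result=$("$tmp/check.sh")
echo "$result"    # check ran
rm -rf "$tmp"
```

A habit worth building: `chmod +x` immediately after creating any file in scripts/, then run it once by hand.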

Not documenting supporting files

Without a Supporting Files table, maintainers have to guess what each file does. A 5-line table prevents hours of confusion.

Scripts that modify files outside the skill directory

Skills should validate and report, not silently change things. If a script needs to modify files, make that explicit in SKILL.md so it's reviewable.

Exercises

Exercise 1: Build a Reference Template

20 minutes

Create references/ with a template file that defines your skill's output format. Update SKILL.md to reference it explicitly.

Expected Output:

A references/ directory with a template file, and SKILL.md updated to use it.

Success Criteria:

  • Reference file contains a clear, complete output template.
  • SKILL.md workflow includes a step: 'Read references/[file] and follow its structure.'
  • Tested the skill — confirmed the AI follows the template consistently.
  • Added the file to the Supporting Files table in SKILL.md.

Exercise 2: Add a Validation Script

25 minutes

Create scripts/ with a simple validation script. Wire it into your SKILL.md workflow. Test it manually first, then via the skill.

Expected Output:

A scripts/ directory with an executable script, referenced from SKILL.md.

Success Criteria:

  • Script has a descriptive header comment (purpose, inputs, outputs).
  • Script is executable (chmod +x) and tested manually.
  • SKILL.md workflow includes a step to run the script.
  • Tested the full skill workflow including the script.

Exercise 3: Package an Asset

20 minutes

Add one static data file to assets/ that the AI needs to reference during the workflow (JSON config, checklist data, etc.).

Expected Output:

An assets/ directory with a data file, referenced from SKILL.md.

Success Criteria:

  • Asset file contains structured data the AI reads during the workflow.
  • SKILL.md includes a step: 'Read assets/[file] to determine...'
  • Test confirmed the AI used the asset data correctly.
  • File documented in the Supporting Files table.
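As a starting point for this exercise, here is a sketch of a small env-var asset plus a check that reads it. The file name matches the running example, but the variable names are illustrative and the parsing is deliberately naive (a real skill can simply ask the AI to read the JSON):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/assets"
cat > "$tmp/assets/required-env-vars.json" <<'EOF'
{
  "production": ["DATABASE_URL", "API_KEY"],
  "staging": ["DATABASE_URL"]
}
EOF

# Naive extraction: pull the UPPER_CASE names off the "production" line.
vars=$(grep '"production"' "$tmp/assets/required-env-vars.json" | grep -oE '[A-Z_]+')
for v in $vars; do
  if printenv "$v" > /dev/null; then echo "SET   $v"; else echo "UNSET $v"; fi
done
rm -rf "$tmp"
```

`printenv` exits non-zero for unset variables, so the loop reports each required variable's status without aborting.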

Lesson Reflection

Take a moment to reflect on what you've learned:

  1. Does your skill need supporting files right now, or are you adding them prematurely?
  2. What repetitive validation in your workflow could be automated with a simple script?
  3. If you shared your skill with someone who's never seen it, which reference files would help them understand the expected output?
  4. How would adding a good/bad example pair change the consistency of your skill's output?