Lesson 3 - Crafting High-Fidelity Instructions
Turn your skeleton into actionable instructions that produce consistent results. Learn to brief AI like a capable colleague, embed quality gates, and iterate using real test feedback.
Duration: 2-2.5 hours
Learning Objectives
By the end of this lesson, you will be able to:
- ✓ Structure instructions with the Trigger → Workflow → Constraints → Output framework.
- ✓ Embed quality gates, acceptance criteria, and examples directly inside SKILL.md.
- ✓ Use markdown formatting for clarity without exceeding the 500-line guidance.
- ✓ Run iterative tests, capture deviations, and revise instructions to close gaps.
Videos
Manager Mindset: Briefing a Literal Genius
The mindset shift for writing great instructions — you're defining a contract the AI must execute consistently.
Duration: 8 minutes
Video coming soon
The Four-Part Instruction System
Deep dive into Trigger → Workflow → Constraints → Output, with a live build of the deploy-checklist skill.
Duration: 10 minutes
Video coming soon
Testing Loops and Iteration
Install your skill, test it with real inputs, capture deviations, and translate findings into instruction edits.
Duration: 7 minutes
Video coming soon
Key Concepts
The Four-Part Template
Good vs. Bad Instructions — Side by Side
Quality Gates and Acceptance Criteria
Writing Tips for All Backgrounds
Testing Log Template
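The four-part template can be sketched as a minimal SKILL.md. This sketch uses the deploy-checklist skill from the video as its subject, but the specific steps, constraints, and output lines are illustrative assumptions, not a canonical format:

```markdown
## Trigger
Use this skill when the user asks to prepare a deployment or says
"run the deploy checklist."

## Workflow
1. Confirm the target environment (staging or production) before anything else.
2. Run the test suite and record pass/fail counts.
3. Check that the changelog has an entry for this release.
4. Produce the final checklist in the Output format below.

## Constraints
- Do NOT deploy or run destructive commands; this skill only prepares the checklist.
- Do NOT skip a step silently — if a step cannot be completed, flag it.

## Output
Return a markdown checklist, one line per step, for example:
- [x] Tests passed (412/412)
- [ ] Changelog entry missing — BLOCKED
```

Note how each Workflow step starts with a verb and the Output section shows a concrete example rather than describing the format in prose.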
Common Mistakes & Pitfalls
❌ Writing vague instructions
'Review the code' gives inconsistent results. 'Check for unused imports, functions over 30 lines, and silent error handling' gives the same result every time.
❌ Trying to cover every edge case upfront
A skill that handles 80% of cases consistently is more valuable than one that tries to handle 100% and becomes a confusing wall of text.
❌ Skipping examples
Examples are the single most effective way to communicate what you want. One concrete example of good output eliminates entire paragraphs of description.
❌ Being polite instead of precise
'It would be nice if you could perhaps check...' → 'Check for X. Flag any instance. Return results as a checklist.' Direct language gets direct results.
❌ Not specifying the output format
Without a prescribed format, the AI invents one — differently each time. Specify: 'Return a markdown table with columns: File, Line, Issue, Fix.'
❌ Testing only once
One test doesn't catch inconsistencies. Run at least 3 tests with different inputs. The gap between run 1 and run 3 reveals ambiguity.
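The "vague instructions" and "no output format" pitfalls above can be seen side by side in a before/after fragment (the specific checks are hypothetical examples):

```markdown
<!-- Before: vague — results vary on every run -->
Review the code and let me know if anything looks off.

<!-- After: precise, with a prescribed output format -->
Check for: unused imports, functions over 30 lines, and silent error
handling (empty catch blocks or swallowed return codes).
Flag every instance found.
Return a markdown table with columns: File, Line, Issue, Fix.
```

The "after" version leaves the AI no decisions to make about scope or format, which is what produces consistent results across runs.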
Exercises
Exercise 1: Populate Your Four-Part Instructions
Duration: 30 minutes
Replace the placeholder sections in your SKILL.md with real instructions using the Trigger → Workflow → Constraints → Output framework. Include at least one example in the Output section.
Expected Output:
A complete SKILL.md with all four sections populated and at least one example output.
Success Criteria:
- Trigger references a concrete scenario (not generic).
- Workflow has at least 4 numbered steps, each starting with a verb.
- Constraints include at least 2 things the AI must NOT do.
- Output section includes a concrete example showing the expected format.
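Quality gates can be embedded directly in SKILL.md as pass/fail checks the AI must run before returning its output. A minimal sketch — the specific gates here are illustrative assumptions:

```markdown
## Quality Gates
Before returning the result, verify:
1. Every Workflow step is reflected in the output — if one is missing, redo it.
2. The output matches the prescribed format exactly.
3. No item is marked done without evidence (a command output or file reference).
If any gate fails, fix the output and re-check before responding.
```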
Exercise 2: Live Test Loop (3 Iterations)
Duration: 35 minutes
Install your skill, test it with a real scenario, log the results, fix deviations, and re-test. Repeat at least 3 times.
Expected Output:
A completed test log showing 3 iterations with deviations found and fixes applied.
Success Criteria:
- Tested at least 3 times with real (not toy) inputs.
- Each test logged: scenario, expected result, actual result, deviation, fix.
- At least 3 instruction edits traced directly to observed deviations.
- Final test confirms the fixes work.
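A minimal test log matching the criteria above can be kept as a markdown table. The columns are a suggestion, not a required format:

```markdown
| Run | Scenario (real input) | Expected | Actual | Deviation | Fix applied |
|-----|-----------------------|----------|--------|-----------|-------------|
| 1   |                       |          |        |           |             |
| 2   |                       |          |        |           |             |
| 3   |                       |          |        |           |             |
```

Fill in one row per test run; the Deviation and Fix columns are what turn testing into instruction edits.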
Exercise 3: Peer Review Swap
Duration: 20 minutes
Exchange your SKILL.md with another learner (or read it aloud to yourself as if you've never seen it). Apply the 'new colleague' test: would they know exactly what to do?
Expected Output:
Feedback notes from the reviewer plus your revision plan.
Success Criteria:
- Reviewer identified at least one ambiguous instruction.
- Reviewer confirmed the output example is clear.
- You revised the ambiguous section based on feedback.
- Updated SKILL.md committed to git.
Lesson Reflection
Take a moment to reflect on what you've learned:
1. Read your Lesson 2 skeleton. Where is each section vague enough that two people might interpret it differently?
2. Which of the four parts (Trigger, Workflow, Constraints, Output) do you find hardest to write? Why?
3. Think of a time someone misunderstood your instructions. What was missing that would have prevented it?
4. After testing your skill, what surprised you most about how the AI interpreted your instructions?