# Create `/establish-patterns` Skill for This Project

Use this prompt to bootstrap the `/establish-patterns` skill in any project. Run it after `/tech-plan` has produced architecture specs and before `/init-project` or first feature work.

---

## Prompt

Create a project-level `/establish-patterns` skill in this repo at `.claude/skills/establish-patterns/`. The skill proves out key architecture patterns with throwaway spike code, documents what works, then discards the spikes. The value product is documentation, not code.

### Skill Structure

```
.claude/skills/establish-patterns/
├── SKILL.md
└── references/
    └── pattern-documentation-guide.md
```

### SKILL.md Spec

**Frontmatter:**

```yaml
---
name: establish-patterns
description: >
  Prove architecture patterns with spike code, document what works, then
  discard the spikes. This skill should be used when the user asks to
  "establish patterns", "prove out patterns", "spike the architecture",
  "test the patterns", "prototype the key flows", "validate the architecture",
  "does this stack actually work", "prove this out", "try the key patterns",
  or wants to validate that architectural decisions work before writing
  production code. Also triggers when the user says "let's make sure this
  works before we build it", "spike it out", "prove the architecture", or
  "test the tech choices". Can be invoked by other skills for autonomous
  pattern validation.
allowed-tools:
  - Read
  - Glob
  - Grep
  - Bash
  - Write
  - Edit
  - WebSearch
  - AskUserQuestion
  - Agent
argument-hint: "[module name, pattern area, or nothing to auto-detect]"
---
```

**Body:**

Sits between `/tech-plan` (which makes architecture and stack decisions) and `/init-project` (which scaffolds the repo). This skill's job is to prove that the architectural decisions actually work by writing throwaway spike code, documenting the findings, then discarding the spikes. The value product is documentation, not code.

**Process (10 steps):**

**Step 1: Load Architecture Context**

1. **Argument provided** (`$ARGUMENTS`): Treat as a module name or pattern area to focus on — load specs for that specific module/area first, then surrounding context
2. **No argument**: Load all available architecture context and auto-detect which patterns need proving

Locate architecture artifacts in this order:

- `docs/specs/architecture.md` — tech stack, component diagram, conventions
- `docs/specs/modules/*.md` — module interfaces and dependencies
- `docs/adr/*.md` — decision rationale and constraints
- `CLAUDE.md` — any existing patterns or conventions

If no architecture spec is found, tell the user — suggest running `/tech-plan` first. Don't spike blind.

**Step 2: Determine Mode**

- **Interactive** (user invoked directly): Walk through pattern identification with the user, ask which patterns to prioritize, confirm before spiking each one
- **Autonomous** (another skill delegated here): Identify all critical patterns, spike them in priority order, produce documentation without pauses
- **Detection**: If the conversation shows another skill (e.g., `/tech-plan`) explicitly invoked `/establish-patterns`, operate in autonomous mode. Otherwise, default to interactive.

**Step 3: Identify Critical Patterns**

Scan the architecture spec and module specs for patterns that need proving. Categorize:

| Category | Examples |
|----------|----------|
| **Data access** | DB queries, ORM patterns, migrations, seed data |
| **API patterns** | Endpoint structure, request validation, error responses, auth middleware |
| **Component patterns** | UI component structure, state management, routing |
| **Integration patterns** | External API calls, webhook handling, queue producers/consumers |
| **Cross-cutting** | Logging, error handling, config management, testing setup |

For each pattern, assess:

- **Risk level**: High (architecture depends on it), Medium (important but alternatives exist), Low (straightforward)
- **Dependency**: Does this pattern depend on another being proven first?
- **Claim coverage**: Which technical claims from the architecture spec does this prove?

In interactive mode: present the pattern list and ask the user to confirm priorities. In autonomous mode: order by risk level (high first), resolve dependencies.

**Step 4: Create Spike Workspace**

- Create a temporary spike directory: `_spikes/` at project root
- Add `_spikes/` to `.gitignore` if not already present
- Each spike gets its own subdirectory: `_spikes/<pattern-name>/`
- Spikes are minimal — just enough code to prove the pattern works

**Step 5: Implement Spikes**

For each pattern (in priority order):

1. Write the minimal code that proves the pattern works
2. Run it — verify it actually works (don't just write code and assume)
3. Note what worked, what didn't, and any gotchas discovered
4. Note any deviations from what the architecture spec assumed

Keep spikes small. A spike for "API endpoint pattern" might be one route with request validation, error handling, and a test. Not a full API.

In interactive mode: show the user each spike result before moving to the next. Ask if the pattern feels right or needs adjustment.

**Step 6: Validate Pattern Integration**

After individual spikes work, verify key patterns work together:

- Can the API pattern talk to the data access pattern?
- Does the error handling pattern propagate correctly across layers?
- Does the testing pattern work for the component patterns?

This doesn't need to be exhaustive — focus on the 2-3 most important integration points.
**Step 7: Document Patterns in CLAUDE.md**

For each proven pattern, add to the project's CLAUDE.md using this structure:

````markdown
## Patterns

### [Pattern Name]

[1-2 sentence description of when to use this pattern]

```[language]
// Concrete code example — copy-pasteable, not pseudocode
// Include the key structural elements
// Note any gotchas in comments
```

**Gotchas:**

- [Thing that wasn't obvious and could trip someone up]
````

Documentation guidance:

- Code examples must be concrete and copy-pasteable — pseudocode doesn't help future sessions that need to replicate the pattern exactly
- Include gotchas — these are often the most valuable output because they capture things the architecture spec couldn't predict
- Keep each pattern to 20-40 lines in CLAUDE.md (link to the module spec for details) — CLAUDE.md is loaded into every conversation, so bloat has a real cost
- Match the existing CLAUDE.md structure and conventions
- Read `references/pattern-documentation-guide.md` for the full documentation quality checklist and anti-patterns

**Step 8: Update Module Specs**

For each module spec affected by the proven patterns:

- Add or update the "Key Implementation Considerations" section
- Reference the proven pattern from CLAUDE.md
- Update the testing strategy based on what the spike revealed
- Note any interface changes discovered during spiking

**Step 9: Discard Spikes**

After documentation is complete:

- Delete the `_spikes/` directory entirely
- The patterns live in documentation now — production code gets written cleanly during TDD
- Remove `_spikes/` from `.gitignore` (it was temporary)

In interactive mode: confirm with the user before deleting. In autonomous mode: delete after all documentation is written.
**Step 10: Output Summary**

```
## Patterns Established: [Project Name]

### Patterns Proven
- [Pattern]: [1-line summary] — documented in CLAUDE.md
  - Claims covered: [which architecture claims this validates]

### Patterns Deferred
- [Pattern]: [why deferred — low risk, blocked on dependency, etc.]

### Gotchas Discovered
- [Surprising finding that affects implementation]

### Files Updated
- CLAUDE.md — [N] patterns added
- docs/specs/modules/[name].md — [what changed]

### Architecture Feedback
- [Any findings that should update the architecture spec or ADRs]

### Suggested Next Steps
- `/init-project` — scaffold the repo (if not yet created)
- `/plan-feature` — create implementation plan for first feature
```

### references/pattern-documentation-guide.md Spec

This reference file should contain a table of contents at the top, followed by these sections:

**Table of Contents:**

- Pattern Categories
- Spike Design Principles
- Documentation Quality Checklist
- Anti-Patterns to Avoid
- Adapting to Project Type

**Pattern Categories** — expanded descriptions of each category from Step 3, with examples of what to look for in the architecture spec.
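As one hedged sketch of how those category descriptions might be applied mechanically, a session could scan the architecture spec for category keywords during Step 3. The keyword lists below are illustrative, not a fixed taxonomy, and naive substring matching is only a starting point:

```python
# Hypothetical Step 3 helper: flag which pattern categories an architecture
# spec appears to mention. Keywords are illustrative examples only.
CATEGORY_KEYWORDS = {
    "Data access": ["database", "orm", "migration", "query", "seed"],
    "API patterns": ["endpoint", "route", "middleware", "validation"],
    "Integration patterns": ["webhook", "queue", "external api"],
    "Cross-cutting": ["logging", "error handling", "config"],
}


def candidate_categories(spec_text: str) -> list[str]:
    """Return categories whose keywords appear in the spec (naive substring match)."""
    text = spec_text.lower()
    return [category for category, keywords in CATEGORY_KEYWORDS.items()
            if any(word in text for word in keywords)]
```

For example, a spec mentioning "Postgres via an ORM, REST endpoints with request validation, structured logging" would surface the data access, API, and cross-cutting categories; risk levels and dependencies still need human or model judgment on top.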
**Spike Design Principles:**

- Minimal: prove one thing per spike
- Runnable: must execute, not just compile
- Disposable: assume this code will be deleted
- Documented: capture learnings, not just code
- Integrated: test at least one cross-pattern interaction

**Documentation Quality Checklist:**

- [ ] Code example is concrete and copy-pasteable (not pseudocode)
- [ ] Example includes error handling (not just happy path)
- [ ] Gotchas section captures non-obvious learnings
- [ ] Pattern is scoped to one concern (not a multi-pattern mega-example)
- [ ] CLAUDE.md entry is 20-40 lines (detailed docs in module spec)
- [ ] Module spec updated with implementation considerations from spike
- [ ] Testing approach validated by spike (not just assumed)

**Anti-Patterns to Avoid:**

- Writing production code in spikes (defeats the purpose — spike code is thrown away)
- Skipping the "run it" step (untested spikes prove nothing)
- Documenting only happy paths (gotchas are the most valuable output)
- Over-spiking (if a pattern is well-known and low-risk, skip it)
- Under-documenting (if you delete the spike but the CLAUDE.md entry is vague, you lost information)

**Adapting to Project Type** — guidance on which pattern categories matter most:

- Web app (fullstack): API + component + data access patterns are critical
- API service: data access + integration + error handling patterns are critical
- CLI tool: config management + error handling + testing patterns are critical
- Library/SDK: public API surface + testing + documentation patterns are critical

---

## After Running This Prompt

The skill will be installed at `.claude/skills/establish-patterns/` in the current project. It's invoked with `/establish-patterns` and follows the same interactive/autonomous mode convention as the global skills.