diff --git a/plugin/.claude-plugin/marketplace.json b/plugin/.claude-plugin/marketplace.json new file mode 100644 index 0000000000..a002e1d4a0 --- /dev/null +++ b/plugin/.claude-plugin/marketplace.json @@ -0,0 +1,33 @@ +{ + "name": "speckit-plugins", + "owner": { + "name": "Hamed Mohammadpour", + "url": "https://github.com/hamedmp" + }, + "metadata": { + "description": "Spec-Driven Development (SDD) workflow plugins for Claude Code", + "version": "0.1.0" + }, + "plugins": [ + { + "name": "speckit", + "source": ".", + "description": "Spec-Driven Development workflow - generate specifications, plans, tasks, and implement features systematically", + "version": "0.1.0", + "author": { + "name": "Hamed Mohammadpour" + }, + "homepage": "https://github.com/HamedMP/spec-kit", + "repository": "https://github.com/HamedMP/spec-kit", + "license": "MIT", + "keywords": [ + "specification", + "planning", + "sdd", + "spec-driven-development", + "workflow" + ], + "category": "development-workflow" + } + ] +} diff --git a/plugin/.claude-plugin/plugin.json b/plugin/.claude-plugin/plugin.json new file mode 100644 index 0000000000..a93a31a333 --- /dev/null +++ b/plugin/.claude-plugin/plugin.json @@ -0,0 +1,25 @@ +{ + "name": "speckit", + "version": "0.1.0", + "description": "Spec-Driven Development (SDD) workflow for Claude Code - Generate specifications, plans, tasks, and implement features systematically", + "author": { + "name": "spec-kit" + }, + "contributors": [ + { + "name": "Hamed Mohammadpour", + "url": "https://github.com/hamedmp" + } + ], + "repository": "https://github.com/github/spec-kit", + "license": "MIT", + "keywords": [ + "specification", + "planning", + "development-workflow", + "sdd", + "spec-driven-development", + "project-management", + "task-management" + ] +} diff --git a/plugin/commands/analyze.md b/plugin/commands/analyze.md new file mode 100644 index 0000000000..da388dd474 --- /dev/null +++ b/plugin/commands/analyze.md @@ -0,0 +1,174 @@ +--- +description: 
Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation. +scripts: + sh: .specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks + ps: .specify/scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks +--- + +## User Input + +```text +$ARGUMENTS +``` + +You **MUST** consider the user input before proceeding (if not empty). + +## Goal + +Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit:tasks` has successfully produced a complete `tasks.md`. + +## Operating Constraints + +**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually). + +**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks - not dilution, reinterpretation, or silent ignoring of the principle. + +## Execution Steps + +### 1. Initialize Analysis Context + +Run `{SCRIPT}` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths: + +- SPEC = FEATURE_DIR/spec.md +- PLAN = FEATURE_DIR/plan.md +- TASKS = FEATURE_DIR/tasks.md + +Abort with an error message if any required file is missing (instruct the user to run missing prerequisite command). +For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot"). + +### 2. 
Load Artifacts (Progressive Disclosure) + +Load only the minimal necessary context from each artifact: + +**From spec.md:** +- Overview/Context +- Functional Requirements +- Non-Functional Requirements +- User Stories +- Edge Cases (if present) + +**From plan.md:** +- Architecture/stack choices +- Data Model references +- Phases +- Technical constraints + +**From tasks.md:** +- Task IDs +- Descriptions +- Phase grouping +- Parallel markers [P] +- Referenced file paths + +**From constitution:** +- Load `.specify/memory/constitution.md` for principle validation + +### 3. Build Semantic Models + +Create internal representations (do not include raw artifacts in output): + +- **Requirements inventory**: Each functional + non-functional requirement with a stable key +- **User story/action inventory**: Discrete user actions with acceptance criteria +- **Task coverage mapping**: Map each task to one or more requirements or stories +- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements + +### 4. Detection Passes (Token-Efficient Analysis) + +Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary. + +#### A. Duplication Detection +- Identify near-duplicate requirements +- Mark lower-quality phrasing for consolidation + +#### B. Ambiguity Detection +- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria +- Flag unresolved placeholders (TODO, TKTK, ???, ``, etc.) + +#### C. Underspecification +- Requirements with verbs but missing object or measurable outcome +- User stories missing acceptance criteria alignment +- Tasks referencing files or components not defined in spec/plan + +#### D. Constitution Alignment +- Any requirement or plan element conflicting with a MUST principle +- Missing mandated sections or quality gates from constitution + +#### E. 
Coverage Gaps +- Requirements with zero associated tasks +- Tasks with no mapped requirement/story +- Non-functional requirements not reflected in tasks (e.g., performance, security) + +#### F. Inconsistency +- Terminology drift (same concept named differently across files) +- Data entities referenced in plan but absent in spec (or vice versa) +- Task ordering contradictions +- Conflicting requirements + +### 5. Severity Assignment + +Use this heuristic to prioritize findings: + +- **CRITICAL**: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality +- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion +- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case +- **LOW**: Style/wording improvements, minor redundancy not affecting execution order + +### 6. Produce Compact Analysis Report + +Output a Markdown report (no file writes) with the following structure: + +## Specification Analysis Report + +| ID | Category | Severity | Location(s) | Summary | Recommendation | +|----|----------|----------|-------------|---------|----------------| +| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version | + +(Add one row per finding; generate stable IDs prefixed by category initial.) + +**Coverage Summary Table:** + +| Requirement Key | Has Task? | Task IDs | Notes | +|-----------------|-----------|----------|-------| + +**Constitution Alignment Issues:** (if any) + +**Unmapped Tasks:** (if any) + +**Metrics:** +- Total Requirements +- Total Tasks +- Coverage % (requirements with >=1 task) +- Ambiguity Count +- Duplication Count +- Critical Issues Count + +### 7. 
Provide Next Actions + +At end of report, output a concise Next Actions block: + +- If CRITICAL issues exist: Recommend resolving before `/speckit:implement` +- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions +- Provide explicit command suggestions + +### 8. Offer Remediation + +Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.) + +## Operating Principles + +### Context Efficiency +- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation +- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis +- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow +- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts + +### Analysis Guidelines +- **NEVER modify files** (this is read-only analysis) +- **NEVER hallucinate missing sections** (if absent, report them accurately) +- **Prioritize constitution violations** (these are always CRITICAL) +- **Use examples over exhaustive rules** (cite specific instances, not generic patterns) +- **Report zero issues gracefully** (emit success report with coverage statistics) + +## Context + +{ARGS} diff --git a/plugin/commands/checklist.md b/plugin/commands/checklist.md new file mode 100644 index 0000000000..03892de768 --- /dev/null +++ b/plugin/commands/checklist.md @@ -0,0 +1,127 @@ +--- +description: Generate a custom checklist for the current feature based on user requirements. +scripts: + sh: .specify/scripts/bash/check-prerequisites.sh --json + ps: .specify/scripts/powershell/check-prerequisites.ps1 -Json +--- + +## Checklist Purpose: "Unit Tests for English" + +**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain. 
+ +**NOT for verification/testing**: +- NOT "Verify the button clicks correctly" +- NOT "Test error handling works" +- NOT "Confirm the API returns 200" +- NOT checking if code/implementation matches the spec + +**FOR requirements quality validation**: +- "Are visual hierarchy requirements defined for all card types?" (completeness) +- "Is 'prominent display' quantified with specific sizing/positioning?" (clarity) +- "Are hover state requirements consistent across all interactive elements?" (consistency) +- "Are accessibility requirements defined for keyboard navigation?" (coverage) +- "Does the spec define what happens when logo image fails to load?" (edge cases) + +**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. + +## User Input + +```text +$ARGUMENTS +``` + +You **MUST** consider the user input before proceeding (if not empty). + +## Execution Steps + +1. **Setup**: Run `{SCRIPT}` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list. + - All file paths must be absolute. + - For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot"). + +2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST: + - Be generated from the user's phrasing + extracted signals from spec/plan/tasks + - Only ask about information that materially changes checklist content + - Be skipped individually if already unambiguous in `$ARGUMENTS` + +3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers: + - Derive checklist theme (e.g., security, review, deploy, ux) + - Consolidate explicit must-have items mentioned by user + - Map focus selections to category scaffolding + - Infer any missing context from spec/plan/tasks (do NOT hallucinate) + +4. 
**Load feature context**: Read from FEATURE_DIR:
   - spec.md: Feature requirements and scope
   - plan.md (if exists): Technical details, dependencies
   - tasks.md (if exists): Implementation tasks

5. **Generate checklist** - Create "Unit Tests for Requirements":
   - Create `FEATURE_DIR/checklists/` directory if it doesn't exist
   - Generate a unique checklist filename:
     - Use a short, descriptive name based on the domain (e.g., `ux.md`, `api.md`, `security.md`)
     - Format: `[domain].md`
     - If a file for that domain already exists, append new items to it
   - Number items sequentially starting from CHK001
   - Each `/speckit:checklist` run creates a NEW file for a new domain (or appends to that domain's existing file) - existing checklist content is never overwritten

   **CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
   Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:
   - **Completeness**: Are all necessary requirements present?
   - **Clarity**: Are requirements unambiguous and specific?
   - **Consistency**: Do requirements align with each other?
   - **Measurability**: Can requirements be objectively verified?
   - **Coverage**: Are all scenarios/edge cases addressed?

   **Category Structure** - Group items by requirement quality dimensions:
   - **Requirement Completeness** (Are all necessary requirements documented?)
   - **Requirement Clarity** (Are requirements specific and unambiguous?)
   - **Requirement Consistency** (Do requirements align without conflicts?)
   - **Acceptance Criteria Quality** (Are success criteria measurable?)
   - **Scenario Coverage** (Are all flows/cases addressed?)
   - **Edge Case Coverage** (Are boundary conditions defined?)
   - **Non-Functional Requirements** (Performance, Security, Accessibility, etc. - are they specified?)
   - **Dependencies & Assumptions** (Are they documented and validated?)
   - **Ambiguities & Conflicts** (What needs clarification?)
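As an illustration only (not part of the command itself), the directory, filename, and CHK numbering rules above can be sketched in a few lines; the helper names here are hypothetical:

```python
import re
from pathlib import Path

def checklist_path(feature_dir: str, domain: str) -> Path:
    """Return FEATURE_DIR/checklists/[domain].md, creating the directory if needed."""
    checklists = Path(feature_dir) / "checklists"
    checklists.mkdir(parents=True, exist_ok=True)
    return checklists / f"{domain}.md"

def next_chk_id(existing_text: str) -> str:
    """Next globally incrementing ID: CHK001 if none exist, else max seen + 1."""
    ids = [int(n) for n in re.findall(r"CHK(\d{3})", existing_text)]
    return f"CHK{(max(ids) + 1 if ids else 1):03d}"
```

Appending each new `- [ ] CHK### <item>` line with `next_chk_id` applied to the file's current contents keeps IDs unique across reruns against the same domain file.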
+ + **HOW TO WRITE CHECKLIST ITEMS**: + + WRONG (Testing implementation): + - "Verify landing page displays 3 episode cards" + - "Test hover states work on desktop" + - "Confirm logo click navigates home" + + CORRECT (Testing requirements quality): + - "Are the exact number and layout of featured episodes specified?" [Completeness] + - "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity] + - "Are hover state requirements consistent across all interactive elements?" [Consistency] + - "Are keyboard navigation requirements defined for all interactive UI?" [Coverage] + - "Is the fallback behavior specified when logo image fails to load?" [Edge Cases] + + **ITEM STRUCTURE**: + Each item should follow this pattern: + - Question format asking about requirement quality + - Focus on what's WRITTEN (or not written) in the spec/plan + - Include quality dimension in brackets [Completeness/Clarity/Consistency/etc.] + - Reference spec section `[Spec Section X.Y]` when checking existing requirements + - Use `[Gap]` marker when checking for missing requirements + + **Traceability Requirements**: + - MINIMUM: >=80% of items MUST include at least one traceability reference + - Each item should reference: spec section, or use markers: `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]` + + **Content Consolidation**: + - Soft cap: If raw candidate items > 40, prioritize by risk/impact + - Merge near-duplicates checking the same requirement aspect + - If >5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]" + +6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### ` lines with globally incrementing IDs starting at CHK001. + +7. 
**Report**: Output full path to created checklist, item count, and remind user that each run creates a new file. Summarize: + - Focus areas selected + - Depth level + - Actor/timing + - Any explicit user-specified must-have items incorporated + +**Important**: Each `/speckit:checklist` command invocation creates a checklist file using short, descriptive names. This allows: +- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`) +- Simple, memorable filenames that indicate checklist purpose +- Easy identification and navigation in the `checklists/` folder diff --git a/plugin/commands/clarify.md b/plugin/commands/clarify.md new file mode 100644 index 0000000000..c56f54814e --- /dev/null +++ b/plugin/commands/clarify.md @@ -0,0 +1,140 @@ +--- +description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec. +handoffs: + - label: Build Technical Plan + agent: speckit:plan + prompt: Create a plan for the spec. I am building with... +scripts: + sh: .specify/scripts/bash/check-prerequisites.sh --json --paths-only + ps: .specify/scripts/powershell/check-prerequisites.ps1 -Json -PathsOnly +--- + +## User Input + +```text +$ARGUMENTS +``` + +You **MUST** consider the user input before proceeding (if not empty). + +## Outline + +Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file. + +Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit:plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases. + +Execution steps: + +1. Run `{SCRIPT}` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). 
Parse minimal JSON payload fields: + - `FEATURE_DIR` + - `FEATURE_SPEC` + - (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.) + - If JSON parsing fails, abort and instruct user to re-run `/speckit:specify` or verify feature branch environment. + - For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot"). + +2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked). + + Functional Scope & Behavior: + - Core user goals & success criteria + - Explicit out-of-scope declarations + - User roles / personas differentiation + + Domain & Data Model: + - Entities, attributes, relationships + - Identity & uniqueness rules + - Lifecycle/state transitions + - Data volume / scale assumptions + + Interaction & UX Flow: + - Critical user journeys / sequences + - Error/empty/loading states + - Accessibility or localization notes + + Non-Functional Quality Attributes: + - Performance (latency, throughput targets) + - Scalability (horizontal/vertical, limits) + - Reliability & availability (uptime, recovery expectations) + - Observability (logging, metrics, tracing signals) + - Security & privacy (authN/Z, data protection, threat assumptions) + - Compliance / regulatory constraints (if any) + + Integration & External Dependencies: + - External services/APIs and failure modes + - Data import/export formats + - Protocol/versioning assumptions + + Edge Cases & Failure Handling: + - Negative scenarios + - Rate limiting / throttling + - Conflict resolution (e.g., concurrent edits) + + Constraints & Tradeoffs: + - Technical constraints (language, storage, hosting) + - Explicit tradeoffs or rejected alternatives + + Terminology & Consistency: + - Canonical glossary terms + - Avoided synonyms / 
deprecated terms

   Completion Signals:
   - Acceptance criteria testability
   - Measurable Definition of Done style indicators

   Misc / Placeholders:
   - TODO markers / unresolved decisions
   - Ambiguous adjectives ("robust", "intuitive") lacking quantification

3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Apply these constraints:
   - Maximum of 5 total questions across the whole session.
   - Each question must be answerable with EITHER:
     - A short multiple-choice selection (2-5 distinct, mutually exclusive options), OR
     - A one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").
   - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.

4. Sequential questioning loop (interactive):
   - Present EXACTLY ONE question at a time.
   - For multiple-choice questions:
     - **Analyze all options** and determine the **most suitable option** based on best practices
     - Present your **recommended option prominently** at the top with clear reasoning
     - Format as: `**Recommended:** Option [X] - <brief reasoning>`
     - Then render all options as a Markdown table
     - After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
   - For short-answer style (no meaningful discrete options):
     - Provide your **suggested answer** based on best practices and context.
     - Format as: `**Suggested:** <answer> - <brief rationale>`
   - Stop asking further questions when:
     - All critical ambiguities are resolved early, OR
     - User signals completion ("done", "good", "no more"), OR
     - You reach 5 asked questions.
   - Never reveal future queued questions in advance.

5. Integration after EACH accepted answer (incremental update approach):
   - Maintain an in-memory representation of the spec plus the raw file contents.
   - For the first integrated answer in this session:
     - Ensure a `## Clarifications` section exists
     - Under it, create a `### Session YYYY-MM-DD` subheading for today.
   - Append a bullet line immediately after acceptance: `- Q: <question> -> A: <answer>`.
   - Then immediately apply the clarification to the most appropriate section(s).
   - Save the spec file AFTER each integration to minimize risk of context loss.

6. Validation (performed after EACH write plus final pass):
   - Clarifications session contains exactly one bullet per accepted answer (no duplicates).
   - Total asked (accepted) questions <= 5.
   - Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
   - Markdown structure valid; only allowed new headings: `## Clarifications`, `### Session YYYY-MM-DD`.

7. Write the updated spec back to `FEATURE_SPEC`.

8. Report completion:
   - Number of questions asked & answered.
   - Path to updated spec.
   - Sections touched (list names).
   - Coverage summary table listing each taxonomy category with Status.
   - Suggested next command.

Behavior rules:

- If no meaningful ambiguities are found, respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- If the spec file is missing, instruct the user to run `/speckit:specify` first.
- Never exceed 5 total asked questions.
- Respect user early termination signals ("stop", "done", "proceed").

diff --git a/plugin/commands/constitution.md b/plugin/commands/constitution.md
new file mode 100644
index 0000000000..f79afe105d
--- /dev/null
+++ b/plugin/commands/constitution.md
@@ -0,0 +1,82 @@
+---
+description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
+handoffs:
+  - label: Build Specification
+    agent: speckit:specify
+    prompt: Implement the feature specification based on the updated constitution. I want to build...
+--- + +## User Input + +```text +$ARGUMENTS +``` + +You **MUST** consider the user input before proceeding (if not empty). + +## Outline + +You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts. + +Follow this execution flow: + +1. Load the existing constitution template at `.specify/memory/constitution.md`. + - Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`. + **IMPORTANT**: The user might require less or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly. + +2. Collect/derive values for placeholders: + - If user input (conversation) supplies a value, use it. + - Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded). + - For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown ask or mark TODO), `LAST_AMENDED_DATE` is today if changes are made, otherwise keep previous. + - `CONSTITUTION_VERSION` must increment according to semantic versioning rules: + - MAJOR: Backward incompatible governance/principle removals or redefinitions. + - MINOR: New principle/section added or materially expanded guidance. + - PATCH: Clarifications, wording, typo fixes, non-semantic refinements. + - If version bump type ambiguous, propose reasoning before finalizing. + +3. Draft the updated constitution content: + - Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet - explicitly justify any left). 
   - Preserve the heading hierarchy; comments can be removed once replaced unless they still add clarifying guidance.
   - Ensure each Principle section has: a succinct name line, a paragraph (or bullet list) capturing non-negotiable rules, and an explicit rationale if not obvious.
   - Ensure the Governance section lists the amendment procedure, versioning policy, and compliance review expectations.

4. Consistency propagation checklist (convert prior checklist into active validations):
   - Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
   - Read `.specify/templates/spec-template.md` for scope/requirements alignment - update if the constitution adds/removes mandatory sections or constraints.
   - Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
   - Read each command file in `.specify/commands/*.md` (if present) to verify no outdated references.
   - Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to changed principles.

5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update):
   - Version change: old -> new
   - List of modified principles (old title -> new title if renamed)
   - Added sections
   - Removed sections
   - Templates requiring updates (completed/pending) with file paths
   - Follow-up TODOs if any placeholders intentionally deferred.

6. Validation before final output:
   - No remaining unexplained bracket tokens.
   - Version line matches report.
   - Dates in ISO format YYYY-MM-DD.
   - Principles are declarative, testable, and free of vague language ("should" -> replace with MUST/SHOULD rationale where appropriate).

7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).

8.
Output a final summary to the user with:
   - New version and bump rationale.
   - Any files flagged for manual follow-up.
   - Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).

Formatting & Style Requirements:

- Use Markdown headings exactly as in the template (do not demote/promote levels).
- Wrap long rationale lines to keep readability (<100 chars ideally) but do not hard enforce with awkward breaks.
- Keep a single blank line between sections.
- Avoid trailing whitespace.

If the user supplies partial updates (e.g., only one principle revision), still perform the validation and version decision steps.

If critical info is missing (e.g., the ratification date is truly unknown), insert `TODO(<field name>): explanation` and include it in the Sync Impact Report under deferred items.

Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.

diff --git a/plugin/commands/implement.md b/plugin/commands/implement.md
new file mode 100644
index 0000000000..40009e192e
--- /dev/null
+++ b/plugin/commands/implement.md
@@ -0,0 +1,117 @@
+---
+description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
+scripts:
+  sh: .specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks
+  ps: .specify/scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks
+---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. Run `{SCRIPT}` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g., 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2.
**Check checklists status** (if FEATURE_DIR/checklists/ exists): + - Scan all checklist files in the checklists/ directory + - For each checklist, count: + - Total items: All lines matching `- [ ]` or `- [X]` or `- [x]` + - Completed items: Lines matching `- [X]` or `- [x]` + - Incomplete items: Lines matching `- [ ]` + - Create a status table: + + ```text + | Checklist | Total | Completed | Incomplete | Status | + |-----------|-------|-----------|------------|--------| + | ux.md | 12 | 12 | 0 | PASS | + | test.md | 8 | 5 | 3 | FAIL | + | security.md | 6 | 6 | 0 | PASS | + ``` + + - **If any checklist is incomplete**: + - Display the table with incomplete item counts + - **STOP** and ask: "Some checklists are incomplete. Do you want to proceed with implementation anyway? (yes/no)" + - Wait for user response before continuing + - If user says "no" or "wait" or "stop", halt execution + - If user says "yes" or "proceed" or "continue", proceed to step 3 + + - **If all checklists are complete**: + - Display the table showing all checklists passed + - Automatically proceed to step 3 + +3. Load and analyze the implementation context: + - **REQUIRED**: Read tasks.md for the complete task list and execution plan + - **REQUIRED**: Read plan.md for tech stack, architecture, and file structure + - **IF EXISTS**: Read data-model.md for entities and relationships + - **IF EXISTS**: Read contracts/ for API specifications and test requirements + - **IF EXISTS**: Read research.md for technical decisions and constraints + - **IF EXISTS**: Read quickstart.md for integration scenarios + +4. 
**Project Setup Verification**: + - **REQUIRED**: Create/verify ignore files based on actual project setup: + + **Detection & Creation Logic**: + - Check if git repository exists -> create/verify .gitignore + - Check if Dockerfile* exists or Docker in plan.md -> create/verify .dockerignore + - Check if .eslintrc* exists -> create/verify .eslintignore + - Check if eslint.config.* exists -> ensure the config's `ignores` entries cover required patterns + - Check if .prettierrc* exists -> create/verify .prettierignore + - Check if .npmrc or package.json exists -> create/verify .npmignore (if publishing) + - Check if terraform files (*.tf) exist -> create/verify .terraformignore + - Check if .helmignore needed (helm charts present) -> create/verify .helmignore + + **If ignore file already exists**: Verify it contains essential patterns, append missing critical patterns only + **If ignore file missing**: Create with full pattern set for detected technology + + **Common Patterns by Technology** (from plan.md tech stack): + - **Node.js/JavaScript/TypeScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*` + - **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/` + - **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/` + - **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/` + - **Go**: `*.exe`, `*.test`, `vendor/`, `*.out` + - **Ruby**: `.bundle/`, `log/`, `tmp/`, `*.gem`, `vendor/bundle/` + - **PHP**: `vendor/`, `*.log`, `*.cache`, `*.env` + - **Rust**: `target/`, `debug/`, `release/`, `*.rs.bk`, `*.rlib` + - **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/` + +5. Parse tasks.md structure and extract: + - **Task phases**: Setup, Tests, Core, Integration, Polish + - **Task dependencies**: Sequential vs parallel execution rules + - **Task details**: ID, description, file paths, parallel markers [P] + - **Execution flow**: Order and dependency requirements + +6. 
Execute implementation following the task plan:
+ - **Phase-by-phase execution**: Complete each phase before moving to the next
+ - **Respect dependencies**: Run sequential tasks in order, parallel tasks [P] can run together
+ - **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
+ - **File-based coordination**: Tasks affecting the same files must run sequentially
+ - **Validation checkpoints**: Verify each phase completion before proceeding
+
+7. Implementation execution rules:
+ - **Setup first**: Initialize project structure, dependencies, configuration
+ - **Tests before code**: If tests are requested, write tests for contracts, entities, and integration scenarios before the corresponding implementation
+ - **Core development**: Implement models, services, CLI commands, endpoints
+ - **Integration work**: Database connections, middleware, logging, external services
+ - **Polish and validation**: Unit tests, performance optimization, documentation
+
+8. Progress tracking and error handling:
+ - Report progress after each completed task
+ - Halt execution if any non-parallel task fails
+ - For parallel tasks [P], continue with successful tasks, report failed ones
+ - Provide clear error messages with context for debugging
+ - Suggest next steps if implementation cannot proceed
+ - **IMPORTANT**: For completed tasks, mark each one off as [X] in the tasks file.
+
+9. Completion validation:
+ - Verify all required tasks are completed
+ - Check that implemented features match the original specification
+ - Validate that tests pass and coverage meets requirements
+ - Confirm the implementation follows the technical plan
+ - Report final status with summary of completed work
+
+Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit:tasks` first to regenerate the task list.
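The "mark the task off as [X]" requirement in step 8 is a single-line edit on tasks.md; a minimal shell sketch (the `mark_task_done` helper name is illustrative and not part of the spec-kit scripts):

```shell
# Mark a task's checkbox as complete in tasks.md, matching by task ID.
# Usage: mark_task_done T001 path/to/tasks.md
mark_task_done() {
  task_id="$1"
  tasks_file="$2"
  # Rewrite "- [ ] T001 ..." to "- [X] T001 ..." on the matching line only.
  sed -i.bak "s/^- \[ \] ${task_id} /- [X] ${task_id} /" "$tasks_file" && rm -f "${tasks_file}.bak"
}
```

With this, `grep -c '^- \[ \]' tasks.md` gives the remaining open-task count used for progress reporting.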
diff --git a/plugin/commands/plan.md b/plugin/commands/plan.md new file mode 100644 index 0000000000..50e53baab5 --- /dev/null +++ b/plugin/commands/plan.md @@ -0,0 +1,95 @@ +--- +description: Execute the implementation planning workflow using the plan template to generate design artifacts. +handoffs: + - label: Create Tasks + agent: speckit:tasks + prompt: Break the plan into tasks + send: true + - label: Create Checklist + agent: speckit:checklist + prompt: Create a checklist for the following domain... +scripts: + sh: .specify/scripts/bash/setup-plan.sh --json + ps: .specify/scripts/powershell/setup-plan.ps1 -Json +agent_scripts: + sh: .specify/scripts/bash/update-agent-context.sh __AGENT__ + ps: .specify/scripts/powershell/update-agent-context.ps1 -AgentType __AGENT__ +--- + +## User Input + +```text +$ARGUMENTS +``` + +You **MUST** consider the user input before proceeding (if not empty). + +## Outline + +1. **Setup**: Run `{SCRIPT}` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot"). + +2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied). + +3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to: + - Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION") + - Fill Constitution Check section from constitution + - Evaluate gates (ERROR if violations unjustified) + - Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION) + - Phase 1: Generate data-model.md, contracts/, quickstart.md + - Phase 1: Update agent context by running the agent script + - Re-evaluate Constitution Check post-design + +4. **Stop and report**: Command ends after Phase 2 planning. Report branch, IMPL_PLAN path, and generated artifacts. + +## Phases + +### Phase 0: Outline & Research + +1. 
**Extract unknowns from Technical Context** above: + - For each NEEDS CLARIFICATION -> research task + - For each dependency -> best practices task + - For each integration -> patterns task + +2. **Generate and dispatch research agents**: + + ```text + For each unknown in Technical Context: + Task: "Research {unknown} for {feature context}" + For each technology choice: + Task: "Find best practices for {tech} in {domain}" + ``` + +3. **Consolidate findings** in `research.md` using format: + - Decision: [what was chosen] + - Rationale: [why chosen] + - Alternatives considered: [what else evaluated] + +**Output**: research.md with all NEEDS CLARIFICATION resolved + +### Phase 1: Design & Contracts + +**Prerequisites:** `research.md` complete + +1. **Extract entities from feature spec** -> `data-model.md`: + - Entity name, fields, relationships + - Validation rules from requirements + - State transitions if applicable + +2. **Generate API contracts** from functional requirements: + - For each user action -> endpoint + - Use standard REST/GraphQL patterns + - Output OpenAPI/GraphQL schema to `/contracts/` + +3. **Agent context update**: + - Run `{AGENT_SCRIPT}` + - These scripts detect which AI agent is in use + - Update the appropriate agent-specific context file + - Add only new technology from current plan + - Preserve manual additions between markers + +**Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file + +## Key rules + +- Use absolute paths +- ERROR on gate failures or unresolved clarifications diff --git a/plugin/commands/specify.md b/plugin/commands/specify.md new file mode 100644 index 0000000000..41e6f6db34 --- /dev/null +++ b/plugin/commands/specify.md @@ -0,0 +1,168 @@ +--- +description: Create or update the feature specification from a natural language feature description. +handoffs: + - label: Build Technical Plan + agent: speckit:plan + prompt: Create a plan for the spec. I am building with... 
+ - label: Clarify Spec Requirements
+   agent: speckit:clarify
+   prompt: Clarify specification requirements
+   send: true
+scripts:
+ sh: .specify/scripts/bash/create-new-feature.sh --json "{ARGS}"
+ ps: .specify/scripts/powershell/create-new-feature.ps1 -Json "{ARGS}"
+---
+
+## User Input
+
+```text
+$ARGUMENTS
+```
+
+You **MUST** consider the user input before proceeding (if not empty).
+
+## Outline
+
+The text the user typed after `/speckit:specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `{ARGS}` appears literally below. Do not ask the user to repeat it unless they provided an empty command.
+
+Given that feature description, do this:
+
+1. **Generate a concise short name** (2-4 words) for the branch:
+ - Analyze the feature description and extract the most meaningful keywords
+ - Create a 2-4 word short name that captures the essence of the feature
+ - Use action-noun format when possible (e.g., "add-user-auth", "fix-payment-bug")
+ - Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)
+ - Keep it concise but descriptive enough to understand the feature at a glance
+ - Examples:
+ - "I want to add user authentication" -> "user-auth"
+ - "Implement OAuth2 integration for the API" -> "oauth2-api-integration"
+ - "Create a dashboard for analytics" -> "analytics-dashboard"
+ - "Fix payment processing timeout bug" -> "fix-payment-timeout"
+
+2. **Check for existing branches before creating a new one**:
+
+ a. First, fetch all remote branches to ensure we have the latest information:
+
+ ```bash
+ git fetch --all --prune
+ ```
+
+ b. Find the highest feature number across all sources for the short-name:
+ - Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-<short-name>$'`
+ - Local branches: `git branch | grep -E '^[* ]*[0-9]+-<short-name>$'`
+ - Specs directories: Check for directories matching `.specify/specs/[0-9]+-<short-name>`
+
+ c.
Determine the next available number: + - Extract all numbers from all three sources + - Find the highest number N + - Use N+1 for the new branch number + + d. Run the script `{SCRIPT}` with the calculated number and short-name: + - Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description + - Bash example: `{SCRIPT} --json --number 5 --short-name "user-auth" "Add user authentication"` + - PowerShell example: `{SCRIPT} -Json -Number 5 -ShortName "user-auth" "Add user authentication"` + + **IMPORTANT**: + - Check all three sources (remote branches, local branches, specs directories) to find the highest number + - Only match branches/directories with the exact short-name pattern + - If no existing branches/directories found with this short-name, start with number 1 + - You must only ever run this script once per feature + - The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for + - The JSON output will contain BRANCH_NAME and SPEC_FILE paths + - For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot") + +3. Load `.specify/templates/spec-template.md` to understand required sections. + +4. Follow this execution flow: + + 1. Parse user description from Input + If empty: ERROR "No feature description provided" + 2. Extract key concepts from description + Identify: actors, actions, data, constraints + 3. For unclear aspects: + - Make informed guesses based on context and industry standards + - Only mark with [NEEDS CLARIFICATION: specific question] if: + - The choice significantly impacts feature scope or user experience + - Multiple reasonable interpretations exist with different implications + - No reasonable default exists + - **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total** + - Prioritize clarifications by impact: scope > security/privacy > user experience > technical details + 4. 
Fill User Scenarios & Testing section + If no clear user flow: ERROR "Cannot determine user scenarios" + 5. Generate Functional Requirements + Each requirement must be testable + Use reasonable defaults for unspecified details (document assumptions in Assumptions section) + 6. Define Success Criteria + Create measurable, technology-agnostic outcomes + Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion) + Each criterion must be verifiable without implementation details + 7. Identify Key Entities (if data involved) + 8. Return: SUCCESS (spec ready for planning) + +5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings. + +6. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria: + + a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with validation items for content quality, requirement completeness, and feature readiness. + + b. **Run Validation Check**: Review the spec against each checklist item: + - For each item, determine if it passes or fails + - Document specific issues found (quote relevant spec sections) + + c. **Handle Validation Results**: + + - **If all items pass**: Mark checklist complete and proceed + + - **If items fail (excluding [NEEDS CLARIFICATION])**: + 1. List the failing items and specific issues + 2. Update the spec to address each issue + 3. Re-run validation until all items pass (max 3 iterations) + 4. If still failing after 3 iterations, document remaining issues in checklist notes and warn user + + - **If [NEEDS CLARIFICATION] markers remain**: + 1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec + 2. 
**LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
+ 3. For each clarification needed (max 3), present options to user in table format
+ 4. Wait for user to respond with their choices
+ 5. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
+ 6. Re-run validation after all clarifications are resolved
+
+ d. **Update Checklist**: After each validation iteration, update the checklist file with current pass/fail status
+
+7. Report completion with branch name, spec file path, checklist results, and readiness for the next phase (`/speckit:clarify` or `/speckit:plan`).
+
+**NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.
+
+## Quick Guidelines
+
+- Focus on **WHAT** users need and **WHY**.
+- Avoid HOW to implement (no tech stack, APIs, code structure).
+- Written for business stakeholders, not developers.
+- DO NOT create any checklists that are embedded in the spec. That will be a separate command.
+
+### Section Requirements
+
+- **Mandatory sections**: Must be completed for every feature
+- **Optional sections**: Include only when relevant to the feature
+- When a section doesn't apply, remove it entirely (don't leave as "N/A")
+
+### For AI Generation
+
+When creating this spec from a user prompt:
+
+1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
+2. **Document assumptions**: Record reasonable defaults in the Assumptions section
+3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use only for critical decisions
+4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
+5.
**Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item + +### Success Criteria Guidelines + +Success criteria must be: + +1. **Measurable**: Include specific metrics (time, percentage, count, rate) +2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools +3. **User-focused**: Describe outcomes from user/business perspective, not system internals +4. **Verifiable**: Can be tested/validated without knowing implementation details diff --git a/plugin/commands/tasks.md b/plugin/commands/tasks.md new file mode 100644 index 0000000000..687aff4646 --- /dev/null +++ b/plugin/commands/tasks.md @@ -0,0 +1,140 @@ +--- +description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts. +handoffs: + - label: Analyze For Consistency + agent: speckit:analyze + prompt: Run a project analysis for consistency + send: true + - label: Implement Project + agent: speckit:implement + prompt: Start the implementation in phases + send: true +scripts: + sh: .specify/scripts/bash/check-prerequisites.sh --json + ps: .specify/scripts/powershell/check-prerequisites.ps1 -Json +--- + +## User Input + +```text +$ARGUMENTS +``` + +You **MUST** consider the user input before proceeding (if not empty). + +## Outline + +1. **Setup**: Run `{SCRIPT}` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot"). + +2. **Load design documents**: Read from FEATURE_DIR: + - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities) + - **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios) + - Note: Not all projects have all documents. Generate tasks based on what's available. + +3. 
**Execute task generation workflow**: + - Load plan.md and extract tech stack, libraries, project structure + - Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.) + - If data-model.md exists: Extract entities and map to user stories + - If contracts/ exists: Map endpoints to user stories + - If research.md exists: Extract decisions for setup tasks + - Generate tasks organized by user story (see Task Generation Rules below) + - Generate dependency graph showing user story completion order + - Create parallel execution examples per user story + - Validate task completeness (each user story has all needed tasks, independently testable) + +4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as structure, fill with: + - Correct feature name from plan.md + - Phase 1: Setup tasks (project initialization) + - Phase 2: Foundational tasks (blocking prerequisites for all user stories) + - Phase 3+: One phase per user story (in priority order from spec.md) + - Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks + - Final Phase: Polish & cross-cutting concerns + - All tasks must follow the strict checklist format (see Task Generation Rules below) + - Clear file paths for each task + - Dependencies section showing story completion order + - Parallel execution examples per story + - Implementation strategy section (MVP first, incremental delivery) + +5. **Report**: Output path to generated tasks.md and summary: + - Total task count + - Task count per user story + - Parallel opportunities identified + - Independent test criteria for each story + - Suggested MVP scope (typically just User Story 1) + - Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths) + +Context for task generation: {ARGS} + +The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context. 
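The format validation called for in step 5 ("Confirm ALL tasks follow the checklist format") can be spot-checked mechanically; a hedged sketch, assuming the three-digit task IDs and bracketed `[P]`/`[USn]` labels shown in the Task Generation Rules that follow are the only allowed prefix elements:

```shell
# Print any checkbox line in tasks.md that violates the required format:
#   - [ ] T001 [P] [US1] Description with file path
check_task_format() {
  grep -nE '^- \[[ xX]\] ' "$1" |
    grep -vE '^[0-9]+:- \[[ xX]\] T[0-9]{3}( \[P\])?( \[US[0-9]+\])? \S'
}
```

`check_task_format tasks.md` prints the offending lines with line numbers; an empty result means every task line matches.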
+ +## Task Generation Rules + +**CRITICAL**: Tasks MUST be organized by user story to enable independent implementation and testing. + +**Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if user requests TDD approach. + +### Checklist Format (REQUIRED) + +Every task MUST strictly follow this format: + +```text +- [ ] [TaskID] [P?] [Story?] Description with file path +``` + +**Format Components**: + +1. **Checkbox**: ALWAYS start with `- [ ]` (markdown checkbox) +2. **Task ID**: Sequential number (T001, T002, T003...) in execution order +3. **[P] marker**: Include ONLY if task is parallelizable (different files, no dependencies on incomplete tasks) +4. **[Story] label**: REQUIRED for user story phase tasks only + - Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md) + - Setup phase: NO story label + - Foundational phase: NO story label + - User Story phases: MUST have story label + - Polish phase: NO story label +5. **Description**: Clear action with exact file path + +**Examples**: + +- CORRECT: `- [ ] T001 Create project structure per implementation plan` +- CORRECT: `- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py` +- CORRECT: `- [ ] T012 [P] [US1] Create User model in src/models/user.py` +- CORRECT: `- [ ] T014 [US1] Implement UserService in src/services/user_service.py` +- WRONG: `- [ ] Create User model` (missing ID and Story label) +- WRONG: `T001 [US1] Create model` (missing checkbox) +- WRONG: `- [ ] [US1] Create User model` (missing Task ID) +- WRONG: `- [ ] T001 [US1] Create model` (missing file path) + +### Task Organization + +1. **From User Stories (spec.md)** - PRIMARY ORGANIZATION: + - Each user story (P1, P2, P3...) 
gets its own phase + - Map all related components to their story: + - Models needed for that story + - Services needed for that story + - Endpoints/UI needed for that story + - If tests requested: Tests specific to that story + - Mark story dependencies (most stories should be independent) + +2. **From Contracts**: + - Map each contract/endpoint -> to the user story it serves + - If tests requested: Each contract -> contract test task [P] before implementation in that story's phase + +3. **From Data Model**: + - Map each entity to the user story(ies) that need it + - If entity serves multiple stories: Put in earliest story or Setup phase + - Relationships -> service layer tasks in appropriate story phase + +4. **From Setup/Infrastructure**: + - Shared infrastructure -> Setup phase (Phase 1) + - Foundational/blocking tasks -> Foundational phase (Phase 2) + - Story-specific setup -> within that story's phase + +### Phase Structure + +- **Phase 1**: Setup (project initialization) +- **Phase 2**: Foundational (blocking prerequisites - MUST complete before user stories) +- **Phase 3+**: User Stories in priority order (P1, P2, P3...) + - Within each story: Tests (if requested) -> Models -> Services -> Endpoints -> Integration + - Each phase should be a complete, independently testable increment +- **Final Phase**: Polish & Cross-Cutting Concerns diff --git a/plugin/commands/taskstoissues.md b/plugin/commands/taskstoissues.md new file mode 100644 index 0000000000..fe4db195a9 --- /dev/null +++ b/plugin/commands/taskstoissues.md @@ -0,0 +1,33 @@ +--- +description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts. 
+tools: ['github/github-mcp-server/issue_write'] +scripts: + sh: .specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks + ps: .specify/scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks +--- + +## User Input + +```text +$ARGUMENTS +``` + +You **MUST** consider the user input before proceeding (if not empty). + +## Outline + +1. Run `{SCRIPT}` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot"). + +2. From the executed script, extract the path to **tasks**. + +3. Get the Git remote by running: + + ```bash + git config --get remote.origin.url + ``` + + > **CAUTION**: ONLY PROCEED TO NEXT STEPS IF THE REMOTE IS A GITHUB URL + +4. For each task in the list, use the GitHub MCP server to create a new issue in the repository that is representative of the Git remote. + + > **CAUTION**: UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL diff --git a/plugin/skills/spec-driven-development/SKILL.md b/plugin/skills/spec-driven-development/SKILL.md new file mode 100644 index 0000000000..96854be9a7 --- /dev/null +++ b/plugin/skills/spec-driven-development/SKILL.md @@ -0,0 +1,158 @@ +--- +name: spec-driven-development +description: | + Spec-Driven Development (SDD) methodology guidance. 
Use this skill when:
+ - User mentions "spec-kit", "speckit", "SDD", "spec-driven", or "specification-first"
+ - User asks about writing specifications, feature planning, or systematic development
+ - User wants to implement a feature following a structured workflow
+ - User needs help with PRDs, technical plans, or task breakdowns
+ - Project has a `.specify/` directory structure
+version: 1.0.0
+---
+
+# Spec-Driven Development (SDD)
+
+Spec-Driven Development is a methodology where **specifications become the primary source of truth** and directly generate working implementations. Code serves specifications, not the other way around.
+
+## Core Philosophy
+
+Traditional development: Specs -> Code (specs get outdated)
+SDD approach: Specs -> Plans -> Tasks -> Code (specs remain source of truth)
+
+## The SDD Workflow
+
+The spec-kit plugin provides nine commands that form a complete development workflow:
+
+### 1. `/speckit:constitution` - Establish Project Principles
+- Creates/updates `.specify/memory/constitution.md`
+- Defines non-negotiable project principles
+- All downstream artifacts must align with the constitution
+
+### 2. `/speckit:specify` - Generate Feature Specification
+- Takes a natural language feature description
+- Creates a structured PRD in `.specify/specs/[feature]/spec.md`
+- Focuses on WHAT and WHY (not HOW)
+- Generates user stories, requirements, and success criteria
+- Creates feature branch automatically
+
+### 3. `/speckit:clarify` - Resolve Ambiguities (Optional)
+- Asks up to 5 targeted clarification questions
+- Reduces ambiguity before planning
+- Records answers directly in the spec
+
+### 4. `/speckit:plan` - Create Implementation Plan
+- Generates technical plan with architecture decisions
+- Creates supporting artifacts:
+ - `research.md` - Technology decisions
+ - `data-model.md` - Entity definitions
+ - `contracts/` - API specifications
+ - `quickstart.md` - Validation scenarios
+
+### 5.
`/speckit:tasks` - Break Down Into Tasks +- Generates actionable, dependency-ordered tasks +- Organizes by user story for independent implementation +- Uses strict checklist format with IDs and file paths +- Marks parallelizable tasks with [P] + +### 6. `/speckit:analyze` - Consistency Analysis (Optional) +- Cross-artifact consistency check +- Validates spec/plan/tasks alignment +- Identifies gaps, duplications, ambiguities +- Read-only - produces analysis report + +### 7. `/speckit:checklist` - Generate Quality Checklists +- Creates "unit tests for requirements" +- Validates requirement quality (not implementation) +- Categories: completeness, clarity, consistency, coverage + +### 8. `/speckit:implement` - Execute Implementation +- Processes tasks in order +- Validates checklists before starting +- Marks tasks complete as they're done +- Creates necessary ignore files + +### 9. `/speckit:taskstoissues` - Create GitHub Issues +- Converts tasks to GitHub issues +- Respects repository remote URL + +## Project Structure + +Projects using spec-kit have this structure: + +``` +project/ +├── .specify/ +│ ├── memory/ +│ │ └── constitution.md # Project principles +│ ├── scripts/ +│ │ ├── bash/ # Bash helper scripts +│ │ └── powershell/ # PowerShell helper scripts +│ ├── templates/ +│ │ ├── spec-template.md +│ │ ├── plan-template.md +│ │ ├── tasks-template.md +│ │ └── checklist-template.md +│ └── specs/ +│ └── 001-feature-name/ +│ ├── spec.md # Feature specification +│ ├── plan.md # Implementation plan +│ ├── tasks.md # Task breakdown +│ ├── data-model.md # Data schemas +│ ├── research.md # Tech decisions +│ ├── contracts/ # API specs +│ └── checklists/ # Quality checklists +└── CLAUDE.md # Agent guidance +``` + +## Key Principles + +### Specifications Focus on WHAT, Not HOW +- No technology/framework mentions in specs +- Written for business stakeholders +- Success criteria are measurable and technology-agnostic + +### Tasks Are Immediately Executable +- Each task has an 
ID (T001, T002...) +- Includes exact file paths +- Parallelizable tasks marked with [P] +- Story labels link to user stories [US1], [US2] + +### Checklists Validate Requirements, Not Implementation +- Question format: "Is X specified?" not "Does X work?" +- Focus on completeness, clarity, consistency +- Include traceability to spec sections + +### Constitution is Non-Negotiable +- All artifacts must align with constitution principles +- Constitution violations are always CRITICAL +- Changes require explicit constitution update + +## Detecting SDD Projects + +A project uses spec-kit if it has: +- `.specify/` directory at the project root +- `.specify/memory/constitution.md` +- `.specify/templates/` with spec/plan/task templates +- Feature specs in `.specify/specs/[NNN-feature-name]/` + +## Recommended Workflow + +For new features: +1. `/speckit:constitution` (once per project) +2. `/speckit:specify ` +3. `/speckit:clarify` (optional but recommended) +4. `/speckit:plan ` +5. `/speckit:tasks` +6. `/speckit:analyze` (optional quality check) +7. `/speckit:checklist ` (as needed) +8. `/speckit:implement` + +## Best Practices + +1. **Keep specs focused**: One feature per specification +2. **Be specific in success criteria**: Use measurable outcomes +3. **Document assumptions**: Record reasonable defaults +4. **Limit clarifications**: Max 3 NEEDS CLARIFICATION markers +5. **Organize by user story**: Tasks should enable independent testing +6. **Mark parallel tasks**: Use [P] for tasks that can run concurrently +7. **Validate before implementing**: Run analyze and checklists first
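The markers listed under "Detecting SDD Projects" translate directly into a shell predicate; a minimal sketch, assuming the layout from the Project Structure diagram above:

```shell
# Return 0 (success) if the current directory looks like a spec-kit (SDD) project.
is_sdd_project() {
  [ -d .specify ] &&
    [ -f .specify/memory/constitution.md ] &&
    [ -d .specify/templates ]
}
```

`is_sdd_project && echo "spec-kit project detected"` is enough for a quick check; individual feature specs can then be enumerated with `ls .specify/specs/`.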