TL;DR: VR-SDD solved the "green specs, wrong product" problem by placing persona outcomes above the spec layer. Intent-Driven Development (IDD) solves the problem that comes next: once you've shipped the right feature, can you prove it still satisfies the original intent six months later? IDD makes intent schema-validated, traceable, and evidence-producing — turning the chain from persona outcome to shipped code into something a CI gate can actually enforce.
Where VR-SDD Left Us
If you read the VR-SDD post, you know the problem it solved: specs and tests can all be green while the product is still off. The root cause was that SDD tools were treating the spec as the top of the stack, when really persona outcomes and requirements belong above it.
VR-SDD fixed that by defining a six-step loop: persona outcomes → refined features → decomposed requirements → OpenSpec change proposals → contracts and tests → code → docs. A stack with "why" at the top, not just "what."
That was a genuine improvement. Teams using it have fewer "we built the wrong thing" surprises. The backlog stays connected to the code. AI-assisted development has a grounded source of truth to work from.
But there's a problem that VR-SDD doesn't fully address — and it's a subtle one.
How do you know the intent is still being satisfied after the code has been shipped, modified, and extended by six more sprints and three different AI agents?
Persona outcomes defined in prose are useful for planning. They are not enforceable by a CI gate. They don't generate evidence. They can drift from the code that supposedly implements them just as easily as markdown docs can drift from a spec. You've moved the definition of "done" upstream, which is good. But you haven't made it machine-checkable, which means you still can't close the loop.
That's what Intent-Driven Development is about.
The Principle: OpenSpec Owns the Intent. SpecFact Owns the Evidence.
Before going into what IDD involves, it helps to understand its core architectural principle, because everything else follows from it:
OpenSpec owns the intent. SpecFact owns the evidence.
This is not just a nice line. It's a division of responsibility that keeps the intent format tool-agnostic (any team using OpenSpec, Spec-Kit, or another SDD tool can participate) while giving SpecFact a clear, non-duplicative role: validate that intent was satisfied, generate structured, hash-anchored proof that it was, and block CI if it wasn't.
The intent authoring layer is deliberately left to OpenSpec and other SDD tools. SpecFact doesn't want to own the format for how teams write business outcomes. It wants to own the question: did the shipped code actually satisfy them, and where's the evidence?
That's a different problem, and it's the harder one.
What IDD Actually Adds
Intent-Driven Development formalises three things that VR-SDD treats as prose.
1. Schema-Validated Intent
Instead of persona outcomes as free-text backlog items, IDD defines them as structured, schema-validated artifacts. A BusinessOutcome has an ID, a measurable success criterion, a linked persona, and a validation method. A BusinessRule follows Given/When/Then format — machine-parseable, linkable to executable tests. An ArchitecturalConstraint is a typed record that can bind to fitness functions running in CI.
Why does schema matter? Because prose drifts. A schema-validated artifact can be checked. You can write a test that says "every spec change must reference at least one BusinessOutcome." You cannot write that test against a sentence in a Confluence page.
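As a sketch of what "machine-checkable" means here, the function below rejects any intent trace that lacks a BusinessOutcome reference or a well-formed Given/When/Then rule. The field names mirror the Intent Trace example in this post, but the function itself is illustrative — it is not SpecFact's actual API.

```python
# Illustrative only: a minimal "every change must reference a BusinessOutcome"
# check over a parsed intent trace. Field names follow the Intent Trace
# example in this post; nothing here is SpecFact's real API.

def validate_intent_trace(trace: dict) -> list[str]:
    """Return a list of violations; an empty list means the gate passes."""
    violations = []
    outcomes = trace.get("business_outcomes", [])
    if not outcomes:
        violations.append("change references no BusinessOutcome")
    outcome_ids = {o.get("id") for o in outcomes}
    for rule in trace.get("business_rules", []):
        # Every BusinessRule must link back to a declared outcome...
        if rule.get("outcome_ref") not in outcome_ids:
            violations.append(f"{rule.get('id')}: dangling outcome_ref")
        # ...and must be expressed in parseable Given/When/Then form.
        for key in ("given", "when", "then"):
            if not rule.get(key):
                violations.append(f"{rule.get('id')}: missing '{key}' clause")
    return violations

trace = {
    "business_outcomes": [{"id": "BO-007", "description": "Reduce refinement time"}],
    "business_rules": [
        {"id": "BR-012", "outcome_ref": "BO-007",
         "given": "unrefined items", "when": "ceremony runs", "then": "DoR passes"},
    ],
}
assert validate_intent_trace(trace) == []
assert validate_intent_trace({}) == ["change references no BusinessOutcome"]
```

The point is not the twenty lines of Python — it's that no equivalent check can run against a prose backlog item.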
```yaml
# Intent Trace section in an OpenSpec change proposal
intent_trace:
  business_outcomes:
    - id: "BO-007"
      description: "Reduce backlog refinement time by 40%"
      persona: "Engineering Lead"
      success_metric: "Refinement ceremony duration"
  business_rules:
    - id: "BR-012"
      outcome_ref: "BO-007"
      given: "A sprint backlog contains unrefined items"
      when: "The lead runs specfact backlog ceremony refinement"
      then: "All items pass Definition of Ready within one session"
      requirement_refs:
        - "REQ-089"
```

2. End-to-End Traceability
VR-SDD establishes that specs should be derived from requirements. IDD enforces this mechanically. The traceability invariant requires that every shipped feature traces backwards to at least one BusinessOutcome and forwards through BusinessRules, ArchitecturalConstraints, specs, contracts, code, and tests. A break anywhere in that chain blocks the publish gate.
This sounds like overhead until you've experienced the alternative: a codebase six months old where nobody can tell you which feature implements which requirement, or why a particular architectural decision was made. IDD doesn't just document the chain — it validates that the chain exists and is unbroken, every time CI runs.
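To make "a break anywhere in that chain blocks the publish gate" concrete, here is a minimal sketch, assuming the trace links are available as an in-memory adjacency map. The IDs and data layout are hypothetical, chosen only to illustrate the walk.

```python
# Hypothetical sketch: walk the trace graph from a BusinessOutcome down to
# executable tests, and fail if any path stops short. Not SpecFact's data model.

TRACE = {
    "BO-007": ["BR-012"],          # outcome -> business rules
    "BR-012": ["SPEC-034"],        # rule -> spec
    "SPEC-034": ["CONTRACT-018"],  # spec -> contract
    "CONTRACT-018": ["TEST-091"],  # contract -> test
    "TEST-091": [],                # leaf: executable evidence
}

def chain_unbroken(node: str) -> bool:
    """True iff every path from `node` reaches the TEST layer."""
    if node.split("-")[0] == "TEST":
        return True
    children = TRACE.get(node, [])
    if not children:
        return False  # chain breaks here -> publish gate blocks
    return all(chain_unbroken(child) for child in children)

assert chain_unbroken("BO-007")
TRACE["SPEC-034"] = []  # simulate a spec that lost its contract
assert not chain_unbroken("BO-007")
```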
```
Persona Outcomes → Business Rules (G/W/T) → Architectural Constraints
       ↓                    ↓                         ↓
 [Requirements]      [Requirements +            [Architecture]
                      Architecture]
       ↓                    ↓                         ↓
OpenSpec Specs  →   Contracts + Tests   →   Code (AI-generated)
       ↓                    ↓                         ↓
Evidence Collection  →   CI Gate    →    Audit Trail
```

3. Machine-Readable Evidence
This is the piece nobody else in the SDD space has built yet. IDD introduces evidence JSON envelopes — structured records that capture validation timestamp, tool version, verdict (pass/fail/error), and a hash of the artifact being validated. These aren't logs. They're portable, schema-validated proof artifacts that plug directly into CI.
```json
{
  "evidence_version": "1.0.0",
  "timestamp": "2026-03-05T14:30:00Z",
  "artifact": {
    "type": "BusinessRule",
    "id": "BR-012",
    "hash": "sha256:abc123..."
  },
  "validation": {
    "verdict": "pass",
    "checks": [
      {"name": "schema_conformance", "result": "pass"},
      {"name": "gwt_parseable", "result": "pass"},
      {"name": "outcome_linked", "result": "pass", "outcome_id": "BO-007"},
      {"name": "test_bound", "result": "pass", "test_id": "TEST-091"}
    ]
  },
  "trace": {
    "upstream": ["BO-007"],
    "downstream": ["SPEC-034", "CONTRACT-018", "TEST-091"]
  }
}
```

Why is this novel? NIST's OSCAL standard handles machine-readable evidence for security compliance (SOC 2, FedRAMP). Nothing equivalent exists for functional requirements traceability. IDD fills that gap — and it matters increasingly for teams where 95%+ of code is AI-generated and auditors want to know how any given line connects to a business requirement.
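A sketch of how such an envelope could be produced — hashing the artifact body so the evidence is tamper-evident. The field names follow the example above; the builder itself is illustrative, not a real SpecFact function.

```python
import hashlib
from datetime import datetime, timezone

# Illustrative envelope builder: hashes the artifact content so a later reader
# can detect tampering. Mirrors the envelope fields shown above; the function
# name and signature are assumptions, not SpecFact's actual API.

def build_envelope(artifact_type: str, artifact_id: str,
                   body: str, checks: list[dict]) -> dict:
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    verdict = "pass" if all(c["result"] == "pass" for c in checks) else "fail"
    return {
        "evidence_version": "1.0.0",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "artifact": {"type": artifact_type, "id": artifact_id,
                     "hash": f"sha256:{digest}"},
        "validation": {"verdict": verdict, "checks": checks},
    }

env = build_envelope(
    "BusinessRule", "BR-012",
    body="GIVEN unrefined items WHEN ceremony runs THEN DoR passes",
    checks=[{"name": "schema_conformance", "result": "pass"},
            {"name": "gwt_parseable", "result": "pass"}],
)
assert env["validation"]["verdict"] == "pass"
assert env["artifact"]["hash"].startswith("sha256:")
```

The hash is what distinguishes this from a log line: if the BusinessRule text changes after validation, the envelope no longer matches it.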
The Nine-Stage Modern Agile Cycle
IDD maps onto a nine-stage cycle that extends VR-SDD's six-step loop. The first three stages are new; the rest are the VR-SDD stages with intent traceability wired through them.
| Stage | What happens | SpecFact module |
|---|---|---|
| 1. Persona outcomes | BusinessOutcome schema capture | Requirements Module |
| 2. Requirements decomposition | Business Rules (G/W/T) + ArchitecturalConstraints | Requirements + Architecture Modules |
| 3. Architecture derivation | ADRs with constraint linkage, fitness functions | Architecture Module |
| 4. Spec generation | OpenSpec/Spec-Kit proposals with Intent Trace | Bridge adapters |
| 5. Contract enforcement | Runtime contracts + symbolic execution | Core enforce |
| 6. Code generation | AI IDE with intent context, prompt-validate-feedback loop | Agent Skills Module |
| 7. Evidence collection | JSON envelopes for every validation result | Governance Module |
| 8. CI gate | Deterministic BLOCK/ALLOW with evidence references | Core enforce + Governance |
| 9. Documentation + audit | Living docs with full traceability chain | Core export + Governance |
The first three stages are where most of the new tooling work lives. Stages 4–9 extend and wire together capabilities SpecFact already has.
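Stage 8's gate can be pictured as a pure function over the collected envelopes: same inputs, same BLOCK/ALLOW decision, every time. This is a sketch under assumed envelope shapes, not the real specfact enforce implementation.

```python
# Sketch of a stage-8 CI gate: deterministic BLOCK/ALLOW over evidence
# envelopes. Envelope shape follows the example earlier in the post;
# the function and its output shape are illustrative assumptions.

def ci_gate(envelopes: list[dict]) -> dict:
    failing = [e["artifact"]["id"] for e in envelopes
               if e["validation"]["verdict"] != "pass"]
    return {
        "decision": "BLOCK" if failing else "ALLOW",
        "evidence_refs": [e["artifact"]["id"] for e in envelopes],
        "failures": failing,
    }

envelopes = [
    {"artifact": {"id": "BR-012"}, "validation": {"verdict": "pass"}},
    {"artifact": {"id": "AC-003"}, "validation": {"verdict": "fail"}},
]
result = ci_gate(envelopes)
assert result["decision"] == "BLOCK"
assert result["failures"] == ["AC-003"]
```

Because the decision carries evidence references, a blocked build points straight at the artifact that broke the chain.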
Why the AI IDE Context Makes This Urgent
Here's the uncomfortable reality of 2026: most of the code being written is AI-generated, and AI agents are — in SpecFact's terminology — uncontrolled. You cannot guarantee that Cursor, Claude Code, Copilot, or any other agent will honour architectural constraints it wasn't explicitly given. You cannot guarantee it will stay within the intent of the requirement it was asked to implement. It will produce plausible-looking code that may subtly violate both.
The IDD answer to this is the prompt-validate-feedback loop:
- Prompt phase: SpecFact generates structured prompts containing the BusinessOutcome, BusinessRules, and ArchitecturalConstraints as machine-readable context — not prose, not a Jira ticket title.
- Validate phase: After the agent produces output, SpecFact validates it against schemas, contracts, and traceability requirements. Deterministically. No AI judgment involved.
- Feedback phase: Validation gaps become input for specfact generate fix-prompt, and the loop repeats.
This is robust against agent unpredictability precisely because validation is deterministic. The loop handles two failure modes simultaneously: "wrong implementation" (contracts catch regressions) and "wrong feature" (traceability validation catches intent drift). No competing SDD tool addresses both.
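The loop above can be sketched as a driver that keeps re-prompting until deterministic validation passes or an attempt budget runs out. Everything here — the validator, the toy agent, the prompt wiring — is hypothetical, standing in for the real schema, contract, and traceability checks.

```python
# Hypothetical prompt-validate-feedback driver. The agent is any uncontrolled
# code generator; only the validator is trusted, and it is deterministic.

def validate(code: str) -> list[str]:
    """Toy stand-in for schema/contract/trace validation."""
    gaps = []
    if "BO-007" not in code:
        gaps.append("missing trace comment for BO-007")
    if "def " not in code:
        gaps.append("no implementation found")
    return gaps

def run_loop(agent, base_prompt: str, max_attempts: int = 3):
    prompt, code, gaps = base_prompt, "", ["not started"]
    for _ in range(max_attempts):
        code = agent(prompt)           # prompt phase (uncontrolled)
        gaps = validate(code)          # validate phase (deterministic)
        if not gaps:
            break
        # Feedback phase: gaps become the next fix-prompt.
        prompt = base_prompt + "\nFix these gaps:\n" + "\n".join(gaps)
    return code, gaps

# A toy agent that only complies once the fix-prompt names the gap.
def toy_agent(prompt: str) -> str:
    if "missing trace comment" in prompt:
        return "# trace: BO-007\ndef refine(): ...\n"
    return "def refine(): ...\n"

code, gaps = run_loop(toy_agent, "Implement backlog refinement (BR-012)")
assert gaps == []  # converged on the second attempt
```

Whatever the agent does, the exit condition is owned by the validator — which is the whole point.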
The Competitive Landscape: A Crowded Space With a Clear Gap
The SDD ecosystem has exploded. GitHub Spec-Kit, Amazon Kiro, Tessl, BMAD, Augment Code Intent — all serious tools, all doing interesting things. They share one characteristic: they focus on the generation pipeline (intent → code). None of them systematically validate that code continues to match intent post-implementation.
- Kiro (Amazon) locks you into its VS Code fork and generates no compliance evidence
- Tessl operates at file-level abstraction with no business outcome tracking or evidence generation
- Spec-Kit excels at greenfield with flat Markdown but has no schema validation or runtime enforcement
- Augment Code offers "Living Specs" but is closed-source and IDE-bound
- BMAD simulates a full agile team with 21 agents — broad but produces no machine-readable evidence
The critical market gap is three-fold: post-implementation validation (nobody runs continuous intent checks against shipped code), structured governance evidence (no developer tool generates machine-readable compliance artifacts as a first-class output), and tool-agnostic intent validation (most tools tie you to a specific IDE or agent).
SpecFact's CLI-first, offline-capable, agent-agnostic position makes it "Dredd for requirements" — the way Dredd validates API implementations against OpenAPI specs, SpecFact validates feature implementations against intent specs, regardless of what generated the code.
What Ships Next
IDD is being built as four marketplace modules, in dependency order:
| Phase | Module | Key capabilities |
|---|---|---|
| 1 | Requirements Module | BusinessOutcome, BusinessRule (G/W/T), RequirementTrace schemas; specfact requirements capture/validate/trace |
| 2 | Architecture Module | ArchitecturalConstraint schema, ADR management, fitness function bindings, derive/validate-coverage/trace |
| 3 | Governance Module | Evidence collection and bundling, full CI gate integration, --intent preset for specfact enforce stage |
| 4 | Agent Skills Module | Intent-capture, requirements-decompose, architecture-derive, trace-validate slash commands for all supported AI agents |
Alongside this, OpenSpec is getting four format changes: a mandatory Intent Trace section in proposals, requirement reference linking in tasks, evidence linking in archives, and JSON Schema validation for Intent Trace sections.
The Bigger Picture
VR-SDD was the insight that specs need something above them. IDD is the engineering that makes "something above them" enforceable.
- SDD → specs are the source of truth
- VR-SDD → persona outcomes sit above specs
- IDD → outcomes are schema-validated, traced, and evidence-producing

Each step doesn't replace the previous one. It makes the previous one honest. VR-SDD didn't replace SDD — it grounded it. IDD doesn't replace VR-SDD — it makes the traceability chain machine-checkable all the way from "what does the user actually need" to "here's cryptographic proof we delivered it."
In a world where most code is AI-generated and the velocity of change is only increasing, that chain matters. Not as a compliance exercise. As the only reliable answer to "did we build the right thing, and can we still prove it was the right thing six months from now?"
That's what Intent-Driven Development is about.