1 point by dnielsen103 14 hours ago | 1 comment
  • dnielsen103 14 hours ago
    We open-sourced Speclint today. It's a linter for the specs you feed to AI coding agents.

    The premise is simple: AI agents build exactly what the spec says, not what you meant. "Make the dashboard faster" produces different code every time. "Dashboard P95 load time under 800ms, measured via Lighthouse, no regressions on existing tests" produces the right code on the first try. The model isn't the bottleneck — the spec is.

    Speclint scores specs across 6 dimensions:

    - Measurable outcome (20 pts) — does the spec define success you can observe?
    - Testable criteria (25 pts) — are there acceptance criteria with action verbs?
    - Constraints (20 pts) — scope limits, tech assumptions, tags?
    - No vague verbs (20 pts) — "improve X" fails; "reduce X from Y to Z" passes
    - Definition of done (informational) — specific states/thresholds in ACs
    - Verification steps (15 pts) — how will you prove it works?
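    To make the rubric concrete, here is a toy sketch of how two of these checks might be implemented. This is not the actual Speclint engine; the regexes, point values, and names are invented for illustration.

```typescript
// Illustrative sketch only (not the real Speclint engine): score a spec on
// two dimensions using crude, hypothetical regex heuristics.

const VAGUE_VERBS = /\b(improve|enhance|optimize|streamline)\b/i;
const MEASURABLE = /\b\d+(\.\d+)?\s*(ms|s|%|x)\b/i; // a number with a unit

interface DimensionScore {
  dimension: string;
  points: number;
  max: number;
}

function scoreSpec(spec: string): DimensionScore[] {
  return [
    { dimension: "No vague verbs", points: VAGUE_VERBS.test(spec) ? 0 : 20, max: 20 },
    { dimension: "Measurable outcome", points: MEASURABLE.test(spec) ? 20 : 0, max: 20 },
  ];
}

const total = (s: DimensionScore[]) => s.reduce((a, d) => a + d.points, 0);

console.log(total(scoreSpec("Improve dashboard performance")));                      // 0
console.log(total(scoreSpec("Reduce dashboard P95 load time from 2000ms to 800ms"))); // 40
```

    The real engine presumably does more than pattern matching, but the shape — per-dimension points summed into a total — follows the rubric above.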

    Try it now:

        npx @speclint/cli lint "Improve dashboard performance"

    The whole scoring engine is 94 lines of TypeScript. We're not pretending this is rocket science. The value is making spec quality a measurable, enforceable thing in your CI pipeline.
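    For the CI use case, a gate script could read a spec and fail the build below a threshold. This is a hypothetical sketch with an invented stand-in scorer, not Speclint's actual API:

```typescript
// Hypothetical CI gate (stand-in scorer, not the real Speclint API):
// compute a crude 0-100 score and report whether it clears a threshold.

function totalScore(spec: string): number {
  const measurable = /\d/.test(spec) ? 40 : 0;                             // concrete numbers present
  const verifiable = /\b(verify|measure|test)\w*\b/i.test(spec) ? 30 : 0;  // a verification step named
  const noVagueVerbs = /\b(improve|enhance|optimize)\b/i.test(spec) ? 0 : 30;
  return measurable + verifiable + noVagueVerbs;
}

function passesGate(spec: string, threshold = 70): boolean {
  return totalScore(spec) >= threshold;
}

const spec = "Reduce dashboard P95 load time to 800ms, measured via Lighthouse";
console.log(passesGate(spec)); // true: numbers plus a measurement step, no vague verbs
// A CI wrapper could read the spec file and call process.exit(1) when the gate fails.
```

    The non-zero exit code is what actually enforces the gate in a pipeline.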

    What's open source vs. paid: the OSS core (MIT) gives you the scoring engine, the CLI, and all 6 dimensions, run locally with no limits. The free cloud tier is 5 lints/day via the hosted API. Paid cloud plans ($29-79/mo) add unlimited API calls, batch scoring, codebase-aware scoring, and a team dashboard.

    We built this because we run Claude Code and Cursor agents on our own projects and kept getting garbage back from vague specs. The scoring rubric is opinionated; we're looking for feedback on the dimensions and on whether you'd add this to your CI.

    Cloud: https://speclint.ai/