2 points by sgharlow a month ago | 2 comments
  • sgharlow a month ago
    Hi HN! I built this after watching AI assistants confidently ship mocked data, break working contracts, and create the illusion of progress for hours.

    The core insight: AI sessions fail not from bad models, but from missing structure. The DRS (Deployability Rating Score) gives you a single number (0-100) that answers "can I actually ship this?"

    Key components:

    * Contract freezing (no silent interface changes)

    * Mock expiration (30-minute max)

    * Scope limits (5 files, 200 LOC)

    * Time-based convergence gates

    It's MIT licensed.
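
    Very roughly, the gates roll up into the score something like this (illustrative sketch only; the field names and weights below are made up for this comment, not the actual implementation):

        import time

        MOCK_TTL_SECONDS = 30 * 60       # mock expiration: 30-minute max
        MAX_FILES, MAX_LOC = 5, 200      # scope limits

        def drs_score(session):
            """Roll the four gates into a single 0-100 deployability number."""
            score = 100
            if session["contract_changed"]:
                score -= 40              # contract freezing: a frozen interface was edited
            stale = [m for m in session["mocks"]
                     if time.time() - m["created_at"] > MOCK_TTL_SECONDS]
            score -= 15 * len(stale)     # each expired mock drags the score down
            if session["files_touched"] > MAX_FILES or session["loc_changed"] > MAX_LOC:
                score -= 25              # scope limit blown
            if session["minutes_elapsed"] > session["convergence_deadline"]:
                score -= 20              # time-based convergence gate missed
            return max(score, 0)

    The point is that every shortcut the assistant takes shows up in one number you can gate a deploy on.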

    Curious what patterns you've seen in your AI coding sessions.

  • ktg0215 a month ago
    This addresses a real problem. I've seen too many impressive AI demos that fell apart when it came time to ship to production. The "30-minute mock timeout" is a clever forcing function - it's easy to let mocks linger forever.

    The DRS scoring could be useful for teams struggling to answer "is this ready to deploy?" Currently trying this out with my own Claude Code workflow.

    • sgharlow a month ago
      Great. Let me know how it goes and if you have any suggested improvements.