74 points by mooreds 7 days ago | 10 comments
  • shubhamjain 21 hours ago
    > Spec-Driven Development changes this: specifications become executable, directly generating working implementations rather than just guiding them.

    Reminds me of the TDD bandwagon, which was all the rage when I started programming. It took years to slowly die out as people realized how overhyped it really was. Nothing against AI, I love it as a tool, but this "you-don't-need-code" approach shows similar signs: quick wins at first, lots of hype because of those wins, and then reaching a point where even tiny changes become absurdly difficult.

    You need code. You will need it for a long time.

    • MoreQARespect 17 hours ago
      >Reminds me of TDD bandwagon which was all the rage when I started programming. It took years to slowly die out and people realized how overhyped it really was.

      It never really went away. The problem is that there is a dearth of teaching materials telling people how to do it properly:

      * E2E test first (a minimal sketch follows below)

      * Write high-level integration tests which match requirements by default

      * Only start writing lower-level unit tests when a clear and stable API emerges.

      and most people, when they tried it, didn't do that. They mostly did the exact opposite:

      * Write low-level unit tests which match the code by default.

      * Never write higher-level tests (some people don't even think it's possible to write an integration or e2e test with TDD, because "it has to be a unit test").
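      To make the first list concrete, here is a minimal sketch of what "E2E test first" can look like, written before any implementation exists (hypothetical signup flow; pytest and the requests library assumed; none of these names come from a real project):

        # test_signup.py -- written first, so it stays red until the
        # feature actually works end to end against a running dev server.
        import requests

        BASE = "http://localhost:8000"  # assumed local dev server address

        def test_new_user_can_sign_up_and_log_in():
            # Requirement: a visitor can create an account...
            r = requests.post(f"{BASE}/signup",
                              json={"email": "a@b.com", "password": "hunter2"})
            assert r.status_code == 201
            # ...and immediately authenticate with those credentials.
            r = requests.post(f"{BASE}/login",
                              json={"email": "a@b.com", "password": "hunter2"})
            assert r.status_code == 200
            assert "token" in r.json()

      The test talks to a real running server over HTTP and stubs nothing; lower-level unit tests only appear later, once a stable internal API has emerged.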

      • BobbyTables2 16 hours ago
        Not even sure the problem is just education.

        For something complex, it’s kinda hard to write and debug high-level tests when all the lower-level functionality is missing and just stubbed out.

        We don’t expect people to write working software without ever executing it, yet we expect them to write (and complete) all the tests before the actual implementation exists.

        Sure, for trivial things it’s definitely doable. But then extensive tests wouldn’t be needed for those either!

        Imagine someone developing an application where the standard C library was replaced with a stub implementation… That wouldn’t work… Yet TDD says one should be able to do pretty much the same thing…

        • MoreQARespect 3 hours ago
          >Imagine someone developing an application where the standard C library was replaced with a stub implementation… That wouldn’t work… Yet TDD says one should be able to do pretty much the same thing…

          No, it doesn't say you should do that. TDD says red-green-refactor; that is all. You can and should do that with an e2e test or an integration test and a real libc. To do otherwise would be ass-backwards.

          Yours is exactly the unit-testing dogma I was referring to, which people have misunderstood as being part of TDD due to bad education.

      • nchmy 16 hours ago
        Would you be able to share any links that expand upon your recommended approach? It makes complete sense to me as a self-taught dev, and it's what I've always done (most recently, an e2e test of a realtime CDC ETL pipeline, checking for, logging, and fixing various things along the way until I was getting the right final output). I rarely write unit tests. It would be good to read something more formal in support of what I've naturally gravitated towards.
        • MoreQARespect 3 hours ago
          No, but I have a feeling I should write one, because I keep running into this misunderstanding.

          It makes it really hard to recommend TDD when people believe they already know what it is but are doing it ass-backwards.

      • siva7 15 hours ago
        TDD failed because it was sold as a method for writing better tests, when in reality it was a very challenging skill to learn: a way of writing software that involved a fundamental change in how you approached requirements engineering, software development, iteration, and testing. Even with a skilled team, the cost to adopt TDD would be very high for an uncertain outcome. So people tried shortcuts like the ones you described, and you can't blame them. The whole movement was flawed and unrealistic in its expectations and communications.
    • discreteevent 21 hours ago
      There was a really good article about this here a few days ago that didn't get much traction. It was about how programming is a learning feedback loop, and because of that there are good and bad ways to use LLMs:

      "The readymade components we use are essentially compressed bundles of context—countless design decisions, trade-offs, and lessons are hidden within them. By using them, we get the functionality without the learning, leaving us with zero internalized knowledge of the complex machinery we've just adopted. This can quickly lead to sharp increase in the time spent to get work done and sharp decrease in productivity."

      https://martinfowler.com/articles/llm-learning-loop.html

      • dennisy 16 hours ago
        This is a great read, and one which really summarises my feelings on developing with LLMs.
    • CPLX 17 hours ago
      What's wrong with TDD? This is a serious question, not starting an argument.
      • jmann99999 16 hours ago
        My issue with it has always been that I just don't think the way TDD requires.

        I think in terms of building features. TDD generally requires thinking in terms of proving behavior. I still can't wrap my head around first writing a test that fails and then writing minimal code to make it pass (I know I am simplifying it).

        Different strokes for different folks. I'm sure it works great for some people but not for me.

        • moi2388 3 hours ago
          Well, all it does is basically enforce that you code with testing (abstraction) in mind, and only focus on what you need.

          So you build feature A. Great.

          Why not write a test to instantiate the feature? It will fail, because you haven’t built it yet. Now go build it.

          I assume you already build the interface first, before every little detail of the methods?

          Also, you really don’t have to stick to the TDD principles that much. It’s basically to ensure:

          1. You have a test which can fail (it actually tests something)

          2. You actually have a test for your unit

          3. You actually only code what you’re supposed to

          This is great for juniors, but as you gain more experience the individual steps lose value; the principles remain, I think.

          Also, I would never write tests for every method (as TDD might have you believe), because that’s not my “unit” in unit testing.
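          To make the loop concrete, a minimal red-green sketch (all names invented for the example; pytest assumed):

            # Red: write this first; it fails because nothing exists yet.
            def test_discount_applies_to_order_total():
                assert apply_discount(Order(total=100), percent=10).total == 90

            # Green: now write just enough code to make it pass.
            from dataclasses import dataclass, replace

            @dataclass(frozen=True)
            class Order:
                total: float

            def apply_discount(order: Order, percent: float) -> Order:
                # Only what the test demands; refactoring comes once it's green.
                return replace(order, total=order.total * (1 - percent / 100))

          The refactor step then cleans up the passing code with the test as a safety net.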

      • BobbyTables2 16 hours ago
        Try it and you’ll quickly see…
        • CPLX 15 hours ago
          I have. I don’t hate it, though I don’t think it’s a magic bullet either.
    • lloydatkinson 20 hours ago
      Well done on conflating BDD and TDD then, I suppose.
  • insin 21 hours ago
    This isn't just me not reading your comprehensive guide [1]. It's me recognising you couldn't even be bothered to write it yourself.

    [1] https://github.com/github/spec-kit/blob/main/spec-driven.md

    • throwaway290 6 hours ago
      It's a write-only world. They want you to ask Copilot for a summary. The spice must flow.
  • apex_sloth 17 hours ago
    I played with this extensively on hobby projects (a music visualizer Wayland widget, for example) and I like the idea. I like coming up with cool stuff and solutions. The problem is I'm just not disciplined enough; it makes me lazy. The longer I use it, the less code I read myself; I just fire off quick /implement loops and go do something else, thinking it should be straightforward. As others have pointed out, AI still needs a lot of hand-holding, and there are a lot of necessary decisions that one usually only realizes while actually building it.
  • rsyring 19 hours ago
    High-level design concerns: https://github.com/github/spec-kit/issues/1092

    Worth reading before jumping in.

    • hrimfaxi 16 hours ago
      Which points did you find particularly noteworthy? With "concerns" like

      > We need to avoid at all costs the "great specs - no MVP" problem.

      this issue doesn't seem useful or helpful at all.

  • 42point2 18 hours ago
    We started experimenting with this on a large-ish feature, with several repos involved. Off to a good start. The constitution that’s created as the first step is valuable in its own right: something that can be used for onboarding both engineers and LLMs. Version 1 generated by specify was already quite good, and we iterated from there. We had previously created a Claude.md that took the whole codebase into account, which I think helped.

    I’m perhaps less sold on the idea of the spec being the source of truth — would have to do some design iterations and see if that holds up. I do like that it imposes some structure/rigor on the design process.

  • trjordan a day ago
    I don't think we ever get away from the code being the source of truth. There has to be one source of truth.

    If you want to go all in on specs, you must fully commit to allowing the AI to regenerate the codebase from scratch at any point. I'm an AI optimist, but this is a laughable stance with current tools.

    That said, the idea of operating on the codebase as a mutable, complex entity, at arms length, makes a TON of sense to me. I love touching and feeling the code, but as soon as there's 1) schedule pressure and 2) a company's worth of code, operating at a systems level of understanding just makes way more sense. Defining what you want done, using a mix of user-centric intent and architecture constraints, seems like a super high-leverage way to work.

    The feedback mechanisms are still pretty tough, because you need to understand what the AI is implicitly doing as it works through your spec. There are decisions you didn't realize you needed to make, until you get there.

    We're thinking a lot about this at https://tern.sh, and I'm currently excited about the idea of throwing an agentic loop around the implementation itself. Adversarially have an AI read through that huge implementation log and surface where it's struggling. It's a model that gives real leverage, especially over the "watch Claude flail" mode that's common in bigger projects/codebases.

    • spot5010 18 hours ago
      The reason code can serve as the source of truth is that it’s precise enough to describe intent, since programming languages are well-specified. Compilers have freedom in how they translate code into assembly, and two different compilers (or even different optimization flags) will produce distinct binaries. Yet all of them preserve the same intent and observable behaviour that the programmer cares about. Runtime performance or instruction order may vary, but the semantics remain consistent.

      For spec driven development to truly work, perhaps what’s needed is a higher level spec language that can express user intent precisely, at the level of abstraction where the human understanding lives, while ensuring that the lower level implementation is generated correctly.

      A programmer could then use LLMs to translate plain English into this “spec language,” which would then become the real source of truth.
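      As a toy illustration, such a spec could even be hosted in a general-purpose language; every name below is invented for the example:

        # Hypothetical "spec language" embedded in Python: precise enough to
        # check an implementation against, abstract enough to stay at the
        # level of user intent rather than code structure.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Rule:
            given: str   # precondition, in a controlled vocabulary
            when: str    # user action
            then: str    # observable outcome

        PASSWORD_RESET = [
            Rule(given="a registered user",
                 when="they request a password reset",
                 then="a single-use reset link is emailed within 60 seconds"),
            Rule(given="an unknown email address",
                 when="a reset is requested",
                 then="the response reveals no account information"),
        ]

      An LLM would translate plain English into rules like these, and a generator (or a human) would derive code and tests from them, making the spec rather than the code the reviewed artifact.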

      • DeathArrow 17 hours ago
        What about pseudocode? It is high level enough.
        • spot5010 17 hours ago
          Right, but it needs to be formalized.
    • dennisy 20 hours ago
      Tern looks very interesting.

      On your homepage there is a mention that Tern “writes its own tools”. Could you give an example of how this works?

      • trjordan 13 hours ago
        If you're thinking about, e.g., upgrading to Django 5, there's a bunch of changes that are sort of codemod-shaped. It's possible that there's no existing codemod that works for you.

        Tern can write that tool for you, then use it. That gives you more control in certain cases than simply asking the AI to make a change that might appear hundreds of times in your code.
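        As a hypothetical sketch of that kind of throwaway tool (django.utils.timezone.utc really was removed in Django 5.0, but the script below is invented for illustration):

          # One-off codemod: rewrite fully-qualified uses of the removed
          # django.utils.timezone.utc to the stdlib equivalent.
          import pathlib

          for path in pathlib.Path("src").rglob("*.py"):
              text = path.read_text()
              new = text.replace("django.utils.timezone.utc",
                                 "datetime.timezone.utc")
              if new != text:
                  if "import datetime" not in new:  # naive, but fine for a one-off
                      new = "import datetime\n" + new
                  path.write_text(new)
                  print(f"rewrote {path}")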

    • DeathArrow 17 hours ago
      >Adversarially have an AI read through that huge implementation log and surface where it's struggling.

      That's a good idea: have a specification, divide it into chunks, and have an army of agents, each implementing a chunk; then have an agent identify weak points, incomplete implementations, and bugs, and an army of agents fixing the issues.
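      Loosely sketched below; run_agent is a stand-in for whatever agent runtime you use, and everything here is hypothetical:

        # Fan spec chunks out to worker agents, have a reviewer surface
        # problems, then fan the problems out to a fixing pass.
        from concurrent.futures import ThreadPoolExecutor

        def run_agent(prompt: str) -> str:
            raise NotImplementedError  # stand-in for a real agent API

        def build(spec_chunks: list[str]) -> list[str]:
            with ThreadPoolExecutor() as pool:
                logs = list(pool.map(
                    lambda chunk: run_agent(f"Implement this chunk:\n{chunk}"),
                    spec_chunks))
            issues = run_agent("List weak points, incomplete implementations, "
                               "and bugs in these logs:\n" + "\n".join(logs))
            return [run_agent(f"Fix this issue:\n{line}")
                    for line in issues.splitlines() if line.strip()]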

    • Marazan 21 hours ago
      > There are decisions you didn't realize you needed to make, until you get there.

      That is the key insight, and the biggest stumbling block for me at the moment.

      At the moment (encouraged by my company) I'm experimenting with as-hands-off-as-possible Agent usage for coding. And it is _unbelievably_ frustrating to see the Agent get 99% of the code right in the first pass, only to misunderstand why a test is now failing and then completely mangle both its own code and the existing tests as it tries to "fix" the "problem". If I'd just given it a better spec to start with, it probably wouldn't have started producing garbage.

      But I didn't know that before working with the code! So to develop a good spec, I either have to stop the agent constantly so I can intervene, or dive into the code myself to begin with. And at that point I may as well just write the code, since writing the code is not the slow bit.

      • trjordan 21 hours ago
        For sure. One of our first posts was called "You Have To Decide" -- https://tern.sh/blog/you-have-to-decide/

        And my process now (and what we're baking into the product) is as follows (a rough code sketch appears below):

        - Make a prompt

        - Run it in a loop over N files. Full agentic toolkit, but don't be wasteful (no "full typecheck, run the test suite" on every file).

        - Have an agent check the output. Look for repeated exploration, look for failures. Those imply confusion.

        - Iterate the prompt to remove the confusion.

        First pass on the current project (a Vue 3 migration) went from 45 min of agentic time on 5 files to 10 min on 50 files, and the latter passed tests/typecheck/my own scrolling through it.
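        In code, that loop is roughly the following shape (the two agent calls are stand-ins, not a real API):

          def run_on_file(prompt: str, path: str) -> str:
              """Run the migration prompt on one file; return the transcript."""
              raise NotImplementedError  # your agent runtime goes here

          def review(transcripts: list[str]) -> str:
              """Checker agent: flag repeated exploration and failures --
              both imply the prompt is missing a decision."""
              raise NotImplementedError

          def migrate(prompt: str, paths: list[str]) -> str:
              transcripts = [run_on_file(prompt, p) for p in paths]
              return review(transcripts)  # iterate the prompt until this is clean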

  • sebast_bake a day ago
    Is it good?
    • lngr a day ago
      Yes, I love it. I have used it for a while with Claude Code, Codex CLI and Windsurf. It's awesome with Claude Code. Codex CLI produces just garbage. Windsurf results vary, even when I use it with Claude models. I now use it with Windsurf for the specify and plan modes, and Claude for the implementation.
  • DeathArrow 18 hours ago
    I kind of do something similar, without using spec-kit. I use an LLM to define specifications and task lists, and to generate prompts to be fed into an agent. I also use an LLM to generate .cursorrules.
  • satisfice 19 hours ago
    Why do they say this approach flips the script? People who promote executable specs are just swapping the word “code” for “spec” without changing anything meaningful.

    It’s higher level programming, perhaps, but it’s still programming.

  • isodev 21 hours ago
    Can I use it without the uv tool? I’d rather my open source projects remain open as in libre.
    • JimDabell 21 hours ago
      uv is Apache and MIT-licensed. It’s as “open as in libre” as it gets.
      • isodev 13 hours ago
        But it’s made by a corp in the “Extend” phase of embrace-extend-extinguish. No thanks. Fanboys love a new tool, but let’s for once look ahead a bit before jumping in.