146 points by anurag 3 hours ago | 18 comments
  • mjr00 an hour ago
    > Break down sessions into separate clear, actionable tasks. Don't try to "draw the owl" in one mega session.

    This is the key one I think. At one extreme you can tell an agent "write a for loop that iterates over the variable `numbers` and computes the sum" and they'll do this successfully, but the scope is so small there's not much point in using an LLM. On the other extreme you can tell an agent "make me an app that's Facebook for dogs" and it'll make so many assumptions about the architecture, code and product that there's no chance it produces anything useful beyond a cool prototype to show mom and dad.

    A lot of successful LLM adoption for code is finding this sweet spot. With overly specific instructions you don't feel productive, and with overly broad instructions you end up redoing too much of the work.
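
    To make the first extreme concrete: it is literally a sketch where the prompt is longer than the code it asks for.

```python
# The "too small to bother" extreme from the comment above: the
# natural-language prompt is longer than the code it describes.
numbers = [3, 1, 4, 1, 5]
total = 0
for n in numbers:
    total += n

print(total)  # equivalent to the built-in: sum(numbers)
```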

    • sho_hn an hour ago
      This is actually an aspect of using AI tools I really enjoy: Forming an educated intuition about what the tool is good at, and tastefully framing and scoping the tasks I give it to get better results.

      It cognitively feels very similar to other classic programming activities, like modularization at any level from architecture to code units/functions, thoughtfully choosing how to lay out and chunk things. It's always been one of the things that make programming pleasurable for me, and some of that feeling returns when slicing up tasks for agents.

      • allenu 27 minutes ago
        I agree that framing and scoping tasks is becoming a real joy. The great thing about this strategy is there's a point at which you can scope something small enough that it's hard for the AI to get it wrong and it's easy enough for you as a human to comprehend what it's done and verify that it's correct.

        I'm starting to think of projects now as a tree structure where the overall architecture of the system is the main trunk and from there you have the sub-modules, and eventually you get to implementations of functions and classes. The goal of the human in working with the coding agent is to have full editorial control of the main trunk and main sub-modules and delegate as much of the smaller branches as possible.

        Sometimes you're still working out the higher-level architecture, too, and you can use the agent to prototype the smaller bits and pieces which will inform the decisions you make about how the higher-level stuff should operate.

    • iamacyborg 17 minutes ago
      > On the other extreme you can tell an agent "make me an app that's Facebook for dogs" and it'll make so many assumptions about the architecture, code and product that there's no chance it produces anything useful beyond a cool prototype to show mom and dad.

      Amusingly, this was my experience in giving Lovable a shot. The onboarding process was literally just setting me up for failure by asking me to describe the detailed app I was attempting to build.

      Taking it piece by piece in Claude Code has been significantly more successful.

    • oulipo2 11 minutes ago
      Exactly. The LLMs are quite good at "code inpainting", e.g. "give me the outline/constraints/rules and I'll fill in the blanks"

      But not so good at making (robust) new features out of the blue
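
      A minimal sketch of what that "inpainting" workflow can look like (the function name and constraints here are invented for illustration): the human supplies the signature, docstring, and expected cases as the outline, and the model fills in the body.

```python
# Scaffold the human writes: signature, contract, and example cases.
# The body is the kind of "fill in the blanks" work an LLM handles well.
def running_total(numbers: list[int]) -> list[int]:
    """Return cumulative sums: running_total([1, 2, 3]) == [1, 3, 6]."""
    out: list[int] = []
    acc = 0
    for n in numbers:
        acc += n
        out.append(acc)
    return out
```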

    • apercu 16 minutes ago
      I actually enjoy writing specifications. So much so that I made them a large part of my consulting work for much of my career. So it makes sense that working with Gen-AI that way is enjoyable for me.

      The more detailed I am in breaking down chunks, the easier it is for me to verify and the more likely I am going to get output that isn't 30% wrong.

    • jedbrooke an hour ago
      So many times I catch myself asking a coding agent, e.g. "please print the output", and it will update the file with "print(output)".

      Maybe there’s something about not having to context switch between natural language and code that just makes it _feel_ easier sometimes.

  • EastLondonCoder an hour ago
    This matches my experience, especially "don’t draw the owl" and the harness-engineering idea.

    The failure mode I kept hitting wasn’t just "it makes mistakes", it was drift: it can stay locally plausible while slowly walking away from the real constraints of the repo. The output still sounds confident, so you don’t notice until you run into reality (tests, runtime behaviour, perf, ops, UX).

    What ended up working for me was treating chat as where I shape the plan (tradeoffs, invariants, failure modes) and treating the agent as something that does narrow, reviewable diffs against that plan. The human job stays very boring: run it, verify it, and decide what’s actually acceptable. That separation is what made it click for me.

    Once I got that loop stable, it stopped being a toy and started being a lever. I’ve shipped real features this way across a few projects (a git-like tool for heavy media projects, a ticketing/payment flow with real users, a local-first genealogy tool, and a small CMS/publishing pipeline). The common thread is the same: small diffs, fast verification, and continuously tightening the harness so the agent can’t drift unnoticed.

    • bdangubic an hour ago
      This is the most common answer from people that are rocking and rolling with AI tools, but I cannot help but wonder how this is different from how we should have built software all along. I know I have been (after 10+ years…)
      • EastLondonCoder 20 minutes ago
        I think you are right: the secret is that there is no secret. The projects I have been involved with that were most successful used these techniques. I also think experience helps, because you develop a sense that very quickly tells you when the model wants to go in a wonky direction and what a good spec looks like.

        With where the models are right now you still need a human in the loop to make sure you end up with code you (and your organisation) actually understand. The bottleneck has gone from writing code to reading code.

  • sho_hn an hour ago
    Much more pragmatic and less performative than other posts hitting frontpage. Good article.
    • alterom an hour ago
      Finally, a step-by-step guide for even the skeptics to try, to see what spot the LLM tools have in their workflows, without hype or magic like "I vibe-coded an entire OS, and you can too!".
  • 0xbadcafebee 7 minutes ago
    [delayed]
  • senko 19 minutes ago
    For those wondering how that looks in practice, here's one of OP's past blog posts describing a coding session to implement a non-trivial feature: https://mitchellh.com/writing/non-trivial-vibing (covered on HN here: https://news.ycombinator.com/item?id=45549434)
  • underdeserver 27 minutes ago
    > At a bare minimum, the agent must have the ability to: read files, execute programs, and make HTTP requests.

    That's one very short step removed from Simon Willison's lethal trifecta.
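
    As a rough illustration (not the article's actual harness, and all names here are invented), the quoted bare minimum maps to three small tools; the risk the trifecta describes comes from wiring them together with untrusted content:

```python
# Hypothetical minimal agent tool surface: read files, execute programs,
# make HTTP requests. Combining these with untrusted content is exactly
# what enables the "lethal trifecta": private data + untrusted input +
# an exfiltration channel.
import subprocess
import urllib.request
from pathlib import Path

def read_file(path: str, max_bytes: int = 64_000) -> str:
    """Return file contents, truncated to fit a model's context window."""
    return Path(path).read_text(errors="replace")[:max_bytes]

def run_program(cmd: list[str], timeout_s: int = 60) -> str:
    """Execute a program and report exit status plus combined output."""
    proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
    return f"exit={proc.returncode}\n{proc.stdout}{proc.stderr}"

def http_get(url: str, max_bytes: int = 64_000) -> str:
    """Fetch a URL; the response is untrusted input from the agent's view."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read(max_bytes).decode("utf-8", errors="replace")
```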

    • recursive 15 minutes ago
      I'm definitely not running that on my machine.
  • davidw 21 minutes ago
    This seems like a pretty reasonable approach that charts a course between skepticism and "it's a miracle".

    I wonder how much all this costs on a monthly basis?

    • tptacek 20 minutes ago
      As long as we're on the same page that what he's describing is itself a miracle.
  • polyrand 10 minutes ago
    > a period of inefficiency

    I think this is something people ignore, and is significant. The only way to get good at coding with LLMs is actually trying to do it. Even if it's inefficient or slower at first. It's just another skill to develop [0].

    And it's not really about using all the plugins and features available. In fact, many plugins and features are counter-productive. Just learn how to prompt and steer the LLM better.

    [0]: https://ricardoanderegg.com/posts/getting-better-coding-llms...

  • raphinou an hour ago
    I recently also reflected on the evolution of my use of ai in programming. Same evolution, other path. If anyone is interested: https://www.asfaload.com/blog/ai_use/
  • apercu 13 minutes ago
    I find it interesting that this thread is full of pragmatic posts that seem to honestly reflect the real limits of current Gen-AI.

    Versus other threads (here on HN, and especially on places like LinkedIn) where it's "I set up a pipeline and some agents and now I type two sentences and amazing technology comes out in 5 minutes that would have taken 3 devs 6 months to do".

  • mwigdahl 2 hours ago
    Good article! I especially liked the approach to replicate manual commits with the agent. I did not do that when learning but I suspect I'd have been much better off if I had.
  • fix4fun an hour ago
    Thanks for sharing your experiences :)

    You mentioned "harness engineering". How do you approach building "actual programmed tools" (like screenshot scripts) specifically for an LLM's consumption rather than a human's? Are there specific output formats or constraints you’ve found most effective?
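
    Not the author's answer, but one common pattern for tools aimed at an LLM reader (the function name and output format below are invented for illustration): wrap the underlying action and emit one short, structured summary the model can parse, rather than a human-oriented log stream.

```python
# Hypothetical harness tool written for LLM consumption: run a command
# and emit a single compact JSON line (exit status, truncated output)
# instead of raw scrolling logs.
import json
import subprocess

def run_for_agent(cmd: list[str], max_chars: int = 2000) -> str:
    proc = subprocess.run(cmd, capture_output=True, text=True)
    # Keep the tail of the output: build/test errors usually come last.
    output = (proc.stdout + proc.stderr)[-max_chars:]
    return json.dumps({"cmd": cmd, "exit": proc.returncode, "output": output})
```

    The idea is that a bounded, structured payload is easier for a model to act on than an unbounded human-readable log.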

  • butler14 an hour ago
    I'd be interested to know what agents you're using. You mentioned Claude and GPT in passing, but don't actually talk about which you're using or for which tasks.
  • jonathanstrange 20 minutes ago
    There are so many stories about how people use agentic AI, but they rarely post how much they spend. Before I can even consider it, I need to know what it will cost me per month. I'm currently using one pro subscription and it's already quite expensive for me. What are people doing, burning hundreds of dollars per month? Do they also evaluate how much value they get out of it?
    • JoshuaDavid 17 minutes ago
      Low hundreds ($190 for me) but yes.
  • jeffrallen 21 minutes ago
    > babysitting my kind of stupid and yet mysteriously productive robot friend

    LOL, been there, done that. It is much less frustrating and demoralizing than babysitting your kind of stupid colleague though. (Thankfully, I don't have any of those anymore. But at previous big companies? Oh man, if only their commits were ONLY as bad as a bad AI commit.)

  • vonneumannstan 2 hours ago
    For the AI skeptics reading this, there is an overwhelming probability that Mitchell is a better developer than you. If he gets value out of these tools you should think about why you can't.
    • jorvi 32 minutes ago
      The AI skeptics instead stick to hard data, which so far shows a 19% reduction in productivity when using AI.
    • recursive 13 minutes ago
      Perhaps that's the reason. Maybe I'm just not a good enough developer. But that's still not actionable. It's not like I never considered being a better developer.
    • z0r an hour ago
      I'm not as good as Fabrice Bellard either but I don't let that bother me as I go about my day.
    • dakiol 2 hours ago
      I don't get it. What's the relation between Mitchell being a "better" developer than most of us ("better" is always relative, but that's another story) and getting value out of AI? That's like saying Bezos is a way better businessman than you, so you should really hear his tips about becoming a billionaire. It makes no sense (because what works for him probably doesn't work for you).

      Tons of respect for Mitchell. I think you are doing him a disservice with these kinds of comments.

      • tux1968 an hour ago
        Maybe you disagree with it, but it seems like a pretty straightforward argument: a lot of us dismiss AI because "it can't be trusted to do as good a job as me". The OP is arguing that someone who can do better than most of us disagrees with this line of thinking. And if we respect his abilities, and recognize them as better than our own, we should perhaps re-assess our own rationale for dismissing the utility of AI assistance. If he can get value out of it, surely we can too, if we don't argue ourselves out of giving it a fair shake.

        The flip side of that argument might be that you have to be a much better programmer than most of us are to properly extract value out of the AI... maybe it's only useful in the hands of a real expert.
        • jplusequalt an hour ago
          >A lot of us dismiss AI because "it can't be trusted to do as good a job as me"

          Some of us enjoy learning how systems work, and derive satisfaction from the feeling of doing something hard, and feel that AI removes that satisfaction. If I wanted to have something else write the code, I would focus on becoming a product manager, or a technical lead. But as is, this is a craft, and I very much enjoy the autonomy that comes with being able to use this skill and grow it.

          • mitchellh an hour ago
            There is no dichotomy of craft and AI.

            I consider myself a craftsman as well. AI gives me the ability to focus on the parts I both enjoy working on and that demand the most craftsmanship. A lot of what I use AI for and show in the blog isn’t coding at all, but a way to allow me to spend more time coding.

            This reads like you maybe didn’t read the blog post, so I’ll mention there are many examples there.

          • fizx an hour ago
            I enjoy Japanese joinery, but for some reason the housing market doesn't.
          • tux1968 an hour ago
            Nobody is trying to talk anyone out of their hobby or artisanal creativeness. A lot of people enjoy walking, even after the invention of the automobile. There's nothing wrong with that, there are even times when it's the much more efficient choice. But in the context of say transporting packages across the country... it's not really relevant how much you enjoy one or the other; only one of them can get the job done in a reasonable amount of time. And we can assume that's the context and spirit of the OP's argument.
            • mold_aid an hour ago
              >Nobody is trying to talk anyone out of their hobby or artisanal creativeness.

              Well, yes, they are; some folks don't think "here's how I use AI" and "I'm a craftsman!" are consistent. Seems like maybe OP should consider whether "AI is a tool, why can't you use it right" isn't begging the question.

              Is this going to be the new rhetorical trick, to say "oh hey surely we can all agree I have reasonable goals! And to the extent they're reasonable you are unreasonable for not adopting them"?

            • jplusequalt an hour ago
              >But in the context of say transporting packages across the country... it's not really relevant how much you enjoy one or the other; only one of them can get the job done in a reasonable amount of time.

              I think one of the more frustrating aspects of this whole debate is the idea that software development pre-AI was too "slow", despite the fact that no other kind of engineering has nearly the same turnaround time as software engineering does (nor does it have the same return on investment!).

              I just end up rolling my eyes when people use this argument. To me it feels like favoring productivity over everything else.

    • mold_aid an hour ago
      "Why can't you be more like your brother Mitchell?"
  • xyst an hour ago
    [flagged]
    • dang 32 minutes ago
      "Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

      "Don't be snarky."

      "Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."

      https://news.ycombinator.com/newsguidelines.html

  • therein 2 hours ago
    [flagged]
    • dang an hour ago
      Ok, but please don't post unsubstantive comments to Hacker News.
    • stronglikedan an hour ago
      most AI adoption journeys are
    • alterom an hour ago
      >Underwhelming

      Which is why I like this article. It's realistic in terms of describing the value proposition of LLM-based coding assist tools (aka AI agents).

      The fact that it's underwhelming compared to the hype we see every day is a very, very good sign that it's practical.