46 points by samizdis 4 days ago | 4 comments
  • sarlalian 4 days ago
    This has already been discussed heavily in this thread:

    https://news.ycombinator.com/item?id=44522772

    Link to the full paper: https://metr.org/Early_2025_AI_Experienced_OS_Devs_Study.pdf

    Overall, the study has a very small sample size (16 developers), with mixed AI tooling and mixed AI experience. It's an interesting data point, but honestly not an extensive enough study to support any causal determination. It's also hampered by the discourse around AI being highly polarized, and by "AI" being such a broad category as to have little meaning overall.

    Quoting from the above thread:

    > My intuition here is that this study mainly demonstrated that the learning curve on AI-assisted development is high enough that asking developers to bake it into their existing workflows reduces their performance while they climb that learning curve.

    The above quote very much matches my personal experience. The first month or two was very hit or miss, and plagued with frustration. As I got better with the tools, figured out new workflows, and settled on better tools, it became a much better experience for me. Specifically, asking ChatGPT or Claude to generate a function for me sucked; editor tab completion with a good model was better, but still occasionally frustrating; chat in Cursor was better than that; and Claude Code as an agent has been fantastic. But the journey was long and required a lot of reading, video watching, and listening to podcasts about how people who are successfully using AI coding tools work.

    Currently I feel like I'm about 2x as productive (note: I'm not a particularly quick developer, so YMMV).

    • Larrikin 4 days ago
      Which podcast did you find useful?
  • hoppp 4 days ago
    This is exactly my experience since I started heavily using LLMs for coding. It can feel like a trap: I'm sifting through all the generated code instead of reading the docs and finding the correct way to do things. Because I expect the machine to output the answer, I spend a lot of time prompting.

    When it works on the first prompt, it's magic. I especially like generating UI components. But for more complex things it's a major time waster: often complex functions just don't work, and debugging them is slower than rewriting from scratch.

  • i_niks_86 4 days ago
    Ironically, AI tools can make you slower if you rely on them for complex tasks without understanding the internals. You end up spending more time prompting, debugging, or reverse-engineering the generated code than if you just wrote it yourself. This tradeoff is especially noticeable in open source work, where maintainability and correctness trump speed. AI tooling still requires substantial human judgment.
  • pitched 4 days ago
    I believe this, but there's another side to it: it doesn't feel as tiring. I have more energy left after a longer AI session than after a shorter traditional one. That's worth a lot.
    • xorbax 4 days ago
      But are you accomplishing the same amount and being equally effective, or just accomplishing less over the same amount of time?
      • bobbiechen 4 days ago
        It's hard to self-evaluate productivity. In a much simpler domain (decoding a cipher with a tool vs. by hand), I thought I was going much faster, but the stopwatch showed it was about the same: https://bobbiechen.com/blog/2020/5/28/the-making-of-semaphor...

        Not feeling tired afterwards is a real improvement though, and I think that feeling is reliably self-reported.

      • pitched 4 days ago
        AI is very effective at the boilerplate-heavy tasks that I hate and very ineffective at the architecture and debugging tasks that I love. We work well together.
    • ath3nd 4 days ago
      Anecdotally, I have far less energy after an AI session and feel like I have accomplished less in more time.