38 points by dnlserrano 11 hours ago | 18 comments
  • pseudony 9 hours ago
    I wonder how. With everything I let Claude Code write the majority of, whether Go, F#, C, or Python, I eventually end up at a point where I systematically rip it apart and start writing it over.

    In my study days, we talked of “spikes”: software or components that functionally addressed some need, but were often badly written and badly architected.

    That’s what I think Claude Code output most resembles.

    And I ask the LLM to write todo-lists, break tasks into phases, and maintain both larger docs on individual features and a highly condensed overview doc. I have also written Claude Code-like tools myself, run local LLMs, and so on. That is to say, I may still be “doing it wrong”, but I'm not entirely clueless.

    The only place where Claude Code has done nearly the whole thing and largely left me with workable code was some React front-end work I did (and no, it wasn't great either, just fair enough).

    • dave1999x 2 hours ago
      Have you tried with Opus 4.5? It's a step change IMO.
    • flashgordon 2 hours ago
      There are different degrees of "ai wrote all my code". A very crappy way of doing it is to keep one-shotting it, expecting it to "fall on the right solution" - very much an infinite monkeys, infinite typewriters scenario.

      The other way is to spend a fair bit of time building out a design and ask it to implement that, verifying what it produces then and there instead of reviewing reams of slop later on. AI still produced 100% of the code - it's just not as glamorous or as marketing-friendly a sound bite. After all, which product manager wants to understand refactoring or TDD or SOLID or design principles?

    • aurareturn 9 hours ago
      Because companies/users don’t pay for “great code”. They pay for results.

      Does it work? How fast can we get it? How much does it cost to use it?

      • janice1999 9 hours ago
        > Because companies/users don’t pay for “great code”

        Unless you work in an industry with standards, like medical or automotive. Setting ISO compliance aside, you could also work for a company which values long term maintainability, uptime etc. I'm glad I do. Not everyone is stuck writing disposable web apps.

        • ekjhgkejhgk 8 hours ago
          Or space, or defense, or some corners of finance and insurance.

          > Not everyone is stuck writing disposable web apps.

          Exactly. What I've noticed is that a lot of the conversations on HN are web devs talking to engineers, where one side understands both sides and the other doesn't.

        • aurareturn 7 hours ago
          "Does it work?" covers what you said.
      • castis 8 hours ago
        Sounds like the best way to sell an OK product.
      • bradleykingz 8 hours ago
        yes, but to achieve those, one often needs great code
        • aurareturn 7 hours ago
          LLM use does not equal bad code.
  • akmarinov 9 hours ago
    I’m one of those people.

    Used Claude Code until September then Codex exclusively.

    All my code has been AI generated, nothing by hand.

    I review the code, and if I don't like something, I let it know how it should be changed.

    There used to be a lot of back and forth in August, but these days GPT 5.2 Codex one-shots everything so far. One time it worked for 40 hours to get a big thing in place, and I'm happy with the code.

    For bigger things, start with a plan and go back and forth on different pieces, have it write the plan to an md file as you talk it through, and feed it anything you can - user stories, test cases, designs, whiteboards, backs of napkins - and in the end it just writes the code for you.

    Works great, can’t fathom going back to writing everything by hand.

    • zenethian 5 hours ago
      Okay, but has this process actually improved anything, or just substituted one process for another? Do you have fewer defects, quicker ticket turnaround, or some other metric you're judging success by?
      • akmarinov 5 hours ago
        Oh yeah, I’ve been a lot more productive, closing tickets faster.

        These tools are somewhat slow, so you need to work on several things at once; multitasking is vital.

        When I get defects from the QA team, I spawn several agents with several worktrees, one per ticket - then I review the code, test it out, and leave my notes.

        Closing the loop is also vital: if agents can see their work, logs, and test results, it helps them be more autonomous.
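
        For the curious, roughly what that per-ticket worktree setup can look like (a minimal sketch; the ticket IDs and the agent command are placeholders, not any specific tool's CLI):

          import subprocess

          # Hypothetical ticket IDs pulled from the QA queue.
          tickets = ["QA-101", "QA-102", "QA-103"]

          for ticket in tickets:
              worktree = f"../wt-{ticket}"
              # One isolated worktree and branch per ticket, so agents don't step on each other.
              subprocess.run(
                  ["git", "worktree", "add", "-b", f"fix/{ticket}", worktree],
                  check=True,
              )
              # Launch one agent per worktree in the background (placeholder command).
              subprocess.Popen(["my-agent", "--task", f"Fix defect {ticket}"], cwd=worktree)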

    • 112233 8 hours ago
      Glad to hear. For me, the process does not converge - once the code gets big enough (which happens fast; Claude hates using existing code and writes duplicate logic at every opportunity it gets), it starts dealing more damage every turn. At some point, no forward progress happens, because Claude keeps dismantling and breaking existing working code.
    • skibidithink 8 hours ago
      How long did it take for you to get used to this workflow?
  • real_joschi 10 hours ago
    > I landed 259 PRs -- 497 commits, 40k lines added, 38k lines removed

    I wonder how much of these 40k lines added/38k lines removed were just replacing the complete code of a previous PR created by Claude Code.

    I'm happy that it's working for them (whatever that means), but shouldn't we see an exponential improvement in Claude Code in this case?

    • nikanj 10 hours ago
      One could dive deep into the philosophical here, but how different is that from “I recompiled the code, which removed 500kloc of assembly and created 503kloc of assembly”?
      • GCUMstlyHarmls 10 hours ago
        No one posts that as a LinkedIn metric, though.
  • raphman 10 hours ago
    Claude Code user¹ says Claude Code has continuously written incorrect code for the last hour.

    I asked it to write Python code to retrieve a list of Kanboard boards using the official API. I gave it a link to the API docs. First, it wrote a wrong JSON-RPC call. Then it invented a Python API call that does not exist. In a new try, I mentioned that there is an official Python package it could use (which is prominently described in the API docs). Claude proceeded to search the web and then used the wrong API call. Only after prompting it again did it use the correct API call - and even then the approach was inelegant.
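
    For reference, a minimal sketch of what listing the boards via Kanboard's JSON-RPC API roughly looks like - assuming the application-level API token and the getAllProjects method described in the docs, and not necessarily the exact call Claude eventually landed on:

      import requests

      # Assumptions: Kanboard's JSON-RPC endpoint lives at /jsonrpc.php, and the
      # application API token is used as the password for the "jsonrpc" user.
      KANBOARD_URL = "https://kanboard.example.com/jsonrpc.php"
      API_TOKEN = "your-api-token"

      payload = {"jsonrpc": "2.0", "method": "getAllProjects", "id": 1}
      resp = requests.post(KANBOARD_URL, json=payload, auth=("jsonrpc", API_TOKEN))
      resp.raise_for_status()

      # Each Kanboard project has one board; getBoard(project_id) would fetch its columns.
      for project in resp.json()["result"]:
          print(project["id"], project["name"])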

    I still find some value in using Claude Code, but I'm much happier writing code myself, and I'd rather teach kids and colleagues how to do stuff correctly than teach a machine.

    ¹) me

  • turblety 10 hours ago
    I'm nearly the same, though I find I'm still writing code - just not the code that ends up in the commit. I'll write pseudo code, example code, and rough function signatures, then Claude writes the rest.
  • GeoAtreides 5 hours ago
    Man with vested financial interest in thing praises thing.
  • 578_Observer 10 hours ago
    "If the AI builds the house, the human must become the Architect who understands why the house exists."

    In Japanese traditional carpentry (Miya-daiku), the master doesn't just cut wood. He reads the "heart of the tree" and decides the orientation based on the environment.

    The author just proved that "cutting wood" (coding) is now automated. This is not the end of engineers, but the beginning of the "Age of Architects."

    We must stop competing on syntax speed and start competing on Vision and Context.

    • clrflfclrf 10 hours ago
      Taste, Aesthetics, Gestalt Synergy now matter more.
      • 578_Observer 10 hours ago
        Precisely.

        AI optimizes for "Accuracy" (minimizing error), but it cannot optimize for "Taste" because Taste is not a variable in its loss function.

        As code becomes abundant and cheap, "Aesthetics" and "Gestalt" will become the only scarcity left. The Architect's job is not to build, but to choose what is beautiful.

        • mmasu 9 hours ago
          I use the house analogy a lot these days. A colleague vibe-coded an app and it does what it is supposed to, but the code really is an unmaintainable hodgepodge of files. I compare this to a house that looks functional on the surface, but has the toilet in the middle of the living room, an unsafe electrical system, water leaks, etc. I am afraid only the facade of the house will need to be beautiful, and people will only later realize they traded solid foundations for glittery paint.
          • 578_Observer 7 hours ago
            I've been a loan officer for 20 years.

            To extend your analogy: AI is effectively mass-producing 'Subprime Housing'. It has amazing curb appeal (glittering paint), but as a banker, I'd rate this as a 'Toxic Asset' with zero collateral value.

            The scary part is that the 'interest rate' on this technical debt is variable. Eventually, it becomes cheaper to declare bankruptcy (rewrite from scratch) than to pay off the renovation costs.

            • ragequittah 5 hours ago
              My experience is that the code just wouldn't have existed in the first place otherwise. Nobody was going to pay thousands of dollars for it, and it just needs to work and be accurate. It's not backend code you give root access to on the company server; it's automating the boring aspects of the job with a basic frontend.

              I've been able to save people money and time. If someone comes in later and has a more elegant solution for the same $60 of effort I spent, great! Otherwise I'll continue saving people money and time with my non-perfect code.

              • 578_Observer 4 hours ago
                That's a fair point.

                In banking terms, you are treating AI code as "OPEX" (Operating Expense) rather than "CAPEX" (Capital Expenditure). As long as we treat these $60 quick-fixes as "depreciating assets" (use it and throw it away), it’s great ROI.

                My warning was specifically about the danger of mistaking these quick-fixes for "Long-term Capital Assets." As long as you know it's a disposable tool, not a foundation, we are on the same page.

      • brador 10 hours ago
        > Taste, Aesthetics, Gestalt Synergy now matter more.

        The AI is better at that too. Truth is, nothing matters except the maximal delusion. Only humans can generate that. Only humans can make a goal they find meaningful.

  • cataphract 10 hours ago
    It shows - I have to forcefully kill it over 10 times per day.
  • pragmatic 6 hours ago
    The guy who wrote the TypeScript/Bun CLI and probably maintains it?

    It would be helpful if people also included what kind of code they are writing (language, domain, module, purpose, etc.).

    The hallucinations are still there - sometimes worse than others, but manageable. They mostly show up when I have to do database-management-style work; that's off the beaten path, and the hallucinations get crazy.

    • pragmatic 6 hours ago
      I had to add some “shim” code to an older app to bridge authentication to a new REST endpoint I added to an aging monolith.

      It actually didn't do too badly after some back and forth, and this too was off the beaten path (hard to find a Stack Overflow answer/blog post/etc. putting it all together).

      Totally worth the $20/mo!!

  • real_joschi 10 hours ago
    View the full thread without Twitter/X account: https://xcancel.com/bcherny/status/2004897269674639461
  • kachapopopow 10 hours ago
    Honestly, I've been becoming too lazy. I know exactly what I want, and AI is at a point where it can turn that into code. It's good enough that I've started to design code around AI, so it's easier for AI to understand (less DRY, fewer abstractions, closer to C).

    And it's probably a bad thing? Not sure yet.
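
    For illustration only, a toy example of that flatter, less-DRY style - two explicit, repetitive functions rather than one shared abstraction (all names here are made up):

      # Explicit, repetitive exporters: duplicated on purpose so each is self-contained.
      def export_users_csv(users):
          lines = ["id,name,email"]
          for u in users:
              lines.append(f"{u['id']},{u['name']},{u['email']}")
          return "\n".join(lines)

      def export_orders_csv(orders):
          lines = ["id,total,status"]
          for o in orders:
              lines.append(f"{o['id']},{o['total']},{o['status']}")
          return "\n".join(lines)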

    • eurekin 10 hours ago
      I just let myself use AI on non-critical software. Personal projects and projects without deadline or high quality standards.

      If it uses anything I don't know, some tech I hadn't grasped yet, I do a markdown conversation summary and make sure to include a technical solutions overview. I then shove that into note software for later and, at a convenient time, use it in study mode to make sure I understand the implications of whatever the AI chose. I'm mostly a backend developer and this has been a great HTML+CSS primer for me.

    • 578_Observer 10 hours ago
      It is not bad. It is mastery.

      You are treating the AI not as a tool, but as a "Material" (like wood or stone).

      A master carpenter works with the grain of the wood, not against it. You are adapting your architectural style to the grain of the AI model to get the best result.

      That is exactly what an Architect should do. Don't force the old rules (DRY) on a new material.

  • uaas 10 hours ago
    At first I thought CC wrote all of its own code, but this is about the engineer's contributions to CC, which is quite different.
  • rs_rs_rs_rs_rs 10 hours ago
    I'm sure it's unrelated (right guys? right?) but they had to revert a big update to CC this month.

    https://x.com/trq212/status/2001848726395269619

    • akmarinov 9 hours ago
      They didn't have to; they decided it would be more stable to revert it for the holidays, so they wouldn't be in the office fixing issues on Christmas.

      You can read more about it at https://steipete.me/posts/2025/signature-flicker

    • chrisjj 10 hours ago
      What percentage of his reversions this month were done by Claude? ;)
    • outside1234 10 hours ago
      Not sure why you are getting downvoted, but this IS the key worry: that people lose contact with the code and really don't understand what is going on, increasing “errors” in production (for some definition of error) that result in much more production firefighting, which in turn reduces the amount of time left to write code.
      • K0nserv 10 hours ago
        Losing contact with the code is definitely on my mind too. Just as writing can be a method of thinking, so can programming. I fear that only by suffering through the implementation will you realise the flaws of your solution. If this is done by an LLM, you are robbed of that opportunity and produce a worse solution.

        Still, I use LLM assisted coding fairly frequently, but this is a nagging feeling I have.

      • chrisjj 10 hours ago
        > Not sure why you are getting downvoted

        A: The comment is bad for business.

  • izacus 10 hours ago
    Cool, the person who financially benefits from hyping AI is hyping AI.

    What's with the ad here though?

    • cube00 10 hours ago
      The tweet from Dec 24 was interesting; why is Boris only now deciding to engage?

      I refuse to believe real AI conversations of any value are happening on X.

      > Hi I'm Boris and I work on Claude Code. I am going to start being more active here on X, since there are a lot of AI and coding related convos happening here.

      https://xcancel.com/bcherny/status/2003916001851686951

  • deafpolygon 8 hours ago
    Does that count as self-hosting?
  • outside1234 10 hours ago
    I mean, that’s possible, but the more interesting datapoint would be “and then how much did you have to delete and/or redo because it was slop”
  • binaryturtle 10 hours ago
    IMHO it's very misleading to claim that some LLM wrote all the code, when it's just a compression of thousands of people's code that led to this very LLM even having something to output.
    • throw-the-towel 10 hours ago
      Is a human engineer not the same way?
      • onion2k 10 hours ago
        No. LLMs can only reorder what they've seen in training data in novel ways. Humans can have original ideas that aren't in their training data. As a trivial example, John Carmack invented raycasting for Wolfenstein 3D. No matter how much prompting you could have given an LLM it could never have done that because there was no prior art for it to have been trained on.

        In pragmatic terms though, innovation like that doesn't happen often. LLMs could do the things that most developers do.

        That said, I don't agree with the notion that LLMs are simply generating content based on their training data. That ignores all the work of the AI devs who build systems to take the training data and make something that creates new (but not innovative) things from it.

        • clrflfclrf 10 hours ago
          Humans can have original ideas because they forget 99% of their input. I am of the opinion that there are no original ideas. Most of what most humans do is just remixing and reshaping, like a potter shaping clay.
          • verzali 8 hours ago
            So in the end you believe everything is just a remix of two rocks banging together?
          • chrisjj 10 hours ago
          > John Carmack invented raycasting for Wolfenstein 3D.

          No. He merely reimplemented it.
