81 points by ingve 7 hours ago | 28 comments
  • skybrian 4 hours ago
    > People who love using AI to create software are loving it because they don’t value the act of creating & understanding the software.

    This happens, but it's only one way to use a coding agent. I'm working on a small, personal project, but I ask it to do "code health" tasks all the time, like complicated refactorings, improving test coverage, improving the tooling surrounding the code, and fixing readability issues by renaming things. Project quality keeps getting better. I like getting the code squeaky clean and now I have a power washer.

    You do have to ask for these things, though.

    Some people like using hand tools and others use power tools, but our goals aren't necessarily all that different.

    • impulser_ 2 hours ago
      My favorite thing is just being able to talk through the code and the problem and have someone right there to respond. Even if it's not 100% right, it still gets you to think, and it's nice to have it push back on things you ask, etc. It's basically a coworker you can bug all day, and every time they're still happy to help.
    • jsty 3 hours ago
      Indeed, one of my favourite things about coding assistants is that I can now get an easy code review on my personal projects, or once I've thought through my approach have it think up alternatives I may not have stumbled on.

      I've found it very unsatisfactory (both experience and results) to use them to replace code production. But in terms of augmenting the process - used to critique, explore alternatives, surface information - they're getting really quite handy.

    • tripledry 3 hours ago
      And there are different contexts.

      In reality there are tons of tasks at work that are boring and time constrained. There are days I don't enjoy it, and days I do. It's not binary - I still love programming by hand but at times I let Agents work whilst reviewing the results.

    • AstroBen 2 hours ago
      Let's not just assume that we know whether AI is a hand tool or a power tool
    • officialchicken 3 hours ago
      It works fine for webapps and other slop-adjacent projects.

      If you try to do anything outside of typical n-tiered apps (e.g. implement a well documented wire protocol with several reference implementations on a microcontroller) it all falls apart very very quickly.

      If the protocol is even slightly complex then the docs/reqs won't fit in the context with the code. Bootstrapping / initial bring-up of a protocol should be really easy but Claude struggles immensely.
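
      To make the "bootstrapping a protocol" task concrete, here is a minimal sketch of a framed binary protocol in Python. The frame layout (big-endian magic, type, length, payload, XOR checksum) is invented purely for illustration and is not any real wire protocol:

```python
import struct

MAGIC = 0xA55A  # hypothetical frame marker, 2 bytes big-endian

def build_frame(msg_type: int, payload: bytes) -> bytes:
    """Frame = magic (2B) + type (1B) + length (2B) + payload + XOR checksum (1B)."""
    body = struct.pack(">HBH", MAGIC, msg_type, len(payload)) + payload
    checksum = 0
    for b in body:
        checksum ^= b
    return body + bytes([checksum])

def parse_frame(buf: bytes) -> tuple[int, bytes]:
    """Validate and decode one frame, returning (msg_type, payload)."""
    if len(buf) < 6:
        raise ValueError("frame too short")
    magic, msg_type, length = struct.unpack_from(">HBH", buf, 0)
    if magic != MAGIC:
        raise ValueError("bad magic")
    if len(buf) != 5 + length + 1:
        raise ValueError("length mismatch")
    checksum = 0
    for b in buf[:-1]:
        checksum ^= b
    if checksum != buf[-1]:
        raise ValueError("bad checksum")
    return msg_type, buf[5:5 + length]
```

      Even a toy framing like this accumulates validation rules quickly; a real protocol spec multiplies them, which is exactly where the context budget gets eaten.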

      • Uehreka an hour ago
        > (e.g. implement a well documented wire protocol with several reference implementations on a microcontroller)

        I have had an AI assistant reverse engineer a complex TCP protocol (3-simultaneous connections each with a different purpose, all binary stuff) from a bunch of PCAPs and then build a working Python server to speak that protocol to a 20-year-old Windows XP client. Granted, it took two tries: Claude Opus 4.1 (this was late September) was almost up to the task, but kept making small mistakes in its implementation that were getting annoying. So I started fresh with Codex CLI and GPT-5.1-Codex had a working version in a couple hours. Model and tool quality can have a huge impact on this stuff.

      • earthnail 3 hours ago
        I just vibe coded a VST. Runs a mix of realtime DSP and ML models. Really nontrivial stuff. It does exactly what I want.

        Claude Opus 4.5 is truly impressive.

      • PaulHoule 3 hours ago
        I hear people report the opposite.

        The sloppier a web app is, the more CSS frameworks are fighting for control of every pixel, and simply deleting 500,000 files to clear out your node_modules brings Windows to its knees.

        On the other hand, anything you can fit in a small AVR-8 isn't very big.

        Whatever you do, your mileage may vary.

        • skybrian 2 hours ago
          Yep, but I don’t intend to let that happen to my web app! It’s not that big and I intend to keep it that way.

          Dependencies are minimal. There’s no CSS framework yet and it’s a little messy, but I plan to do an audit of HTML tag usage, CSS class usage, and JSX component usage. We (the coding agent and I) will consider whether Tailwind or some other framework would help or not. I’ll ask it to write a design doc.

          I’m also using Deno which helps.

          Greenfield personal projects can be fun. It’s tough to talk about programming in the abstract when projects vary so much.

        • officialchicken 3 hours ago
          Given the amount of Arduino code that existed at the time LLMs were trained, I would have to agree that AVR-8 might be fine. For now it's on the Cortex-M struggle bus.
  • gherkinnn 3 hours ago
    > People who love using AI to create software are loving it because they don’t value the act of creating & understanding the software.

    Speak for yourself, OP. I have my gripes with LLMs but they absolutely can and will help me create and understand the code I write.

    > At least, they value it far less than the end result.

    This does not appear to apply to OP at all, but plenty of programmers who like code for the sake of code create more problems than they solve.

    In summary, LLMs amplify. The bad gets worse and the good gets better.

    • dgacmu 3 hours ago
      The thing that gets me is the assumption that we're not complex creatures who might each value different things at different times and in different contexts.

      As for me, sometimes I code because I want something to do a specific thing, and I honestly couldn't be bothered to care how it happens.

      Sometimes I code because I want something to work a very specific way or to learn how to make something work better, and I want to have my brain so deep in a chunk of code I can see it in my sleep.

      Sometimes the creative expression is in the 'what' - one of my little startup tasks these days is experimenting with UI designs for helping a human get a task done as efficiently as possible. Sometimes it's in the 'how' - implementing the backend to that thing to be ridiculously fast and efficient. Sometimes it's both and sometimes it's neither!

      A beautiful thing about code is that it can be a tool and it can be an expressive medium that scratches our urge to create and dive into things, or it can be both at the same time. Code is the most flexible substance on earth, for good and for creating incredible messes.

      • PaulHoule 3 hours ago
        I'll argue that the LLM can be a great ally when "I want to have my brain so deep in a chunk of code I can see it in my sleep" because it can help you see the whole picture.
    • simonw 3 hours ago
      Right - it's not that I don't value "the act of creating & understanding the software" - that's the part I care about and enjoy the most.

      The thing I don't value is typing out all of that code myself.

      • nobleach 3 hours ago
        I think I can get on board with this view. In the earlier LLM days, I was working on a project that had me building models of different CSVs we'd receive from clients. I needed to build classes that represented all the fields. I asked AI to do it for me and I was very pleased with the results. It saved me an hour-long slog of copying the header rows, pasting into a class, ensuring that everything was camel-cased, etc. But the key thing here is that that work was never going to be the "hard part". That was the slog. The real dopamine hit came from solving the actual problem at hand - parsing many different variants of a file and unifying the output in a homogeneous way.
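
        The header-to-fields chore described here can be sketched in a few lines of Python; the naming convention and helper names are illustrative, not the actual code from that project:

```python
import csv
import io
import re

def to_camel(name: str) -> str:
    """Normalize a CSV header like 'Order Date' or 'order_date' to 'orderDate'."""
    parts = [p for p in re.split(r"[\s_\-]+", name.strip()) if p]
    return parts[0].lower() + "".join(p.capitalize() for p in parts[1:])

def fields_from_csv(text: str) -> list[str]:
    """Read just the header row and return camel-cased field names."""
    header = next(csv.reader(io.StringIO(text)))
    return [to_camel(col) for col in header]

print(fields_from_csv("Order ID,Order Date,customer_name\n1,2024-01-01,Ann"))
# ['orderId', 'orderDate', 'customerName']
```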

        Now, if I had just said, "Dear Claude, make it so I can read files from any client and figure out how to represent the results in the same way, no matter what the input is", I can agree I _might_ be stepping into "you're not gonna understand the software"-land. That's where responsibility comes into play. Reading the code that's produced is vital. I, however, am still not at the point where I'm giving feature work to LLMs. I make a plan for what I want to do, and give the mundane stuff to the AI.

      • horsawlarway 3 hours ago
        Just to poke at this a bit -

        Isn't this a bit like saying you love storytelling, but you don't value actually speaking the words?

        Because this feels very close to skating across a line where you don't actually understand or value the real medium.

        Basically - the architectural equivalent of this leads to things like: https://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse

        Where the architects are divorced from the actual construction, and the end result is genuinely terrible.

        • simonw 3 hours ago
          Not typing every line of code myself doesn't divorce me from the construction.

          I frequently find that the code I write using agents is better code, because small improvements no longer cost me velocity or time. If I think "huh, I should really have used a different pattern for that but it's already in 100+ places around the codebase" fixing it used to be a big decision... now it's a prompt.

          None of my APIs lack interactive debugging tools any more. Everything that needs documentation is documented. I'm much less likely to take on technical debt - you take that on when fixing it would cost more time than you have available, but those constraints have changed for me.

          • horsawlarway 3 hours ago
            But... that's exactly the kind of thing I'm referring to.

            You're blanket replacing chunks of code without actually considering the context of each one.

            Personally - I still have mixed feelings about it. The Hyatt Regency walkway was literally one of the examples brought up in my engineering classes about the risks of doing "simple pattern changes". I'm not referencing it out of thin air...

            ---

            Havens Steel Company had manufactured the rods, and the company objected that the whole rod below the fourth floor would have to be threaded in order to screw on the nuts to hold the fourth-floor walkway in place. These threads would be subject to damage as the fourth-floor structure was hoisted into place. Havens Steel proposed that two separate and offset sets of rods be used: the first set suspending the fourth-floor walkway from the ceiling, and the second set suspending the second-floor walkway from the fourth-floor walkway.[22]

            This design change would be fatal. In the original design, the beams of the fourth-floor walkway had to support the weight of the fourth-floor walkway, with the weight of the second-floor walkway supported completely by the rods. In the revised design, however, the fourth-floor beams supported both the fourth- and second-floor walkways, but were strong enough for only 30% of that load.

            ---

            Just use a different pattern? In this case, the steel company also believed it was a quick pattern improvement... they avoided a complex installation issue with threaded rods. Too bad it killed some 114 people.

            • simonw 3 hours ago
              But I am considering the context of each one. It's just quicker not to modify the code by hand.

              I'm going to use a human comparison here, even though I try to avoid them. It's like having a team of interns who you explain the refactoring to, send them off to help get it done and review their work at the end.

              If the interns are screwing it up you notice and update your instructions to them so they can try again.

              • horsawlarway 3 hours ago
                I guess. And I don't mean that as a jab at you, I read a lot of your content and agree with quite a bit of it - I'm just personally conflicted here still.

                I've worked in a couple positions where the software I've written does actually deal directly with the physical safety of people (medical, aviation, defense) - which I know is rare for a lot of folks here.

                Applying that line of thinking to those positions... I find it makes me a tad itchy.

                I think there's a lot of software where I don't really mind much (ex - almost every SaaS service under the sun, most consumer grade software, etc).

                And I'm absolutely using these tools in those positions - so I'm not really judging that. I'm just wondering if there's a line we should be considering somewhere here.

                • simonw 2 hours ago
                  I've avoided working directly on safety critical software throughout my career because the idea that my mistakes could hurt people frightens me.

                  I genuinely feel less nervous about working on those categories of software if I can bring coding agents along for the ride, because I'm confident I can use those tools to help me write software that's safer and less likely to have critical bugs.

                  Armed with coding agents I can get to 100% test coverage, and perform things like fuzz testing, and get second and third opinions on designs, and have conversations about failure patterns that I may not personally have considered.

                  For me, coding agents represent the ability for me to use techniques that were previously constrained by my time. I get more time now.
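
                  As a sketch of how cheap that kind of testing gets: a seeded random fuzz loop over a round-trip invariant, here using the stdlib's quote/unquote purely as a stand-in target (any parse/serialize pair would slot in the same way):

```python
import random
import string
from urllib.parse import quote, unquote

def fuzz_roundtrip(trials: int = 1000, seed: int = 0) -> None:
    """Feed random strings through quote/unquote and check the round trip holds."""
    rng = random.Random(seed)  # fixed seed so any failure is reproducible
    for _ in range(trials):
        s = "".join(rng.choice(string.printable) for _ in range(rng.randrange(0, 40)))
        decoded = unquote(quote(s))
        assert decoded == s, f"round-trip failed for {s!r}"

fuzz_roundtrip()  # completes silently when the invariant holds
```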

        • quest88 3 hours ago
          No, I don't think so. But my context is different as is anyone's reply about their LLM usage.

          I'm still creating software but with English that's compiled down to some other language.

          I'm personally comfortable reading code in many languages. That means I'm able (hopefully!) to spot something that doesn't look quite right. I don't have to be the one pressing keys on the keyboard, but I'm still accountable for the code I compile and submit.

      • lifeisoverforme 2 hours ago
        [flagged]
    • lifeisoverforme 2 hours ago
      I think you’re in the pleading stage. The AI tools this year will do the whole process without you. Reading the code will be a luxury for little benefit.
    • szundi 3 hours ago
      [dead]
  • lokimedes 3 hours ago
    There may be a developmental arc to this. I once enjoyed programming, in all its forms. I loved to express my ideas as projects. Now, after three decades of programming, I have seen 95% of the problems I have to solve in any given project before, and that enjoyment is no longer there. So for me, Claude Code is simply excellent. I was once a Systems Engineer, so writing specs, requirements and architecture documents is second nature to me, and I can easily review the code it comes up with - but I no longer need to write CRUD, boilerplate and all that jazz, not to mention managing dependencies and version creep in libraries.

    For an old grey beard, this is actually fine, but if you're still in love with coding, it must be a loss.

  • MrScruff 4 hours ago
    I think this is a reasonable article and I do understand the perspective. It's a normal sentiment from those witnessing a craft that they have invested time in mastering become partially automated. However obviously the big picture (as the article alludes to) is that none of us are paid to write software, we're paid to solve problems as efficiently as possible and these tools, used wisely, can massively help with that.
    • netdevphoenix 3 hours ago
      > we're paid to solve problems as efficiently as possible and these tools, used wisely, can massively help with that.

      They can also help reduce the number of devs a company needs to operate, and arguably reduce the skill level needed to make software changes that provide business value. While it is true that these tools could lead to companies keeping their staff count and increasing productivity, the reality is that the productivity increase isn't big enough to offset the expense of a big dev team, while the tools are arguably increasing productivity enough that you can do more with less.

      And yes, companies laying off folks because of AI might suffer in the next decade or two. But that won't save _you_ from being unemployed _now_ and struggling to find a role until the tide shifts. The market can afford to remain irrational far longer than your average dev can afford to remain unemployed.

  • PaulHoule 3 hours ago
    I don't feel this way, but that's because I see them as the coding buddy I never had rather than a slave that knocks off tickets for me.

    Now I can try two or three implementations of something that I can throw away in the process of really understanding a problem and then quickly do it right.

    Instead of spending a day tracking down a problem in my mental blind spot I have a 70% chance of getting the answer in two minutes.

    Instead of overthinking the documentation I can have a prototype running in 10 minutes.

  • sbinnee 4 hours ago
    I enjoyed writing git commit messages. I value good commit messages. But at work I recently created a slash command (for those who don’t know, it’s like a snippet for anything) that writes a git commit for staged changes with optional jira tags. Although I always end up amending it, it is so convenient. Does it mean I don’t value good git commit messages? I don’t know… I try my best to review every commit at least.
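    A sketch of the Jira-tag part of such a helper; the branch-naming convention (ticket key embedded in the branch name, e.g. 'feature/ABC-123-add-login') is an assumption for illustration, not sbinnee's actual setup:

```python
import re

def commit_subject(branch: str, summary: str) -> str:
    """Prefix a commit summary with a Jira ticket parsed from the branch name, if any.

    The branch scheme (hypothetical: 'feature/ABC-123-add-login') is an assumption;
    adjust the pattern to your team's conventions.
    """
    match = re.search(r"\b([A-Z][A-Z0-9]+-\d+)\b", branch)
    if match:
        return f"[{match.group(1)}] {summary}"
    return summary

print(commit_subject("feature/ABC-123-add-login", "Add login form"))
# [ABC-123] Add login form
```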
  • williamcotton 4 hours ago
    Here’s a big difference.

    I value writing and playing my own music.

    Most are happy enough to listen to other people’s music.

    Somehow with music this isn’t a religious war.

    • tarwich an hour ago
      Something about this prose feels like a short poem of sorts. I like it.
    • rewgs 4 hours ago
      I'm sorry to burst your bubble but it absolutely is. The pro-Suno vs anti-Suno discussions are just as heated as those in programming.
      • neogodless 4 hours ago
        I have a co-worker who has spoken excitedly about creating AI-generated music. I listened to him talk (and to some brief music clips) and didn't tell him I have no interest in it, because he seems passionate about it. But it does not interest me.

        My point is, though, it occurred to me why he's excited about it. He has no ability whatsoever to write music in notes, or song lyrics. But with his tool, he's able to make music that he finds decent enough to feel excited about helping to shape it.

        No criticism to those who can't do a thing on their own, but are excited to be able to do it with a tool. And yes, you can certainly elaborate on and debate craftsmanship, and the benchmarks and measures of quality of an end result when made through expert skill and care, or by amateurs with a powerful (and perhaps imperfect or imprecise) tool.

        So personal anecdote, using generative code has not interested me personally, because I love writing code, and I'm very good at it, and I'm very fast. Of course machines can do things faster than me (once I learn the different skill of prompting), but speed hasn't really been a massively limiting factor for me when trying to build things. (There are lots of other things that can get in my way!)

        I'm reminded of the oft used quote, "He who can, does; he who cannot, teaches" - George Bernard Shaw. (Just, now the teaching is that of a machine, who then does.)

        • jcims 3 hours ago
          There's one interesting phenomenon that I noticed in myself and others when generating music with AI. You develop this kind of outsized emotional connection with it: even though your contribution to the 'work' was minimal, the fact that you saw this (arguably not) new 'thing' come into being creates an atypical bond. Not that it's 'mine', but that it's this beautiful thing.

          So you (or in this case I) get all excited about how fantastic it is, but others who hear it are just kind of 'meh'. The only way I know this is listening to songs shared with that same exuberance by others, and to me they are 'meh'.

          I shared this sentiment with some folks and one person said 'yeah, you should try writing your own music sometime...same thing happens' xD

      • gibsonsmog 4 hours ago
        Yeah, seriously. I've seen musicians nearly come to blows over tube vs solid state amps. Music has even more anger associated with brands and technique than gaming or tech. It's just not flooding the algos like AI currently is
      • earthnail 3 hours ago
        It’s even more heated in music. The copyright problem is bigger there than in any other domain afaics.
      • williamcotton 4 hours ago
        I guess I am in my own bubble because I have never heard of Suno until now and I have yet to come across these heated discussions!
        • vee-kay 4 hours ago
          Same here, I never heard of Suno until now. Looks like it is some AI-based music generation app, by Warner Bros.

          Interestingly, "Suno" in one of the world languages (Hindi) means "listen".

  • godzillafarts 3 hours ago
    > People who love using AI to create software are loving it because they don’t value the act of creating & understanding the software.

    That's a pretty sweeping generalization. Just because I don't value the act of typing into a keyboard doesn't mean that I don't value the craft of creating and understanding software. I am not outsourcing my understanding to the LLM, I am outsourcing the typing of the code.

    What you are describing is not engineering, it's (pardon the phrase) vibe coding. Claude Code is just a tool, and everyone is going to use that tool differently. There is nothing inherent in the tool itself that requires you to surrender your agency and understanding. If you do, that's on you, not Claude.

    • alexalx666 an hour ago
      an investment in a fast typing course seems like a significantly more effective, and more ethical solution for you compared to chatting up remote bots that require nuclear power to operate economically
  • Robdel12 3 hours ago
    I’ve always enjoyed creating working things for people to use. Which is why I love LLM coding agents.

    Is it wild the thing I took a while to learn is now done basically by a GPU? Yes. But hand writing code is not my identity. Never has been.

  • Thews 3 hours ago
    I liked the summary of what you do besides write code, and those things are enjoyable to me too. Understanding something better by writing code that unravels the mystery is a treat, but also sometimes frustrating.

    I still do enjoy having an LLM help me through some mental roadblocks, explore alternatives, or give me insight on patterns or languages I'm not immediately familiar with. It speeds up the process for me.

  • John23832 3 hours ago
    > People who love using AI to create software are loving it because they don’t value the act of creating & understanding the software.

    Why assign YOUR feeling about an interaction to everyone else? LLM coding agents are a tool for me to investigate and learn about what I'm doing. For others they can be a therapist, or an organizational tool, or whatever.

  • netdevphoenix 3 hours ago
    > Ultimately, I don’t want my computer’s OS to be vibe-coded, nor my bank’s systems, nor my car software.

    The reality is that the vast majority of software is nowhere near those levels of priority, aside from the big social media apps. Like many jobs, many apps out there are helpful because they increase the money flow in the economy, not because they are critical.

  • mhb 3 hours ago
    The origin of the Steinmetz anecdote is less clear than the essay's footnote would suggest:

    https://quoteinvestigator.com/2017/03/06/tap/

  • elcapitan 3 hours ago
    > People who love using AI to create software are loving it because they don’t value the act of creating & understanding the software.

    I'm as critical of AI code generation as the next guy, but unfortunately we live in a world of lots of accidental complexity forced on us, and it's not surprising that a lot of people are simply relieved that they can leave some of the boring and frankly exhausting footwork to collect all the necessary boilerplate to some assistant. That allows them to focus on high level understanding and the actual creative part, rather than the opposite, as OP suggests here.

  • lifeisoverforme 2 hours ago
    I like learning in the pursuit of a goal. I’m greedy. If there’s no learning then I can’t care about the goal, and if there’s no goal then I can’t care about the learning. Actually I see no reason to live at all without this. The evidence is in that most of you guys value the goal above all else, so I’m just going to get outmoded. I think with that inevitability I can either make this short or I can let it be painful.
  • stokedbits 4 hours ago
    I get the author's spirit in this article, but a statement like “People who love using AI to create software are loving it because they don’t value the act of creating & understanding the software” just kinda comes off as insulting.

    This comes with the assumption that everyone is vibe coding. Which just isn’t the case in the professional circles I’m part of. People are steering tools and putting up guard rails to save time typing. The “creating” part of coding has very little to do with the code, in fact my perspective is that it’s the most insignificant part of creating software.

    At the end of the day, how code gets into a pr doesn’t matter. A human should be responsible to review, correct, and validate the code.

    • kranner 3 hours ago
      > The “creating” part of coding has very little to do with the code, in fact my perspective is that it’s the most insignificant part of creating software. At the end of the day, how code gets into a pr doesn’t matter. A human should be responsible to review, correct, and validate the code.

      FWIW I find these statements trivializing of the craft and the passion. Some of us do like the craft of creating a massive structure that we understand from the pylons to the nuts and bolts. Reviewing AI-generated code doesn't bring us close to the understanding of the problem that comes from having solved the problem ourselves.

      If this level of detail-orientation doesn't interest you, that's fine, and it perhaps shouldn't bother you to have someone say that? We can agree these are subjective values.

      • stokedbits 3 hours ago
        That’s fair and I do enjoy that part of it as well. It’s just that I think it’s trivial and I’ve been coding since about 7-8 years old and been in the industry professionally for over 20 years. My underlying point I left out is that I already understand most of the problems I’m trying to solve from a coding perspective.

        I’d much rather get into the intricacies of the business use cases, game mechanics, architectural paradigms, than to focus on typing something I’ve done dozens of times before. I think that’s where I’m at with it.

        • kranner 3 hours ago
          > My underlying point I left out is that I already understand most of the problems I’m trying to solve from a coding perspective

          OK, in that case you make a fair point. I'm not averse to the typing autocompletion either. But most of the work I've been involved with has been research-oriented where the AI's offer to help solve the problem is neither welcome nor useful. So it's a different orientation altogether.

  • ribice 3 hours ago
    Posted a similar thing yesterday: https://news.ycombinator.com/item?id=46719579
  • nathell 4 hours ago
    This resonates with my own anxiety: https://blog.danieljanus.pl/2025/12/27/llms/
    • caleb-allen 3 hours ago
      Thanks for linking that, it resonates deeply with me. It's an odd place we find ourselves
  • Havoc 3 hours ago
    To each their own. I’m certainly having fun building things and iterating faster on personal tools and projects
  • agentultra 3 hours ago
    If you have to review the code before you ship it, because you have to take responsibility for it, then you have to know how to code it yourself.

    I, personally, don't understand what it is people enjoy about using these tools. I find them tedious, boring, and they often make me angry: subtle bugs and outright lies in the output and no prompt can resolve the problem so I end up having to fix it by hand anyway. It's not pleasant for me.

    But other people do and while I don't get it I try not to yuck in their yum too much.

    There is no empathy from companies though. They don't care about code and never have.

    They should though. Every line of code is a liability. And now we have tools that can generate liability on demand faster than a team of dedicated humans who are trying to be conservative about managing that liability. You still have to be careful of course but now you're taking responsibility for what the machine generates. You're not the one driving anymore and using a tool. You're a tool being used. At least, that's how I feel about it.

    Of course to capital holders and investors this doesn't matter, so we may end up being forced to use these tools even if they're not sufficient or useful. Even if they generate liability. We're rather good, as an industry, at deflecting the consequences of liability.

  • iosovi 4 hours ago
    The line "for me that just throws out the baby with the bathwater" made my day
  • tosh 3 hours ago
    coding agents are fantastic for learning more about computers

    they can not only generate code but also explain code, concepts, architecture and show you stuff

    great learning tool

  • naasking 3 hours ago
    > People who love using AI to create software are loving it because they don’t value the act of creating & understanding the software.

    Most people have a car just to get around, it's true, and some people love tinkering with cars and engines. The car you get around in doesn't have to be the same car as the one you tinker with.

  • pjmlp 4 hours ago
    I am fully in line with the sentiment expressed in the post, and can't wait for the whole "must use AI" bubble to implode.

    Also, the whole "I am more productive" vibe: well, management will happily reduce team size. It has always been about doing more with less, and now the robots are here.

    Each day one step closer to have software development reach the factory level.

    Yes, some will be left around to maintain the robots, or to do the little things the robots still aren't able to perform (until they are), and a couple of managers.

    All the rest, I guess there are other domains where robots haven't yet taken over.

    I for one am happier to be close to retirement than hunting for junior jobs straight out of a degree; it is going to get tough out there.

  • RcouF1uZ4gsC 4 hours ago
    > People who love using AI to create software are loving it because they don’t value the act of creating & understanding the software.

    This is very similar to the statement - People who love using Python (or other language not C or assembly) to create software are loving it because they don’t value the act of creating & understanding the software.

    • kranner 3 hours ago
      There is a category error here. Python and its standard library provide deterministic abstractions; they don't aim to solve the entire problem on your behalf.
    • christophilus 3 hours ago
      C and Python are deterministic. LLMs are not. It’s not an apt comparison.
  • davydm 6 hours ago
    This post brings up a lot of (imo true) points that I honestly can't share with the ai-lovers at work because they will just get in a huff. But the OP is right - we automate stuff we don't value doing, and the people automating all their code-gen have made a very clear statement about what they want to be doing - they want _results_ and don't actually care about the code (which includes ideas like testing, maintainability, consistent structure, etc).

    It's extra hilarious to hear someone you _thought_ treated their code work as a craft refer to "producing 3 weeks worth of work in the last week", because (a) I don't believe it, not one bit, unless you are the slowest typist on earth, and (b) it clearly positions them as a code _consumer_, not a code _creator_, and they're happy about it. I would not be.

    Code is my tool for solving problems. I'd rather write code than _debug_ code - which is what code-gen-bound people are destined to do, all day long. I'd rather not waste the time on a spec sheet to convince the llm to lean a little towards what I want.

    Where I've found LLMs useful is in documentation queries, BUT (and it's quite a big BUT) they're only any good at this when the documentation is unchanging. Try asking it questions about nuances of the new extension syntax in C# between dotnet 8 and dotnet 10 - I just had to correct it twice in the same session, on the same topic, because it confidently told me stuff that would not compile. Or take the Elasticsearch client documentation - the REST side has remained fairly constant, but if you want help with the latest C# library, you have to remind it of that fact all the time - not because it doesn't have any information on the latest stuff, but because it consistently conflates old docs with new libraries. An attempt to upgrade a project from webpack4 to webpack5 had the same problems - the LLM confidently telling me to do "X", which would not work in webpack 5. And the real kicker is that if you can prove the LLM wrong (eg respond with "you're wrong, that does not compile"), it will try again, and get closer - but, as in the case with C# extension methods, I had to push on this twice to get to the truth.

    Now, if they can't reliably get the correct context when querying documentation, why would I think they could get it right when writing code? At the very best, I'll get a copy-pasta of someone else's trash, and learn nothing. At the worst, I'll spin for days, unless I skill up past the level of the LLM and correct it. Not to mention that the bug rate in suggested code that I've seen is well over 80%. I've had a few positive results, but a lot of the time, if it builds, it has subtle (or flagrant!) bugs - and, as I say, I'd rather _write_ code than _debug_ someone else's shitty code. By far.

    • w4yai4 hours ago

        > we automate stuff we don't value doing, and the people automating all their code-gen have made a very clear statement about what they want to be doing - they want _results_ and don't actually care about the code (which includes ideas like testing, maintainability, consistent structure, etc)
      
      Not necessarily. I sometimes have a very clear vision of what I want to build, all the architecture, design choices, etc. It's simply easier to formalize a detailed design/spec document + code review if everything follows what I had in mind, than typing everything myself.

      It's like the "bucket" tool in Paint. You don't always need to click pixel by pixel if you already know what you want to fill.

      • layer84 hours ago
        I don’t think the analogy holds, because the result of a flood fill in Paint is deterministic.

        Whatever your design document/spec, there are generally a lot of ways and variations of how to implement it, and programmers like the OP do care about those.

        You don’t have Paint perform the flood fill five times and then pick the result you like the most (or dislike the least).
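        To make the determinism point concrete, here's a minimal sketch (my own illustration, not from any of the comments) of a textbook BFS flood fill in Python. It's a pure function of its inputs, so running it five times over the same grid always yields the identical result - which is exactly the property an LLM-generated implementation doesn't offer across generations.

```python
from collections import deque

def flood_fill(grid, start, new_color):
    """Classic BFS flood fill: replace the start cell's color and every
    4-connected cell of the same color. Pure function: returns a new grid,
    and the same inputs always produce the same output."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    old_color = grid[r0][c0]
    result = [row[:] for row in grid]  # copy; the input grid is untouched
    if old_color == new_color:
        return result
    queue = deque([(r0, c0)])
    result[r0][c0] = new_color
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and result[nr][nc] == old_color:
                result[nr][nc] = new_color
                queue.append((nr, nc))
    return result

grid = [
    [0, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
]
# Run it five times: every result is identical, no picking the best of five.
runs = [flood_fill(grid, (0, 0), 9) for _ in range(5)]
assert all(run == runs[0] for run in runs)
```

        The same determinism holds for a compiler or for Paint's bucket tool, which is why the analogy breaks down for LLM output.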

        • w4yai3 hours ago

            > Whatever your design document/spec, there are generally a lot of ways and variations of how to implement it, and programmers like the OP do care about those.
          
          You could make the same argument about compilers: whatever code you wrote, your compiler may produce assembly instructions in a nondeterministic way.

          Of course, there are many ways to write the same thing, but the end performance is usually the same (assuming you know what you are doing).

          If your spec is strong enough to hold between different variations, you shouldn't need to worry about the small details.

          • layer810 minutes ago
            > You could make the same argument about compilers: whatever code you wrote, your compiler may produce assembly instructions in a nondeterministic way.

            The difference is that the compiler is bound by formal (or quasi-formal) language semantics. In terms of language semantics, you always get precisely the same result, regardless of how the compiler implements it. When you change the source code, you can reason about, and predict with precision, how this will change the behavior of your compiled program. You can’t do that reasoning with AI prompts; they don’t have that level of predictability.

          • retsibsi3 hours ago
            > You could make the same argument about compilers: whatever code you wrote, your compiler may produce assembly instructions in a nondeterministic way.

            Bit of a stretch, I think, because the compiler guarantees it will follow the language spec. The LLM will be influenced by your spec but there are no guarantees.

      • samusiam4 hours ago
        Couldn't agree more. It's also like managing a team of engineers rather than doing the coding yourself. You don't necessarily value the work less, nor do you necessarily have less technical prowess. You're just operating at a higher level.
    • ElatedOwl4 hours ago
      > This post brings up a lot of (imo true) points that I honestly can't share with the ai-lovers at work because they will just get in a huff. But the OP is right - we automate stuff we don't value doing, and the people automating all their code-gen have made a very clear statement about what they want to be doing - they want _results_ and don't actually care about the code (which includes ideas like testing, maintainability, consistent structure, etc).

      I haven't run into this type yet, thankfully. As an AI lover, the architecture of the code is more important to me than before.

      * It’s harder to understand code you didn’t write line by line, readability is more important than it was before.

      * Code is being produced faster and with lower bars; code collapsing under its own shitty weight becomes more of a problem than it was before.

      * Tests/compiler feedback helps AI self correct its code without you having to intervene; this is, again, more important than it was before.

      All the problems I liked thinking about before AI are how I spend my time. Do I remember specific ActiveRecord syntax anymore? No. But that was always a Google search away. Do I care about what those ORM calls actually generate SQL wise and do with the planner? Yes, and in fact it’s easier to get at that information now.

  • pembrook4 hours ago
    I think we can boil down the sentiment in this article to roughly this pattern:

    - Person learns how to do desirable hard thing

    - Person forms identity around their ability to do hard thing

    - Hard thing is so desired that people work to make it easier

    - Hard thing becomes easy, lowering the bar so everybody can do it (democratization)

    - Good for society, but individual person feels their identity, value and uniqueness in the world challenged. Sour grapes, cope, and bitterness follow.

    The key is to not form your identity around "Thing." And if you have done so, now is the time to broaden this identity and become a more well rounded individual instead of getting bitter.

    You should form your identity around more lasting/important things like your values, your character, your family, your community, and the general fact that you can provide value to those around you in many ways.

  • mexicocitinluez4 hours ago
    I think these types of articles miss the point. It's not about not loving what I do, or not being interested in problem solving. It's about time.

    For instance, I use a React form library called Formik. It's outdated and hasn't seen a real commit in years. The industry has moved on to other form libraries, but I really like Formik's API and have built quite a bit of functionality around it. And while I don't necessarily need a library to be actively worked on to use it, in this instance, its lack of updates has caused it to fall behind in terms of performance and new React features.

    The issue is that I'm also building a large, complex project and spend 80-90% of my waking time on that. So what do I do? Do I just accept it and move on? Take the time to migrate to a form library that may very well be out of date in a year when React releases again? Or do I experiment with Claude by asking it to update Formik to use the latest React 19 features? Long story short, I did the latter and have a new version of Formik running in my app. And during that, I got to watch and ask Claude what updates it was making and, most importantly, why it was making those updates. Is it perfect? No. But it's def better than before.

    I love programming. I love building stuff. That doesn't change for me with these tools. I still spend most of my time hand-writing code. But that doesn't mean there isn't a place for this tech.

    • whilenot-dev3 hours ago
      How is the article missing your point though? For example, right in the beginning of the article:

      > I’ll likely never love a tool like Claude Code, even if I do use it, because I value the task it automates. [...] Like other technologies, AI coding tools help us automate tasks: specifically, the ones we don’t value.

      Where the article talks about value, you're talking about time [savings] - but you both actually mean the same thing: Getting a fair amount of value for the time spent.

      I also don't seem to get your React Formik example... programming isn't solely about "SemVer numbers going up"; it's about designing powerful abstractions for recurring problems. Being on the consuming side of a UI form library is something different from designing its API.

      For one thing, I'm sure stable products have been built with Formik@1.0.0^ (it's at @2.4.9 currently). For another thing, I don't think doing the manual labor of playing a smarter dependabot is as valuable as you think it is. Formik still has 3 million weekly downloads with its latest release being 2 months old; why don't you upstream your changes?

      • anthonylevine3 hours ago
        Gonna share this with r/reactjs so everyone gets a laugh. Thanks.
      • mexicocitinluez3 hours ago
        > Where the article talks about value, you're talking about time [savings] - but you both actually mean the same thing: Getting a fair amount of value for the time spent.

        This is straight from the article: "People who love using AI to create software are loving it because they don’t value the act of creating & understanding the software." How is my response that this is missing the point wrong? I have no personal feelings about AI. I don't "love" it. And I also value the act of creating and understanding software, but I don't have the time to do all of that. So, I'm failing to see what point you're making.

        > programming isn't solely about "SemVer numbers going up",

        Did you read my post at all? What on earth does this have to do with Formik using legacy APIs and not being as performant as the other options?

        > it's about designing powerful abstractions for (re-)occurring problems. Being on the consuming side of a UI form library is something different from designing its API.

        Again, did you read my post at all?

        > For one thing, I'm sure stable products have been build with Formik@1.0.0^ (it's at @2.4.9 currently).

        What? What does this have to do with it being years behind current React features? Do you even use React? Don't tell me you're arguing about a React form library while not actively using React?

        > For a second thing, I don't think doing the manual labor of playing a smarter dependabot is as valuable as you think it is

        lol What?

        > Formik still has 3 million weekly downloads with its latest release being 2 months old, why don't you upstream your changes

        This is what happens when you just Google things and think you know everything. Just a quick question: that last "release", what did it include? Actually, take this a step further: in the last 2 years, what major updates were released?

        > Formik still has 3 million weekly downloads with its latest release being 2 months old, why don't you upstream your changes

        Who said I wasn't lol? What is wrong with you? Not only have you completely misinterpreted what I've said (while not having any relevant experience in the area), you're now accusing me of things.

        What an absolutely ridiculous reply.

        https://github.com/jaredpalmer/formik/tree/v2.1.6/packages/f...

        lol "Formik just had a release" you don't know what you're talking about.

        • whilenot-dev2 hours ago
          > Who said I wasn't lol? What is wrong with you?

          Are you Copilot, github-actions[bot], or jaredpalmer himself? (ref: https://github.com/jaredpalmer/formik/graphs/contributors?fr...)

          > lol "Formik just had a release" you don't know what you're talking about.

          GitHub can be difficult to navigate, I guess you wanted to link to the release page: https://github.com/jaredpalmer/formik/releases/tag/formik%40...

          • mexicocitinluezan hour ago
            me: "I'm working on a Formik update to the newest React features with Claude"

            you: "WHY AREN'T YOU ON THE CONTRIBUTOR LIST!>!?!?!"

            The irony of sharing that page while it obviously shows the repo hasn't been maintained. lol.

            lol go ahead, look at that commit. What did it do? And what about the one prior to that? Explain to me in your own words (no AI) how Formik has kept up to date with new React features.

            > Are you Copilot, github-actions[bot], or jaredpalmer himself? (ref: https://github.com/jaredpalmer/formik/graphs/contributors?fr...)

            But since you're on the repo, go take a look at the issues and discussions log. Go search for "Is this repo dead". Go read any number of the 10,000 comments about forms in React on any social media site of your choosing. If Jared works at Vercel and Formik, according to you, is still in "active development", why would they use RHF (React Hook Form)?

            You're not a serious person if you think you can Google a few things and automatically understand the form ecosystem in React.