206 points by jannesan 3 days ago | 49 comments
  • kassner 2 days ago
    > I've never been more productive

    Maybe it’s because my approach is much closer to a Product Engineer than a Software Engineer, but code output is rarely the reason why projects I work on are delayed. All my productivity issues can be attributed to poor specifications, or to problems that someone just threw over the wall. Every time I’m blocked, it’s because someone didn’t make a decision on something, or because no one thought far enough ahead to see that the decision was needed.

    It irks me so much when I see the managers of adjacent teams pushing for AI coding tools when the only thing the developers know about the project is what was written in the current JIRA ticket.

    • pards 2 days ago
      > code output is rarely the reason why projects I work on are delayed

      This is very true at large enterprises. The pre-coding tasks [0] and the post-coding tasks [1] account for the majority of elapsed time that it takes for a feature to go from inception to production.

      The theory of constraints says that optimizations made to a step that's not the bottleneck will only make the actual bottleneck worse; the quick back-of-the-envelope after the footnotes shows how little optimizing the coding step moves the total.

      AI is no match for a well-established bureaucracy.

      [0]: architecture reviews, requirements gathering, story-writing

      [1]: infrastructure, multiple phases of testing, ops docs, sign-offs
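
      To make that concrete, here is a back-of-the-envelope with purely illustrative numbers (not from any real project): if coding is 2 of the 12 weeks a feature spends from inception to production, even a 2x coding speedup shaves only about 8% off the total.

          total_weeks = 12.0   # illustrative inception-to-production elapsed time
          coding_weeks = 2.0   # the slice AI coding tools can actually speed up
          speedup = 2.0        # assume AI halves coding time

          new_total = total_weeks - coding_weeks + coding_weeks / speedup
          print(new_total)                      # 11.0 weeks
          print(1.0 - new_total / total_weeks)  # ~0.083, i.e. ~8% faster end to end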

      • xen2xen1 2 days ago
        Interesting point. Does that mean AI will favor startups or startup-like places? New tools often seem to favor smaller, less established shops.
      • mountainriver a day ago
        Disagree. It's normally the integration and alignment of systems that takes a long time, e.g. you're forced to use product X, but it's missing a feature you need, so you have to wait on them.
    • CM30 2 days ago
      Yeah, something like 95% of project issues are management and planning issues, not programming or tech ones. So often projects start without anyone on the team researching the original problem or what their users actually need, and then the whole thing gets hastily rejigged midway through development to fix that.
    • inerte 2 days ago
      aka https://en.wikipedia.org/wiki/No_Silver_Bullet

      And it's also interesting to think that PMs are using AI too. In my company, for example, we allow users to submit feedback, and an AI summary report is sent to the PMs, who then put the report into ChatGPT along with the organizational goals, the key players, and previous meeting transcripts, and ask the AI to weave everything together into a PRD, or even a 10-slide presentation.

    • doug_durham 2 days ago
      I agree with you that traditionally that is the bottleneck. But think about why poor specifications are a problem: it's because software is so costly and time-consuming to create. Many times the stakeholders don't know that something isn't right until they can actually use it. What if it took 50% less time to create code? Code becomes less precious, and throwing away failed ideas isn't as big an issue. Of course, it is trivially easy to think of cases where this could also lead to never shipping your code.
    • d0liver 2 days ago
      I feel this. As a dev, most of my time is spent thinking and asking questions.
    • api 2 days ago
      For most software jobs, knowing what to build is harder than building it.

      I’m working hard on building something right now that I’ve had several false starts on, mostly because it’s taken years for us to totally get our heads around what to build. Code output isn’t the problem.

  • hedgew 3 days ago
    > Why bother playing when I knew there was an easier way to win? This is the exact same feeling I’m left with after a few days of using Claude Code. I don’t enjoy using the tool as much as I enjoy writing code.

    My experience has been the opposite. I've enjoyed working on hobby projects more than ever, because so many of the boring and often blocking aspects of programming are sped up. You get to focus more on higher-level choices, overall design, and code quality, rather than on searching for specific usages of libraries or applying other minutiae. Learning is accelerated, and the loop of making choices and seeing code generated for them is a bit addictive.

    I'm mostly worried that it might not take long for me to be a hindrance in the loop more than anything. For now I still have better overall design sense than AI, but it's already much better than I am at producing code for many common tasks. If AI develops more overall insight and sense, and the ability to handle larger code bases, it's not hard to imagine a world where I no longer even look at or know what code is written.

    • siffin 2 days ago
      Everyone has different objective and subjective experiences, and I suspect some form of selection will promote those who more often feel excited and relieved by using AI over those who more often experience it as a negative, as though it challenges some core aspect of the self.

      It might challenge us, and maybe those of us who feel challenged in that way need to rise to it, for there are always harder problems to solve.

      If this new tool seems to make things so easy it's like "cheating", then make the game harder. Can't cheat reality.

      • palata 2 days ago
        Even without AI, I have been in a company where the general mentality was to "ship bad software, but quickly". Without going into the debate of whether it was profitable in the long term or not (spoiler: it was not), my problem was the following:

        I would try to build something "good" (not "perfect", just "good", like modular or future-proof or just not downright malpractice). But while I was doing this, others would build crap. They would do it so fast I couldn't keep up. So they would "solve" the problems much faster. Except that over the years, they just accumulated legacy and had to redo stuff over and over again (at some point you can't throw crap on top of crap, so you just rebuild from scratch and start with new crap, right?).

        All that to say, I don't think that AIs will help with that. If anything, AIs will help more people behave like this and produce a lot of crap very quickly.

  • palata 2 days ago
    The calculator made it less important to be relatively good with arithmetic. Many people just cannot add or subtract two numbers without one. And it feels like they lose intuition, somehow: if numbers don't "speak" to you at all, can you ever realize that 17 is roughly a third of 50? The only way you realise it with a calculator is if you actually look for it. Whereas if you can count, it just appears to you.

    Similar with GPS and navigation. When you read a map, you learn how to localise yourself based on landmarks you see. You tend to get an understanding of where you are, where you want to go and how to go there. But if you follow the navigation system that tells you "turn right", "continue straight", "turn right", then again you lose intuition. I have seen people following their navigation system around two blocks to finally end up right next to where they started. The navigation system was inefficient, and with some intuition they could have said "oh actually it's right behind us, this navigation is bad".

    Back to coding: if you have a deep understanding of your codebases and dependencies, you may end up finding that you could actually extract some part of one codebase into a library and reuse it in another codebase. Or that instead of writing a complex task in your codebase, you could contribute a patch to a dependency and it would make it much simpler (e.g. because the dependency already has this logic internally and you could just expose it instead of rewriting it). But it requires an understanding of those dependencies: do you have access to their code in the first place (either because they are open source or belong to your company)?

    Those AIs obviously help writing code. But do they help getting an understanding of the codebase to the point where you build intuition that can be leveraged to improve the project? Not sure.

    Is it necessary, though? I don't think so: the tendency is that software becomes more and more profitable by becoming worse and worse. AI may just help writing more profitable worse code, but faster. If we can screw the consumers faster and get more money from them, that's a win, I guess.

    • nthingtohide 2 days ago
      > Back to coding: if you have a deep understanding of your codebases and dependencies, you may end up finding that you could actually extract some part of one codebase into a library and reuse it in another codebase.

      I understand the point you are making. But what makes you think refactoring won't be AI's forte? Maybe you could explicitly ask for it. Maybe you could ask it to minify while staying human-understandable, and that would achieve the refactoring objectives you have in mind.

      • palata 2 days ago
        I don't get why you're being downvoted here.

        I don't know that AI won't be able to do that, just like I don't know that AGI won't be a thing.

        It just feels like it's harder to have the AI detect your dependencies, maybe browse the web for their sources (?) and offer to make a contribution upstream. Or would you envision downloading the sources of all the dependencies (transitive ones included) and telling the AI where to find them? And giving it access to all the private repositories of your company?

        And then, upstreaming something is a bit "strategic", I would say: you have to be able to say "I think it makes sense to have this logic in the dependency instead of in my project". Not sure if AIs can do that at all.

        To me, it feels like it's at the same level of abstraction as something like "I will go with CMake because my coworkers are familiar with it", or "I will use C++ instead of Rust because the community in this field is bigger". Does an AI know that?

        • fragmede a day ago
          With Google announcing that they'll let customers run Gemini in their own datacenters, the privacy issue goes away. I'd love it if there was an AI trained on my work's proprietary code.
      • fallingknife 2 days ago
        Perhaps it will, but right now I find it much better at generating code from scratch than refactoring.
  • vertnerd 2 days ago
    I'm a little older now, over 60. I'm writing a spaceflight simulator for fun and (possible) profit. From game assets to coding, it seems like AI could help. But every time I try it out, I just end up feeling drained by the process of guiding it to good outcomes. It's like I have an assistant to work for me, who gets to have all the fun, but needs constant hand holding and guidance. It isn't fun at all, and for me, coding and designing a system architecture is tremendously satisfying.

    I also have a large collection of handwritten family letters going back over 100 years. I've scanned many of them, but I want to transcribe them to text. The job is daunting, so I ran them through some GPT apps for handwriting recognition. GPT did an astonishing job and at first blush, I thought the problem was solved. But on deeper inspection I found that while the transcriptions sounded reasonable and accurate, significant portions were hallucinated or missing. Ok, I said, I just have to review each transcription for accuracy. Well, reading two documents side by side while looking for errors is much more draining than just reading the original letter and typing it in. I'm a very fast typist and the process doesn't take long. Plus, I get to read every letter from beginning to end while I'm working. It's fun.

    So after several years of periodically experimenting with the latest LLM tools, I still haven't found a use for them in my personal life and hobbies. I'm not sure what the future world of engineering and art will look like, but I suspect it will be very different.

    My wife spins wool to make yarn, then knits it into clothing. She doesn't worry much about how the clothing is styled because it's the physical process of working intimately with her hands and the raw materials that she finds satisfying. She is staying close to the fundamental process of building clothing. Now that there are machines for manufacturing fibers, fabrics and garments, her skill isn't required, but our society has grown dependent on the machines and the infrastructure needed to keep them operating. We would be helpless and naked if those were lost.

    Likewise, with LLM coding, developers will no longer develop the skills needed to design or "architect" complex information processing systems, just as no one bothers to learn assembly language anymore. But those are things that someone, or something, must still know about. Relegating that essential role to an LLM seems like a risky move for the future of our technological civilization.

    • palata 2 days ago
      I can relate to that.

      Personally, right now I find it difficult to imagine saying "I made this" if I got an AI to generate all the code of a project. If I go to a bookstore, ask for some kind of book ("I want it to be with a hard cover, and talk about X, and be written in language Y, ..."), I don't think that at the end I will feel like I "made the book". I merely chose it, someone else made it (actually it's multiple jobs, between whoever wrote it and whoever actually printed and distributed it).

      Now if I can describe a program to an AI and it results in a functioning program, can I say that I made it?

      Of course it's more efficient to use knitting machines, but if I actually knit a piece of clothing, then I can say I made it. And that's what I like: I like to make things.

      • 6510 a day ago
        I accidentally questioned out loud whether the daughter created the video. I assure you, you've made it! If you bring the proverbial PalataOS into existence with a six-word prompt, we should blame and praise you for it.
    • thwarted 2 days ago
      Editing and proofreading, of code and of prose, are work in themselves, work that often isn't appreciated enough to be recognized as such. I think that's the basis for the perspective that you can have the LLM do the coding/writing and all you need to do is proof the result, as if that were somehow easier, because proofing isn't "the real work".
  • OgsyedIE 3 days ago
    I think this particular anxiety was explored rather well in the anonymous short story 'The End of Creative Scarcity':

    https://www.fictionpress.com/s/3353977/1/The-End-of-Creative...

    Some existential objections occur; how sure are we that there isn't an infinite regress of ever deeper games to explore? Can we claim that every game has an enjoyment-nullifying hack yet to discover with no exceptions? If pampered pet animals don't appear to experience the boredom we anticipate is coming for us, is the expectation completely wrong?

    • nemo1618 2 days ago
      Thank you for sharing this :)
    • bogrollben 2 days ago
      This was great - thank you!
    • 01HNNWZ0MV43FF 3 days ago
      Loved it, thank you for sharing
    • zem 2 days ago
      thanks, that was wonderful
  • xg15 2 days ago
    As far as hobby projects are concerned, I'd agree: a bit more "thinking like your boss" could be helpful. You can now focus more on the things you want your project to be able to do instead of the specific details of its code structure. (In the end, nothing keeps you from still manually writing or editing parts of the code if you want some things done in a certain way. There are also projects where the code structure legitimately is the feature, i.e. if you want to explore some new style of API or architecture design for its own sake.)

    The one part that I believe will still be essential is understanding the code. It's one thing to use Claude as a (self-driving) car, where you delegate the actual driving but still understand the roads being taken. (Both for learning and for validating that the route is in fact correct)

    It's another thing to treat it like a teleporter, where you tell it a destination and then are magically beamed to a location that sort of looks like that destination, with no way to understand how you got there or if this is really the right place.

  • davidanekstein 2 days ago
    I think AI is posing a challenge to people like the person in TFA because programming is their hobby, and one that they're good at. They aren't used to knowing that someone or something can do it better, and knowing it now makes them wonder what the point is. I argue that amateur artists and musicians have dealt with this feeling of "someone can always do it better" for a very long time. You can have fun while knowing someone else can make it better than you, faster, without as much struggle. Programmers aren't as used to this feeling because, even though we know people like John Carmack exist, it doesn't fly in your face quite like a beautiful live performance or a painted masterpiece does. Learning to enjoy your own process is what I think is key to continuing what you love. Or use it as an opportunity to try something else, but you'll eventually discover the same thing no matter what you do. It's very rare to be the best at something.
    • palata 2 days ago
      > can make it better than you, faster, without as much struggle

      Still need to prove that AI-generated code is "better", though.

      "More profitable", in a world where software generally becomes worse (for the consumers) and more profitable (for the companies), sure.

      • doug_durham 2 days ago
        I don't see that as a likely outcome. I think it will make software better for consumers. There can be more bespoke interfaces, instead of making consumers cram into the solution space dictated by today's expensive-to-change software.
        • palata a day ago
          That doesn't make sense: they could already spend more resources to make the software better, but they don't, because not doing so is more profitable.

          If AI makes doing the same thing cheaper, why would they suddenly say "actually instead of increasing our profit, we will invest it into better software"?

    • dbalatero 2 days ago
      I'm both relatively experienced as a musician and software engineer so I kinda see both sides. If musicians want to get better, they have to go to the practice room and work. There's a satisfaction to doing this work and coming out the other side with that hard-won growth.

      Prior to AI, this was also true with software engineering. Now, at least for the time being, programmers can increase productivity and output, which seems good on the surface. However, with AI, one trades the hard work and brain cells created by actively practicing and struggling with craft for this productivity gain. In the long run, is this worth it?

      To me, this is the bummer.

  • mjburgess 3 days ago
    All articles of this class, whether positive or negative, begin "I was working on a hobby project" or some variation thereof.

    The purpose of hobbies is to be a hobby, archetypical tech projects are about self-mastery. You cannot improve your mastery with a "tool" that robs you of most of the minor and major creative and technical decisions of the task. Building IKEA furniture will not make you a better carpenter.

    Why be a better carpenter? Because software engineering is not about hobby projects. It's about research and development at the fringes of a business's (or org's, or project's...) requirements, to evolve their software towards solving them.

    Carpentry ("programming craft") will always (modulo 100+ years) be essential here. Power tools do not reduce the essential craft; they shorten the "time to craft being required" -- they mean we run into walls of required expertise faster.

    AI applied to non-hobby projects -- R&D programming in the large, where requirements aren't already specified as prior-art programs (of the functional and non-functional variety, etc.) -- just accelerates the time to hitting the wall where you're going to shoot yourself in the foot if you're not an expert.

    I have not seen a single "sky is falling" take from an experienced software engineer, i.e. from those operating at typical "in the large" programming scales, on typical R&D projects (revisions to legacy systems, or greenfield work where just the requirements are new).

    • mnky9800n 3 days ago
      I think it also misses the way you can automate non-trivial tasks. For example, I am working on a project where there are tens of thousands of different data sets, each with its own metadata and structure, but the underlying data is mostly the same. Because the metadata and structures are all different, it's really impossible to combine all this data into one big data set without a team of engineers going through each data set and meticulously restructuring and conforming its metadata to a new monolithic schema. I don't have the money to hire that team of engineers, but I can massage LLMs into doing that work for me. These are ideal tasks for AI-type algorithms to solve. It makes me quite excited for the future, as many tasks of this kind could be given to AI agents that would otherwise be impossible to do yourself.
      • MattJ100 3 days ago
        I agree, but only for situations where the probabilistic nature is acceptable. It would be the same if you had a large team of humans doing the same work. Inevitably misclassifications would occur on an ongoing basis.

        Compare this to the situation where you have a team develop schemas for your datasets which can be tested and verified, and fixed in the event of errors. You can't really "fix" an LLM or human agent in that way.

        So I feel like traditional computing excelled at many tasks that humans couldn't do - computers are crazy fast and, as a rule, don't make mistakes. LLMs remove this speed and accuracy, becoming something more like scalable humans (their "intelligence" is debatable, and possibly a moving target - I've yet to see an LLM that I would trust more than a very junior developer). LLMs (and ML generally) will always have higher error margins; it's how they can do what they do.

        • mnky9800n 3 days ago
          Yes, but I see it as multiple steps. Perhaps the LLM solution has some probabilistic issues and only gets you 80% of the way there, but that has probably already given you some ideas on how to better solve the problem. In my case the problem is somewhat intractable because of the size and complexity of the way the data is stored. So the first step is LLMs, and the second step is to use what they produce as the structure for building a deterministic pipeline. The problem isn't that there are ten thousand different sets of metadata, but that the structure of those metadata is diffuse. The LLM pass will first help identify the main points of what needs to be conformed to the monolithic schema; then I will build more production-ready, deterministic pipelines. At least that is the plan. I'll write a Substack post about it eventually if the plan works, haha.
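
          A minimal sketch of that first step (the target fields, the prompt, and the call_llm stub are all made up for illustration, not the actual project code):

              import json

              TARGET_FIELDS = ["name", "units", "time_range", "source"]  # hypothetical monolithic schema

              PROMPT = (
                  "Map this dataset's metadata onto the fields {fields}. "
                  "Reply with JSON containing exactly those keys; use null for anything absent.\n\n{meta}"
              )

              def call_llm(prompt: str) -> str:
                  """Stub: plug in whichever chat-completion API you use."""
                  raise NotImplementedError

              def conform(raw_meta: dict) -> dict:
                  reply = call_llm(PROMPT.format(fields=TARGET_FIELDS, meta=json.dumps(raw_meta)))
                  candidate = json.loads(reply)
                  # Deterministic guardrail: reject hallucinated or missing keys,
                  # so bad mappings get flagged for review instead of polluting the result.
                  if set(candidate) != set(TARGET_FIELDS):
                      raise ValueError(f"unexpected keys: {sorted(candidate)}")
                  return candidate
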
      • xg15 2 days ago
        I'm reminded of the game Factorio: Essentially the entire game loop is "Do a thing manually, then automate it, then do the higher-level thing the automation enables you to do manually, then automate that, etc etc"

        So if you want to translate that, there is value in doing a processing step manually to learn how it works - but when you understood that, automation can actually benefit you, because only then are you even able to do larger, higher-level processing steps "manually", that would take an infeasible amount of time and energy otherwise.

        Where I'd agree though is that you should never lose the basic understanding and transparency of the lower-level steps if you can avoid that in any way.

    • skerit 3 days ago
      I've used Claude-Code & Roo-Code plenty of times with my hobby projects.

      I understand what the article means, but sometimes I've got the broad scopes of a feature in my head, and I just want it to work. Sometimes programming isn't like "solving a puzzle", sometimes it's just a huge grind. And if I can let an LLM do it 10 times faster, I'm quite happy with that.

      I've always had to fix up the code one way or another though. And most of the times, the code is quite bad (even from Claude Sonnet 3.7 or Gemini Pro 2.5), but it _did_ point me in the right direction.

      About the cost: I'm only using Gemini Pro 2.5 Experimental the past few weeks. I get to retry things so many times for free, it's great. But if I had to actually pay for all the millions upon millions of used tokens, it would have cost me *a lot* of money, and I don't want to pay that. (Though I think token usage can be improved a lot, tools like Roo-Code seem very wasteful on that front)

    • fhd2 3 days ago
      > I have not seen a single "sky is falling" take from an experienced software engineer

      Let me save everybody some time:

      1. They're not saying it because they don't want to think of themselves as obsolete.

      2. You're not using AI right, programmers who do will take your job.

      3. What model/version/prompt did you use? Works For Me.

      But seriously: It does not matter _that_ much what experienced engineers think. If the end result looks good enough for laymen and there's no short term negative outcomes, the most idiotic things can build up steam for a long time. There is usually an inevitable correction, but it can take decades. I personally accept that, the world is a bit mad sometimes, but we deal with it.

      My personal opinion is pretty chill: I don't know if what I can do will still be needed n years from now. It might be that I need to change my approach, learn something new, or whatever. But I don't spend all that much time worrying about what was, or what will be. I have problems to solve right now, and I solve them with the best options available to me right now.

      People spending their days solving problems probably generally don't have much time to create science fiction.

      • mjburgess 3 days ago
        > You're not using AI right

        I use AI heavily, it's my field.

        • fhd2 3 days ago
          The part before "But seriously" was sarcasm. I find it very odd to assume that a professional developer (even if it's not what they would describe as their field) is using it wrong. But it's a pretty standard reply to measured comments about LLMs.
          • roenxi 2 days ago
            > I find it very odd to assume that a professional developer (even if it's not what they would describe as their field) is using it wrong.

            They're encountering a type of tool they haven't met before and haven't been trained to use. The default assumption is they are probably using it wrong. There isn't any reason to assume they're using it right - doing things wrong is the default state of humans.

  • exfalso 3 days ago
    I'm more and more confident I must be doing something wrong. I (re)tried Claude about a month ago and simply stopped using it after about two weeks, because on one hand productivity did not increase (it perhaps even decreased), and on the other hand it made me angry because of the time wasted on its mistakes. I was also mostly using it on Rust code, so I'm even more surprised by the article. What am I doing wrong? I've mostly been using the chat functionality and auto-complete; is there some kind of secret feature I'm missing?
    • creata 2 days ago
      I'd love to watch a video of someone using these tools well, because I am not getting much out of it. They save some time, sometimes, but they're nowhere near the 5x boost that some people claim.
      • qingcharles 2 days ago
        I don't know what everyone is doing. Mine is like a 10X-100X force multiplier. I enjoy coding enormously more now that all the drudgery is removed.

        And I might not be the best coder, by far, but I've got over 40 years experience at this crap in practically every language going.

      • fragmede a day ago
        https://youtu.be/5k2-NOh2tk0

        We can quibble about the exact number; 1.2x vs 5x vs 10x, but there's clearly something there.

  • whiplash451 2 days ago
    The thing is: the industry does not need people who are good at (or enjoy) programming, it needs people who are good at (and enjoy) generating value for customers through code.

    So the OP was in a bad place without Claude anyways (in industry at least).

    This realization is the true bitter one for many engineers.

    • blackbear_ 2 days ago
      Productivity at work is well correlated with enjoyment of work, so the industry had better look for people who enjoy programming.

      The realization that productive workers aren't just replaceable cogs in the machine is also a bitter lesson for businessmen.

      • xg15 2 days ago
        I think the lifelong dream of many businesspeople is to create the perfect "cog in the machine" or ideally run a business without workers at all. (Tony Stark, Elon Musk's role model, is a good example of that. As far as the movies are concerned, he builds all his most important inventions himself, or with the help of AI, no workers involved)

        Independent of what AI can do today, I suspect this was a reason why so many resources were poured into its development in the first place. Because this was the ultimate vision behind it.

        • eru 2 days ago
          You say it like it's a bad thing.
          • xg15 13 hours ago
            I do believe it's a bad thing, for a number of general reasons. But as far as the US specifically is concerned, I think a society can pick one out of the following two:

            (1) Define people's worth through labour.

            (2) See labour as a cost center that should be eliminated wherever possible.

            US politicians and technologists are trying to have it both ways: Oppose a social safety net out of principle as to "not encourage leechers", forcing people to work, but at the same time seek to reduce the opportunities for work as much as possible. AI is the latest and potentially most far-reaching implementation of that.

            This is asking for trouble.

          • npodbielski 2 days ago
            Of course. Humans are social beings; if technology 'allows' you to be antisocial, are you still being human?
      • > so the industry had better look for people who enjoy programming

        Why? Both AI and outsourcing provide much cheaper ways to get programming done. Why would you pay someone 100k because he likes doing what an AI or an Indian dev team can do for much cheaper?

    • xg15 2 days ago
      > generating value for customers through code.

      Generating value for the shareholders and/or investors, not the customers. I suspect this is the next bitter lesson for developers.

      • whiplash451 2 days ago
        Investors don’t make money if the customers don’t.
      • keybored 2 days ago
        Yes, there you go. The users are just a propaganda proxy.

        The bitter lesson is that making profit is the only directive.

        • disgruntledphd2 2 days ago
          I find it odd that this was ever forgotten.
          • xen2xen1 2 days ago
            People like to see everything as self expression. In reality, a job is a job, and you're there to make money for someone else.
    • Writing software will never again be a skill worth 100k a year.

      I am sure software developers are here to stay, but nobody who just writes software is worth anywhere close to 100k a year. Either AI or outsourcing is making sure of that.

    • jannesan 2 days ago
      That’s a good point. I do think there is still some space to focus purely on the coding as an engineer, but with AI that space is getting smaller.
  • xg15 2 days ago
    A question that came up in discussions recently and that I found interesting: How will new APIs, libraries or tooling be introduced in the future?

    The models all have their specific innate knowledge of the programming ecosystem from the point in time where their last training data was collected. However, unlike humans, they cannot update that knowledge unless a new finetuning is performed - and even then, they can only learn about new libraries that are already in widespread use.

    So if everyone now shifts to Vibe Coding, will this now mean that software ecosystems effectively become frozen? New libraries cannot gain popularity because AIs won't use them in code and AIs won't start to use them because they aren't popular.

    • benoau 2 days ago
      I guess the counter-question is does it matter if nobody is building tools optimized for humans, when humans aren't being paid to write software?

      I saw a submission earlier today that really illustrated perfectly why AI is eating people who write code:

      > You could spend a day debating your architecture: slices, layers, shapes, vegetables, or smalltalk. You could spend several days eliminating the biggest risks by building proofs-of-concept to eliminate unknowns. You could spend a week figuring out how you’ll store, search, and cache data and which third-party integrations you’ll need.

      $5k/person/week to have an informed opinion about how to store your data! AI is going to look at the billion times we've already asked these questions and make an instant decision, and the really, really important part is that it doesn't really matter what we choose anyway, because there are dozens of right answers.

    • mckn1ght 2 days ago
      There will still be people who care to go deeper and learn what an API is and how to design a good one. They will be able to build the services and clients faster and go deeper using AI code assistants.

      And then, yes, you’ll have the legions of vibe coders living in Plato’s cave and churning out tinker toys.

      • fragmede a day ago
        That’s it then, isn’t it? We’re at the level where we’re making tinker toys. What is the tinker toy industry like? Instead of an expensive Google-style startup office, do I at least get a workshop in the back of the garden? How much does it pay?
    • mike_hearn 2 days ago
      It's not an issue. Claude routinely uses internal APIs and frameworks on one of my projects that aren't public. The context windows are big enough now that it can learn from a mix of summarized docs and surrounding examples and get it nearly right, nearly all the time.

      There is an interesting aspect to this whereby there's maybe more incentive to open source stuff now just to get usage examples in the training set. But if context windows keep expanding it may also just not matter.

      The trick is to have good docs. If you don't then step one is to work with the model to write some. It can then write its own summaries based on what it found 'surprising' and those can be loaded into the context when needed.
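
      As a sketch of what "loaded into the context" can mean in practice (the paths and the helper are made up, not any particular tool's API):

          from pathlib import Path

          def build_prompt(task: str) -> str:
              # Prepend the model-written doc summaries to the actual task.
              summaries = "\n\n".join(
                  p.read_text() for p in sorted(Path("docs/summaries").glob("*.md"))
              )
              return f"Internal API notes:\n{summaries}\n\nTask:\n{task}"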

    • c7b 2 days ago
      Not sure this is going to be a big issue in practice. Tools like ChatGPT regularly get new knowledge cutoffs, and those seem to work well in my experience. I haven't tested it with programming features specifically, but you could simply do a small experiment: take the tool of your choice and a programming feature that was introduced after the tool first launched, and see whether you can get it to use the feature correctly.
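
      One possible probe (assuming the tool's training cutoff predates Python 3.12): ask it to rewrite a chunking helper using itertools.batched, which was only added in 3.12, and see whether it knows the function exists.

          from itertools import batched  # new in Python 3.12

          def chunks(seq, n):
              # A model with an older cutoff will typically reinvent this loop
              # rather than reach for batched().
              return [tuple(b) for b in batched(seq, n)]

          print(chunks(range(7), 3))  # [(0, 1, 2), (3, 4, 5), (6,)]
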
    • fragmede a day ago
      > unless a new finetuning is performed

      That's where we're at. The LLM needs to be told about the brand new API by feeding it new docs, which just uses up tokens in its context window.

  • zkmon 2 days ago
    It's not true that coding would no longer be fun because of AI. Arithmetic did not stop being fun because of calculators. Travel did not stop being fun because of cars and planes. Life did not stop being fun because of lack of old challenges.

    New challenges would come up. If calculators made arithmetic easy, math challenges move to the next higher level. If AI does all the thinking and creativity, humans will move to the next level. That level could be some menial work which AI can't touch. For example, navigating the complexities of legacy systems, workflows, and the human interactions needed to keep things working.

    • fire_lake 2 days ago
      > For example, navigating the complexities of legacy systems, workflows, and the human interactions needed to keep things working.

      Well this sounds delightful! Glad to be free of the thinking and creativity!

      • mckn1ght 2 days ago
        When you’re churning out many times more code per unit time, you had better think good and hard about how to organize it.

        Everyone wanted to be an architect. Well, here’s our chance!

    • wizzwizz4 2 days ago
      I find legacy systems fun because you're looking at an artefact built over the years by people. I can get a lot of insight into how a system's design and requirements changed over time, by studying legacy code. All of that will be lost, drowned in machine-generated slop, if next decade's legacy code comes out the backside of a language model.
      • ThrowawayR2 a day ago
        > "All of that will be lost, drowned in machine-generated slop, if next decade's legacy code comes out the backside of a language model."

        The fun part, though, is that future coding LLMs will eventually be poisoned by ingesting past LLM-generated slop code if unrestricted. The most valuable code bases for improving LLM quality in the future will be the ones written by humans with high-quality coding skills who are not reliant, or only minimally reliant, on LLMs, making the humans who write them more valuable.

        Think about it: a new, even better programming language is created, like Sapphire on Skates or whatever. How does an LLM know how to output high-quality, idiomatically correct code for that hot new language? The answer is that _it doesn't_. Not until 1) somebody writes good code in that language for the LLM to absorb, and 2) in a large enough quantity for patterns to emerge that the LLM can reliably identify as idiomatic.

        It'll be pretty much like the end of Asimov's "The Feeling of Power" (https://en.wikipedia.org/wiki/The_Feeling_of_Power) or his almost exactly LLM-relevant novella "Profession" (https://en.wikipedia.org/wiki/Profession_(novella)).

      • eMPee584 2 days ago
        Thanks to git repositories stored away in arctic tunnels, our common legacy code heritage might outlast most other human artifacts... (unless ASI chooses to erase it, of course)
      • mckn1ght 2 days ago
        That’s fine if you find that fun, but legacy archeology is a means to an end, not an end itself.
        • wizzwizz4 2 days ago
          Legacy archaeology in a 60 MiB codebase is far easier than digging through email archives, requirements docs, and old PowerPoint files that Microsoft Office won't even open properly any more (though LibreOffice can, if you're lucky). Handwritten code actually expresses something about the requirements and design decisions, whereas AI slop buries that signal in so much noise that "archaeology" becomes almost impossible.

          When insight from a long-departed dev is needed right now to explain why these rules work in this precise order, but fail when the order is changed, do you have time to git bisect to get an approximate date, then start trawling through chat logs in the hopes you'll happen to find an explanation?

          • mckn1ght 2 days ago
            Code is code. Yes, it can be more or less spaghetti, but if it compiles at all, it can be refactored.

            Having to dig through all that other crap is unfortunate. Ideally you have tests that encapsulate the specs, which are then also code. And help with said refactors.

            • wizzwizz4 2 days ago
              We had enough tests to know that no other rule configuration worked. Heck, we had mathematical proof (and a small pile of other documentation too obsolete or cryptic to be of use), and still, the only thing that saved the project was noticing different stylistic conventions in different parts of the source, allowing the minor monolith to be broken down into "this is the core logic" and "these are the parts of a separate feature that had to be weaved into the core logic to avoid a circular dependency somewhere else", and finally letting us see enough of the design to make some sense out of the cryptic documentation. (Turns out the XML held metadata auxiliary to the core logic, but vital to the higher-level interactive system, the proprietary binary encoding was largely a compression scheme to avoid slowing down the core logic, and the system was actually 8-bit-clean from the start – but used its own character encoding instead of UTF-8, because it used to talk to systems that weren't.)

              Test-driven development doesn't actually work. No paradigm does. Fundamentally, it all boils down to communication: and generative AI systems essentially strip away all the "non-verbal" communication channels, replacing them with the subtext equivalent of line noise. I have yet to work with anyone good enough at communicating that I can do without the side-channels.

              • Ekaros a day ago
                Makes me think the actual, horrific solution here is that every single prompt and output ever produced during development must be logged and stored, as that might be the only documentation that exists for what was made.

                Actually, thinking about it properly: if I were running a company allowing or promoting AI use, that would be the first priority. Whatever is prompted must be stored forever.

              • mckn1ght 2 days ago
                > generative AI systems essentially strip away all the "non-verbal" communication channels

                This is a human problem, not a technological one.

                You can still have all your aforementioned broken PowerPoints etc. and use AI to help write code you would've previously written simply by hand.

                If your processes are broken enough to create unmaintainable software, they will do so regardless of how code pops into existence. AI just speeds it up either way.

                • wizzwizz4 2 days ago
                  The software wasn't unmaintainable. The PowerPoints etc were artefacts of a time when everyone involved understood some implicit context, within which the documentation was clear (not cryptic) and current (not obsolete). The only traces of that context we had, outside the documentation, were minor decisions made while writing the program: "what mindset makes this choice more likely?", "in what directions was this originally designed to extend?", etc.

                  Personally, I'm in the "you shouldn't leave vital context implicit" camp; but in this case, the software was originally written by "if I don't already have a doctorate, I need only request one" domain experts, and you would need an entire book to provide that context. We actually had a half-finished attempt – 12 names on the title page, a little over 200 pages long – and it helped, but chapter 3 was an introduction-for-people-who-already-know-the-topic (somehow more obscure than the context-free PowerPoints, though at least it helped us decode those), chapter 4 just had "TODO" on every chapter heading, and chapter 5 got almost to the bits we needed before trailing off with "TODO: this is hard to explain because" notes. (We're pretty sure they discussed this in more detail over email, but we didn't find it. Frankly, it's lucky we have the half-finished book at all.)

                  AI slop lacks this context. If the software had been written using genAI, there wouldn't have been the stylistic consistency to tell us we were on the right track. There wouldn't have been the conspicuous gap in naming, elevating "the current system didn't need that helper function, so they never wrote it" to a favoured hypothesis, allowing us to identify the only possible meaning of one of the words in chapter 3, and thereby learn why one of those rules we were investigating was chosen. (The helper function would've been meaningless at the time, although it does mean something in the context of a newer abstraction.) We wouldn't have been able to use a piece of debugging code from chapter 6 (modified to take advantage of the newer debug interface) to walk through the various data structures, guessing at which parts meant what using the abductive heuristic "we know it's designed deliberately, so any bits that appear redundant probably encode a meaning we don't yet understand".

                  I am very glad this system was written by humans. Sure, maybe the software would've been written faster (though I doubt it), but we wouldn't have been able to understand it after-the-fact. So we'd have had to throw it away, rediscover the basic principles, and then rewrite more-or-less the same software again – probably with errors. I would bet a large portion of my savings that that monstrosity is correct – that if it doesn't crash, it will produce the correct output – and I wouldn't be willing to bet that on anything we threw together as a replacement. (Yes, I want to rewrite the thing, but that's not a reasoned decision based on the software: it's a character trait.)

                  • mckn1ght a day ago
                    I guess I just categorically disagree that a codebase is impossible to understand without "sufficient" additional context. And I think you ascribe too much order to software written by humans, who come in quite varied groups w.r.t. ability, experience, style, and care.
                    • wizzwizz4 a day ago
                      It was easy to understand what the code was instructing the computer to do. It was harder to understand what that meant, why it was happening, and how to change it.

                      A program to calculate payroll might be easy to understand, but unless you understand enough about finance and tax law, you can't successfully modify it. Same with an audio processing pipeline: you know it's doing something with Fourier transforms, because that's what the variable names say, but try to tweak those numbers and you'll probably destroy the sound quality. Or a pseudo-random number generator: modify that without understanding how it works, and even if your change feels better, you might completely break it. (See https://roadrunnerwmc.github.io/blog/2020/05/08/nsmb-rng.htm..., or https://redirect.invidious.io/watch?v=NUPpvoFdiUQ if you want a few more clips.)

                      I've worked with codebases written by people with varying skillsets, and the only occasions where I've been confused by the subtext have been when the code was plagiarised.

    • keybored 2 days ago
      > New challenges would come up. If calculators made arithmetic easy, math challenges move to the next higher level. If AI does all the thinking and creativity, humans will move to the next level. That level could be some menial work which AI can't touch. For example, navigating the complexities of legacy systems, workflows, and the human interactions needed to keep things working.

      You’re gonna work on captcha puzzles and you’re gonna like it.

  • palata 2 days ago
    I tend to think about the average code review: who actually catches tricky bugs? Who actually takes the time to fully understand the code they review? And who likes doing it? My feeling is that review is generally a skim through the code, checking that it looks OK from a distance.

    At least we have one person who understands it in details: the one who wrote it.

    But with AI-generated code, it feels like nobody writes it anymore: everybody reviews. Not only do we not like reviewing, we don't do it well. And if you want to review code thoroughly, you may as well write it; many open source maintainers will tell you that it's often faster for them to write the code themselves than to review a PR from a stranger they don't trust.

  • whiplash451 2 days ago
    The author is doing the math the wrong way. For an extra $5/day, you can now pay an engineer in a 3rd-world country $20/day to do the job of a junior engineer in a 1st-world one.

    The bitter lesson is going to be for junior engineers, who will see fewer job offers and watch consulting powerhouses eat their lunch.
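
    The rough numbers (illustrative only, assuming ~260 working days a year):

        ai_credits = 5                    # $/day, the article's figure
        bay_area_junior = 100_000 / 260   # ~$385/day at a six-figure salary
        offshore_engineer = 20            # $/day, as above

        print(round(bay_area_junior + ai_credits))   # ~390 $/day
        print(offshore_engineer + ai_credits)        # 25 $/day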

    • inerte 2 days ago
      Yes, my thoughts at the end of the article. If AI coding is really good (or becomes really, really good), you could give a six-figure salary plus $5/day in OpenAI credits to a Bay Area developer, OR give a $5/day salary plus $5/day in OpenAI credits to someone in another country.

      That's what happened to manufacturing after all.

      • fragmede a day ago
        Thing is, manufacturing physical goods means you have to physically move them around. Digital goods don't have that problem. Time zones are what's proving to be challenging, though.
        • whiplash451 21 hours ago
          100%. You can offshore "please write code doing X for me" but it's much harder to offshore "please generate value for my customers with this codebase" which is a lot closer to what software engineers actually do.

          Therefore, I do not anticipate a massive offshoring of software like what happened in manufacturing. Yet, a lot of software work can be fully specified and will be outsourced.

  • IshKebab 3 days ago
    > Not only that, the generated code was high-quality, efficient, and conformed to my coding guidelines. It routinely "checked its work" by running unit tests to eliminate hallucinations and bugs.

    This seems completely out of whack with my experience of AI coding. I'm definitely in the "it's extremely useful" camp but there's no way I would describe its code as high quality and efficient. It can do simple tasks but it often gets things just completely wrong, or takes a noob-level approach (e.g. O(N) instead of O(1)).
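
    To be clear about the kind of thing I mean, here is a toy illustration (not code the model actually wrote): membership tests against a list instead of a set.

        needles = list(range(100_000))
        needle_set = set(needles)

        def contains_linear(x):
            return x in needles      # O(N): scans the whole list on every call

        def contains_constant(x):
            return x in needle_set   # O(1) on average: a single hash lookup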

    Is there some trick to this that I don't know? Because personally I would love it if AI could do some of the grunt work for me. I do enjoy programming but not all programming.

    • joelthelion 3 days ago
      Which model and tool are you using? There's a whole spectrum of AI-assisted coding.
      • IshKebab 3 days ago
        ChatGPT, Claude (both through the website), and Github Copilot (paid if it makes any difference).
        • qingcharles 2 days ago
          I use the same with a sprinkling of Gemini 2.5 and Grok3.

          I find they all make errors, but 95% of them I spot immediately by eye and either correct manually or reroll through prompting.

          The error rate has gone down in the last 6 months, though, and the efficiency of the C# code I mostly generate has gone up by an order of magnitude. I would rarely produce code that is more efficient than what AI produces now. (I have a prompt though that tells it to use all the latest platform advances and to search the web first for the latest updates that will increase the efficiency and size of the code)

        • joelthelion 3 days ago
          Try Aider with Gemini 2.5.
  • AndrewKemendo 2 days ago
    I recently had a conversation with a fellow tech founder (currently running a $Bn+ valuation Series D robotics company) about AI-assisted coding tools.

    We have both been using or integrating AI code support tools since they became available and both writing code (usually Python) for 20+ years.

    We both agree that Windsurf + Claude is our default IDE/environment from now on. We also agree that for all future projects we can likely cut the number of engineers needed to a third.

    Based on what I've been using professionally for the last year (Copilot) and on the side, I'm confident I could build faster, better, and with less effort with 5 engineers and AI tools than with 10 or 15. Communication overhead also drops by 3x, which prevents slowdowns.

    So for an HA 5-layer stack application (fe, be, analytics, train/inference, networking/data mgmt) with IPCs between the layers, instead of one senior and two juniors per process, 15 people in total, I now only need the 5 mid-seniors.

  • frognumber 3 days ago
    I may be old, but I had the same feeling for low-level code. I enjoyed doing things like optimizing a low-level loop in C or assembly, bootstrapping a microcontroller, or writing code for a processor which didn't have a compiler yet. Even in BASIC, I enjoyed PEEKing and POKE'ing. I enjoyed opening up a file system in a binary editor. I enjoyed optimizing how my computer draws a line.

    All this went away. I felt a loss of joy and nostalgia for it. It was bitter.

    Not bad, but bitter.

  • M4v3R 3 days ago
    To me it’s the exact opposite. I’ve been writing code for the past 20+ years, and I recently realized it’s not the act of writing code I love, but the act of creating something from nothing. Over the past few months I wrote two non-trivial utility apps that I otherwise most probably would not have written because I didn’t have enough time, but Cursor + Claude gave me the 5x productivity boost that enabled me to do so, and I really enjoyed it.

    My only gripe is that the models are still pretty slow, and that discourages iteration and experimentation. I can’t wait for the day a Claude 3.5 grade model with 1000 tok/s speed releases, this will be a total game changer for me. Gemini 2.5 recently came closer, but it’s still not there.

    • float4 3 days ago
      For me it's a bit of both. I'm working on exciting energy software with people who have deep knowledge of the sector but only semi-decent software knowledge. Nearly every day I'm reviewing some shitty PR comprised of awful, ugly code that somehow mostly works.

      The product itself is exciting and solves a very real problem, and we have many customers who want to use it and pay for it. But damn, it hurts my soul knowing what goes on under the hood.

    • nu11ptr 3 days ago
      I've kinda hit the same place. I thought I loved writing code, but I so often start projects and don't finish them once the excitement of writing all the code wears off. I'm realizing it's the designing and architecting I love, and seeing that get built, not writing every line of code. I'm also enjoying AI, as my velocity has solidly improved.

      Another area where I find it very helpful is when I need to use the same technique in my code as someone writing in another language. No longer do I need to spend hours figuring out how they did it: I just ask an AI to explain it to me, and then often simply translate the code.

    • hsuduebc2 3 days ago
      Same here. I do not usually enjoy programming as a craft, but the act of building something is the lovable experience.
      • skerit 3 days ago
        The challenge I often face is having an entire _mental model_ of what I want to build already crystallized in my head, but then the realization that it will take hours of coding to actually convert that to code... That can be incredibly demotivating.
        • qingcharles 2 days ago
          Exactly. It's even hard to get started sometimes.

          AI coding has removed the drudgery for me. It made coding 10X more enjoyable.

  • cardanome 3 days ago
    A relatively well-known YouTuber called ThePrimeagen recently did a challenge, sponsored by Cursor themselves, where he and some friends would "vibe code" a game in a week. The results were pretty underwhelming. They would have been much faster not using generative AI.

    Compared to what you see from game jams, where solo devs sometimes create whole games in just a few days, it was pretty trash.

    It also tracks with my own experience. Yes, Cursor quickly helps me get the first 80% done, but then I spend so much time cleaning up after it that I've barely saved any time in total.

    For personal projects where you don't care about code quality, I can see it as a great tool. If you actually have professional standards, no. (Except maybe for unit tests; I hate writing those by hand.)

    Most of the current limitations CAN be solved by throwing even more compute at it, absolutely. The question is whether that will make economic sense. Maybe if fusion becomes viable some day, but right now, with the end of fossil fuels and climate change? Is generative AI worth destroying our planet for?

    At some point the energy consumption of generative AI might get so high and expensive that you might be better off just letting humans do the work.

    • sigmoid10 3 days ago
      I feel most people drastically underestimate game dev. The programming aspect is only one tiny part of it and even there it goes so wide (from in-game logic to rendering to physics) that it's near impossible for people who are not really deep into it to have a clue what is happening. And even if you manage to vibe-code your way through it, your game will still suck unless you have good assets - which means textures, models, animations, sounds, FX... you get it. Developing a high quality game is sort of the ultimate test for AI and if it achieves it on a scale beyond game jams we might as well accept that we have reached artificial superintelligence.
    • dinfinity3 days ago
      To be fair, the whole "vibe coding" thing is really, really new. It will undoubtedly take some time to figure out how to do it effectively.

      Recently, we've seen a shift away from diving straight into implementation and toward spending time on careful specification, discussion, and documentation, either with or without an AI assistant, before setting it loose to implement stuff.

      For large, existing codebases, I sincerely believe that the biggest improvements lie in using MCP and proper instructions to connect the AI assistants to spec and documentation. For new projects I would put pretty much all of that directly into the repos.

    • nyarlathotep_a day ago
      > A relatively well-known YouTuber called ThePrimeagen recently did a challenge, sponsored by Cursor themselves, where he and some friends would "vibe code" a game in a week. The results were pretty underwhelming. They would have been much faster not using generative AI.

      I ended up watching maybe 10 minutes of these streams on two separate occasions, and he was writing code manually 90% of the time on both occasions, or yelling at LLM output.

    • BlackLotus893 days ago
      [flagged]
      • cardanome3 days ago
        I used it as an example because the event was sponsored by Cursor so I figured they had an interest in making the product look good. And they really failed at this.

        Then again, ThePrimeagen is pretty critical of vibe coding, so it was a super weird matchup anyway. I guess they decided to just have some fun, and to advertise the vibe coding "lifestyle" more than the technical merits of the product.

        Oh, it isn't the usual content for ThePrimeagen. He mostly reacts to other technical videos and articles and rants about his love for neovim and ziglang. He has OK takes most of the time and is actually critical of the overuse of generative AI. But yeah, he is not a technical deep-dive YouTuber; he's more for entertainment.

  • jstummbillig3 days ago
    I don't really see it. At the least, the article should address why we would not expect massive price drops, market-adjusted pricing, and free offerings, as with every other innovation before, all of which led to wider access to better technology.

    Why would this be the exception?

    • ignoramous3 days ago
      If that happens, I can see those programmers becoming their age's Uber drivers (a low-pay, low-skill, unsatisfying gig workforce).
  • weinzierl2 days ago
    "But I predict software development will be a lot less fun in the years to come, and that is a very bitter prediction in deed."

    Most professional software development hasn't been fun for years, mostly because of all the required ceremony around it. But it doesn't matter, for your hobby projects you can do what you want and it's up to you how much you let AI change that.

  • broken-kebab17 hours ago
    It's the normal flow of things in the industry, isn't it? It used to be an important skill for a programmer to optimize constantly; tasks like "We need to cut half a kilobyte at least!" were challenging, satisfying puzzles. And today you open a news webpage, it takes 1.5 GiB, and who cares? Typing speed used to be an important skill too, and nowadays one can be a decent software developer typing with two fingers. Memorizing names and parameters used to be extremely important until autocomplete and autosuggest appeared. I could expand this list to a hundred points, probably.
  • coolThingsFirst3 days ago
    Still think that amazement at AI tools, as harsh as it sounds, signals incompetence on the part of the user. They are useful, don't get me wrong, but just today Claude wrote code that literally wouldn't run.

    It thought it was OK to use new on an object literal in JS.

  • HarHarVeryFunny2 days ago
    Coding itself can be fun, perhaps especially when one is trying to optimize in some way (faster, less memory usage, more minimal, etc), but at least for me (been S/W eng for 45+ years) I think the real satisfaction is conquering the complexity and challenges of the project, and ultimately the ability to dream about something and conjure it up to become a reality. Maybe coding itself was more fun back in the day of 8-bit micros where everything was a challenge (not enough speed or memory), but nowadays typically that is not the case - it's more about the complexity of what is being built (unless it's some boilerplate CRUD app where there is no fun or challenge at all).

    With today's AI, driven by code examples it was trained on, it seems more likely to be able to do a good job of optimization in many cases than to have gleaned the principles of conquering complexity, writing bug-free code that is easy and flexible to modify, etc. To be able to learn these "journeyman skills" an LLM would need to either have access to a large number of LARGE projects (not just Stack Overflow snippets) and/or the thought processes (typically not written down) of why certain design decisions were made for a given project.

    So, at least for the time being, as a developer wielding AI as a tool, I think we can still have the satisfaction of the higher-level design (which may be unwise to leave to the AI until it is better able to reason and learn), while leaving the drudgework (& a little bit of the fun) of coding to the tool. In any case we can still have the satisfaction of dreaming something up and making it real.

  • JKCalhoun2 days ago
    > In some countries, more than 90% of the population lives on less than $5 per day. If agentic AI code generation becomes the most effective way to write high-quality code, this will create a massive barrier to entry … Don't even get me started on the green house gas emissions of data centers...

    My (naive?) assumption is that all of this will come down: the price (eventually free) and the energy costs.

    Then again, my daughters know I am Pollyanna (someone has to be).

  • gadilif3 days ago
    I can really relate to the feeling described after modifying save files to get more resources in a game, but I wonder if it's the same kind of 'cheating'. Doing better in a game has its own associated feeling of achievement, and cheating definitely robs you of that, which to me explains why playing becomes less fun. Moving faster on a side project or at work doesn't feel like the same kind of shortcut/cheat. Most of us no longer program in assembly language, and we still maintain a sense of achievement using high-level languages, which naturally abstract away a lot of the details. Isn't using AI to hide away implementation details just the natural next step, where instead of lengthy, error-prone machine-level code, you have a few modern language instructions?
    • lloeki3 days ago
      > Moving faster on a side project or at work doesn't feel like the same kind of shortcut/cheat.

      Depends whether you're in it for the endgame or the journey.

      For some the latter is a means to the former, and for others it's the other way around.

      • gadilif3 days ago
        I see your point, and tend to agree. However, at least for the time being, I see AI tools as not inherently different from the refactoring tools that were available over a decade ago. They help me move faster, and I feel like it's one more tool I need to master, so it will be useful in my toolbox.
  • >I just missed writing code.

    Even before AI really took off, that was an experience many developers, including me, had. Outsourcing has taken over much of the industry. If you work in the West, there is a good chance that a large part of your work is managing remote teams, often in India or other low-cost countries.

    What AI could change is either reducing the value of outsourcing or making software development so accessible that managing the outsourcing becomes unnecessary.

    Either way, I do believe that software developers are here to stay. They won't be writing much code in any case. A software developer in the US costs 100k a year, and writing software simply will never again be worth 100k a year. There are people and programs that are much cheaper.

  • skybrian2 days ago
    To put the cost into context, spending $5 a day on tools is ludicrously cheap compared to paying minimum wage, let alone a programmer’s salary. Programming is only free if you already know how to code and don’t value your time.

    Many of us do write code for fun, but that results in a skewed perspective where we don’t realize how inaccessible it is for most people. Programmers are providers of expensive professional services and only businesses that spread the costs over many customers can afford us.

    So if anything, these new tools will make some kinds of bespoke software development more accessible to people who couldn’t afford professional help before.

    Although, most people don’t need to write new code at all. Using either free software or buying off-the-shelf software (such as from an app store) works fine for most people in most situations. Personal, customized software is a niche.

    • aeonik2 days ago
      Software could be much, much cheaper if libraries were easier to use, and data formats and protocols were more open.

      So much code I have written and worked with is either CRUD or compatibility layers for un/under-documented formats.

      It's as if most of the industry were plumbers, but we are mining and fabricating the materials for the pipes, and digging trenches to and from every residence, using completely different pipes and designs for every. single. connection.

      • skybriana day ago
        I think that’s too pessimistic. There are lots of successful standards. We rely on standard API’s and libraries a lot more nowadays than we used to. Some of them are pretty good. There’s been a lot of progress.

        But it takes a while because the wheel has to be reinvented many times before people give up on improving it. When a new language comes along, a lot of stuff gets reimplemented. There’s plenty of churn, but the tools do get better.

        • aeonika day ago
          Hmm, my comment wasn't a prediction, just an observation based on my personal experience.

          I find the opportunity for improvement exciting, and I'm optimistic for the future.

          Like, statistically, most software I've seen written didn't need to be written. There were better ways, or it was already solved; it was a knowledge or experience gap, or often not-invented-here syndrome.

          The main thing that frustrates me these days is that trying to do things better doesn't generally align with the quarterly mentality.

  • oliviergg2 days ago
    For me, it’s the opposite, I had somewhat lost my love for my job as a developer between two JavaScript framework wars or wars between craftsmanship and agile. I think we now have the opportunity to return to addressing actual needs. For me, that has always been the driving force, an idea becomes a product. These agents have rekindled my desire to create things.
  • jwblackwell3 days ago
    The author is essentially arguing that fewer people will be able to build software in the future.

    That's the opposite of what's happened over the past year or two. Now many more non-technical people can (and are) building software.

    • walleeee3 days ago
      > The author is essentially arguing that fewer people will be able to build software in the future.

      Setting aside the fact that the author nowhere says this, it may in fact be plausible.

      > That's the opposite of what's happened over the past year or two. Now many more non-technical people can (and are) building software.

      Meanwhile half[0] the students supposed to be learning to build software in university will fail to learn something important because they asked Claude instead of thinking about it. (Or all the students using llms will fail to learn something half the time, etc.)

      [0]: https://www.anthropic.com/news/anthropic-education-report-ho...

      > That said, nearly half (~47%) of student-AI conversations were Direct—that is, seeking answers or content with minimal engagement.

    • wobfan3 days ago
      No, he never states this, and it isn't true.

      The author describes his experience of the joy of programming things and figuring stuff out. In the end he says that AI made him lose this joy, and he compares it to cheating in a game. He does not say one word about societal impact or the number of engineers in the future; that's your own interpretation.

      • jwblackwell3 days ago
        “ In some countries, more than 90% of the population lives on less than $5 per day. If agentic AI code generation becomes the most effective way to write high-quality code, this will create a massive barrier to entry”
        • wobfan2 days ago
          > The author is essentially arguing that fewer people will be able to build software in the future.

          Your comment talks about the ability to build software, whereas the article (in the single sentence that touches this topic, while the other 99% circles around something else) talks about the job-market situation. If what you wanted to say was "The author is arguing that people will probably have a harder time getting a job in software development", that would have been correct.

          > That's the opposite of what's happened over the past year or two. Now many more non-technical people can (and are) building software.

          You're (based on the new comment) explicitly saying that people without technical knowledge are getting jobs in the software development sector. Where did you get that info? It would be an interesting read for sure, if it's actually true.

  • gwerna day ago
    > Forty-six percent of the global population lives on less than $5 per day. In some countries, more than 90% of the population lives on less than $5 per day. If agentic AI code generation becomes the most effective way to write high-quality code, this will create a massive barrier to entry. Access to technology is already a major class and inequality problem. My bitter prediction is that these expensive frontier models will become as indispensable for software development as they are inaccessible to most of the world’s population.

    Forty-six percent of the global population has never hired a human programmer either, because a good human programmer costs more than $5 a day{{citation needed}}.

    • fragmedea day ago
      How much of the global population has hired another person to do something for them directly? If I go to the store and the cashier handles the transaction, I haven't hired a human. So, more broadly: do most people hire other humans for jobs? That seems like a rich-person thing in the first place.
      • gwerna day ago
        Well then, the cost of hiring an LLM compared to hiring a human is irrelevant if you are going to claim that hiring in general is irrelevant, isn't it? So OP's comparison is idiotic either way: either he is wrong, because using an LLM to do programming is already several orders of magnitude cheaper than using a human, and it is vastly more likely that poor people will be able to afford occasional LLM use; or the comparison is simply irrelevant.
  • freb3n3 days ago
    The financial barrier point is really great.

    I feel the same with a lot of points made here, but hadn't yet thought about the financial one.

    When I started out with web development that was one of the things I really loved. Anyone can just read about html, css and Javascript and get started with any kind of free to use code editor.

    Though you can still do just that, it seems like you would always lag behind the 'cool guys' using AI.

    • M4v3R3 days ago
      You still don't need AI to write software, but investing in it will make you more productive. More money buys better tools; that has always been true for any trade. My friend is a woodworker, and his tools are 5-10x more expensive than what I have in my shack, but they are also more precise, more reliable, and easier to use. AI is the same, and I would even argue it gives you a bigger productivity boost for less money (especially given that local models are getting better literally every week).
      • zwnow3 days ago
        Incredible take, considering that using AI robs new learners of real learning. There is a reason lots of experienced devs are dropping it from their editors. Using AI will not make you a better dev; it simply lets you build a failing product faster, because ultimately you won't understand your own product. Most devs that use AI blindly trust it instead of questioning what it produces.
        • falcor843 days ago
          > Most devs that use AI blindly trust it instead of questioning what it produces.

          Without the punctuation, I first read it tautologically as "Most devs that use AI blindly, trust it instead of questioning what it produces". But even assuming you meant "Most devs that use AI, blindly trust it instead of questioning what it produces", there's still a negative feedback loop. We're still at the early experimentation phase, but if/when AI capabilities eventually settle down, people will adapt, learning when they can and can't trust the AI coder and when to take the reins; that would be the skill people are hired for.

          Alternatively, we could be headed towards an intelligence explosion, with AI growing in capabilities until it surpasses human coders at almost all types of coding work, except perhaps for particular tasks which the AI dev could then delegate to a human.

          • zwnow3 days ago
            A dystopia in which I'll look for a new career. Using AI to generate code sucks the joy out of the job.
            • tasuki3 days ago
              > A dystopia in which ill look for a new career.

              What makes you think that will be necessary?

              • zwnow3 days ago
                Because I don't want to work with AI agents? I like my work to be fun, as in "I could bear working 8 hours a day with this." I like thinking about the problems and solutions and how they translate to code. I like implementing it with my own hands. Substitute that with writing prompts and I'll look for a different career that's actually fun.
                • tasukia day ago
                  What makes you think you having a career will be a thing by that point?
                  • zwnow20 hours ago
                    Because it already is a thing? Some companies already force AI on their devs. And weird as it sounds, I still need to pay my bills, so yeah, I need a career to do so.
    • qingcharles2 days ago
      These platforms all feel like they are being massively subsidized right now. I'm hoping that continues and they just burn investor cash in a race to the bottom.
  • pornel3 days ago
    AI will be cheap to run.

    The hardware for AI is getting cheaper and more efficient, and the models are getting less wasteful too.

    Just a few years ago GPT-3.5 was a secret sauce running on the most expensive GPU racks, and now models beating it are available with open weights and run on high-end consumer hardware. A few iterations down the line, good-enough models will run on average hardware.

    When that XCOM game came out, filmmaking, 3D graphics, and machine learning required super-expensive hardware out of reach of most people. Now you can find objectively better hardware literally in the trash.

    • cardanome3 days ago
      I wouldn't be so optimistic.

      Moore's law is withering away due to physical limitations. Energy prices are going up because of the end of fossil fuels and rising climate-change costs. Furthermore, the global supply chain is under attack from rising geopolitical tension.

      Depending on US tariffs, how the Taiwan situation plays out, and many other risks, it might be that compute will get MORE expensive in the future.

      While there is room for optimization on the generative-AI front, we still have not even reached the point where generative AI is actually good at programming. We have promising toys, but for real productivity we need orders of magnitude bigger models. Just look at how ChatGPT 4.5 is barely economically viable already with its price per token.

      Sure if humanity survives long enough to widely employ fusion energy, it might become practical and cheap again but that will be a long and rocky road.

      • pornel3 days ago
        LLMs on GPUs have a lot of computational inefficiencies and untapped parallelism. GPUs have been designed for more diverse workloads with much smaller working sets. LLM inference is ridiculously DRAM-bound. We currently have 10×-200× too much compute available compared to the DRAM bandwidth required. Even without improvements in transistors we can get more efficient hardware for LLMs.

        The way we use LLMs is also primitive and inefficient. RAG is a hack, and in most LLM architectures the RAM cost grows quadratically with the context length, in a workload that is already DRAM-bound, on hardware that already doesn't have enough RAM.
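
        To make "DRAM-bound" concrete, here's a back-of-envelope sketch (the figures are illustrative, not vendor specs): at batch size 1, each generated token streams all of the weights through memory once, so bandwidth, not compute, caps the token rate.

            // Rough decode-speed ceiling for a memory-bound LLM at batch size 1.
            // Assumed: 70B params, 8-bit weights, ~1 TB/s memory bandwidth.
            const weightBytes = 70e9;             // 70 GB of weights at 1 byte/param
            const bandwidthBytesPerSec = 1e12;    // ~1 TB/s
            // Each token reads every weight once:
            const tokensPerSec = bandwidthBytesPerSec / weightBytes;
            console.log(tokensPerSec.toFixed(1)); // ~14.3 tok/s, ALUs mostly idle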

        > Depending on US tariffs […] end of fossil fuels […] global supply chain

        It does look pretty bleak for the US.

        OTOH China is rolling out more than a gigawatt of renewables a day, has the largest and fastest growing HVDC grid, a dominant position in battery and solar production, and all the supply chains. With the US going back to mercantilism and isolationism, China is going to have Taiwan too.

      • joshjob422 days ago
        Costs for a given level of intelligence, as measured by various benchmarks etc., have been falling by 4-8x per year for a couple of years, largely from smarter models thanks to better training at a given size. I think there's still a decent amount of headroom there, and as others have mentioned, dedicated inference chips are likely to be significantly cheaper than running inference on GPUs. I would expect to see Gemini 2.5 Pro levels of capability in models that cost <$1/Mtok by late next year, or plausibly sooner.
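
        As a rough sanity check on the compounding (a sketch; the 6x midpoint and the $10/Mtok starting price are illustrative, not actual prices):

            // If costs fall ~6x/year (midpoint of the claimed 4-8x), a capability
            // tier priced at $10/Mtok today would cost:
            const startPricePerMtok = 10; // illustrative starting price
            const declinePerYear = 6;     // illustrative midpoint of 4-8x
            for (let year = 1; year <= 2; year++) {
              const price = startPricePerMtok / Math.pow(declinePerYear, year);
              console.log(`year ${year}: $${price.toFixed(2)}/Mtok`);
            }
            // year 1: $1.67/Mtok; year 2: $0.28/Mtok, in line with the <$1/Mtok guess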
      • jamil72 days ago
        I think there’s a huge amount of inefficiency all the way through the software stack due to decades of cheap energy and rapidly improving hardware. I would expect with hardware and energy constraints that we will need to look for deeper optimisations in software.
  • gtirloni2 days ago
    Sure, we can throw code over the wall faster. Is that all that matters, though? Just like with poetry, prose, images, etc., AI generates average or worse code. Sure, it may do the job, and if your goal is to be average, fine, then you should be worried. But has anyone with deep knowledge of programming and a desire to excel actually looked at AI-generated code and thought "omg, this is a work of art. It's so perfect, and maintenance will be much easier than anything I could have done! Plus, it matches all the requirements from the stakeholders"?

    Don't get me wrong, it lets me be more productive sometimes, but people who think the days of humans programming computers are numbered have a very rosy (and naive) view of the software engineering world, in my opinion.

  • gitfan863 days ago
    I'm not following the logic here. There are tons of free-tier AI products available. That makes the world more fair for people in very poor countries, not less.
    • ben_w3 days ago
      Lots of models are free, and useful even, but the best ones are not.

      I'm not sure how much RAM is in the average smartphone owned by someone earning $5/day*, but it's absolutely not going to be the half a terabyte needed for the larger models whose weights you can just download.
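
      For a sense of where that half-terabyte comes from, weights alone take params x bytes per param (a rough sketch; the 405B size is just an illustrative example of a "larger" open-weight model):

          // RAM for weights alone; KV cache and runtime overhead come on top.
          function weightGB(paramsBillions, bitsPerParam) {
            return paramsBillions * (bitsPerParam / 8); // 1e9 params x bytes = GB
          }
          console.log(weightGB(405, 16)); // 810 GB at FP16
          console.log(weightGB(405, 8));  // 405 GB at 8-bit quantization
          console.log(weightGB(405, 4));  // ~203 GB even at 4-bit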

      It will change, but I don't know how fast.

      * I kinda expect that to be around the threshold where they will actually have a smartphone, even though the number of smartphones in the world is greater than the number of people

  • ineedasername16 hours ago
    Some people like to whittle wood. It’s no longer a career choice with strong prospects.

    As for: ” In some countries, more than 90% of the population lives on less than $5 per day.”

    Well, with the orders of magnitude difference already in place, this is not going to meaningfully impact that at all.

    I'm not dismissing this; I'm saying that it isn't much of a building block in thinking about all the things AI is going to change and needs to be addressed as a result, because it's simply in the pile of problems labeled "was here before, will be here after".

    And really, it ought to be thought of in the context of “can we leverage AI to help address this problem in ways we cannot do so now?”

  • anovikov3 days ago
    I can't see why it's a bitter prediction. It has been an observation all my life that boring, mind-numbing, but high-impact work makes the best money. Right now smart people go into coding because it's a thrill; they enjoy doing it for its own sake. Once that is no longer the case, those people will be out, competition will drop, and there will be easier bucks to make.
  • jannesan3 days ago
    This article precisely captures what I have been thinking recently. It's really demotivating me.
    • ben_w3 days ago
      Sounds about right, but consider also that music, painting, sculpture, and theatre are all simultaneously (1) hobbies requiring great skill to master, from which people derive much joy, and (2) experiences that can be bought for a pittance as a download, at a "print your own {thing}" shop, via 3D printing, or on YouTube.

      The bathwater of economics will surely get dirty, but you don't need to throw out the baby of hobbies with it.

  • admiralrohan2 days ago
    The cost of AI coding tools may decrease in the future, making them more accessible to everyone. And we will all be forced to move up the value ladder.
  • Kiro3 days ago
    AI has made me love programming again. I can finally focus on the creative parts only.
    • falcor843 days ago
      I'm possibly doing it wrong, but that hasn't quite been my experience. While with vibe coding I do still get to express my creativity, my biggest role in this creative partnership still seems to be copying and pasting console error messages and screenshots back to the LLM.
  • visarga3 days ago
    We move up, down, or sideways on the stack. That's the outcome, and it's not necessarily bad. It requires soul-searching to find our new place.
  • BrenBarn2 days ago
    The idea of "breaking the game" here is similar to that expressed in this other recent post: https://news.ycombinator.com/item?id=43650656 . The focus here is a bit different though.

    > It makes economic sense, and capitalism is not sentimental.

    I find this kind of fatalism irritating. If capitalism isn't doing what we as humans want it to do, we can change it.

  • DeathArrow2 days ago
    >Why bother playing when I knew there was an easier way to win?

    >This is the exact same feeling I’m left with after a few days of using Claude Code.

    For me what matters is the end result, not the mere act of writing code. What I enjoy is solving problems and building stuff. Writing code is just one part of that.

    I would gladly use a tool to speed up that part.

    But from my testing, unless the task is very simple and trivial, using AI isn't always a walk in the park, simple and efficient.

  • faragon2 days ago
    The main use I find for LLMs is code review and corrections following a list of criteria. It helps to detect overlooked issues.

    It is also useful for learning from independent code snippets, e.g. when learning a new API.

  • 1oooqooq2 days ago
    My AI-pilled coworker committed some code using a promise with a lambda that resolved it in a one-liner; the parameter was called resolve.

    For some reason he also included an import for "resolve" from "dns".

    (The code didn't even need a promise there.)
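
    It looked roughly like this (a hypothetical reconstruction; the names are approximate):

        import { resolve } from "dns"; // spurious: shadowed below and never used

        // A promise wrapping an already-synchronous value:
        const getAnswer = () => new Promise((resolve) => resolve(42));

        // No promise was needed at all:
        const getAnswerSync = () => 42;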

  • DeathArrow2 days ago
    >Will programming eventually be relegated to a hobby?

    I don't regard programming as merely the act of outputting code. Planning, architecting, having a high-level overview, and keeping the objective in focus also matter.

    Even if we regard programming as just writing code, we have to ask ourselves why we do it.

    We plant cereals to be able to eat. At first we used some primitive stone tools to dig the fields. Then we used bronze tools, then iron tools. Then we employed horses to plough the fields more efficiently. Then we used tractors.

    Our goal was to eat, not to plough the fields.

    Many objects are mass produced now while they were the craft of the artisans centuries ago. We still have craftsmen who enjoy doing things by hand and whose products command a big premium over mass market products.

    I don't have an issue if most of the code will be written by AI tools, provided that code is efficient and does exactly what we need. We will still have to manage and verify those tools, and to do that we will still have to understand the whole stack from the very bottom - digital gates and circuits to the highest abstractions.

    AI is just another tool in the toolbox. Some carpenters like to use very simple hand tools, while others swear by the most modern ones, like CNC.
