314 points by movis 8 hours ago | 74 comments
  • bborud7 hours ago
    Multiple times per week I have the same conversation. It goes something like this:

      - AI will make developers irrelevant
      - Why?
      - Because LLMs can write code
      - Do you know what I do for a living?
      - Yes, write code?
      - Yes, about 2-5% of the time.  Less now.
      - But you said you are a developer?
      - I did
      - So what do you do 95-98% of the time?
      - I understand things and then apply my ability to formulate solutions
      - But I can do that!
      - So why aren't you?
    
    The developers who still think their job is about writing code will perhaps not have a job in the future. Brutal as it may sound: I'm fine with that. I'm getting old and I value my remaining time on the planet.

    Business owners who think they can do without developers because they think LLMs replace developers are fine by me too. Natural selection will take care of them in due course.

    • doug_durham5 hours ago
This is a bit of a glib answer. Most of the time is spent coding, which encompasses typing, retyping, and retyping again. It also includes banging your head against the wall while trying to get one of your rewrites to work against an under-documented API.

      OP's formulation makes SWE sound like a purely noble enterprise like mathematics. It's more like an oil rig worker banging on pieces of metal with large hammers to get the drill string put together. They went in with a plan, but the reality didn't agree and they are on a tight schedule.

      • estebank5 hours ago
        Most of the time is spent figuring out what the right thing to do is, not writing the implementation. Sometimes the process of writing the implementation surfaces new considerations about what the right thing is, but still, producing text to feed to a compiler is not the bulk of the work of a software engineer. The bulk of it is to unearth requirements and turn them into repeatable software.
        • powvans4 hours ago
          Feels like lately most of the time is spent arguing about or at least worrying about whether or not AI is going to replace all software developers.
        • eweise15 minutes ago
          AI is pretty good at figuring out what the right thing to do is.
          • incanus7711 minutes ago
            AI is pretty good at pulling from the body of existing solutions of what the right thing to do is.
        • 7e3 hours ago
          If you’re spending time thinking and not experimenting, it’s because experimentation is expensive. With an LLM you don’t have to try to predict a complex system in advance; experiments are so cheap you can just converge to a solution directly. All this pontificating really isn’t that useful anymore.
          • dasil0032 hours ago
            This is very naive and reductive thinking. Experiments have a cost, you really have to think carefully about what you are trying to learn. Even when code is cheap, traffic and time are still huge constraints, and you better make sure your hypothesis actually makes sense for your goals, because AI is more than happy to fill in the blanks with a plausible but completely wrong proposal.

            More broadly, it's well understood that experiments are not a replacement for design and UX. Google is famously great at the former and terrible at the latter. Sure the AI maxxers will say the machines are coming for all creative endeavours as well, but I'm going to need more evidence. So far, everything good I've seen come from AI still had a human at the wheel, and I don't see that changing any time soon.

          • GolfPopper3 hours ago
            And before long you have a solution that is made up of a thousand pieces of spaghetti that neither you nor anyone else understands. And when your solution becomes too brittle to use, cannot be maintained, or fails catastrophically, then what? Just hope that's someone else's problem?
            • gchamonlive2 hours ago
              Refactoring is cheap too, but you have to read your code and know when to stop and ask the agent to refactor, rewrite, adopt or change libs, fix issues presented by linters and code quality scanners, change abstractions and rethink the architecture.

              It's never been easier to replace chunks of code with sane software patterns, but you have to have a feel for those patterns. And also understand what's under the hood.

              You folks speak like the only function of the agent is to spit out code and features. Get a grip and treat your deliverables with care; otherwise you only have yourself to blame, not the AI.

            • a10c2 hours ago
              That's the point. Your prototype doesn't need to be pretty. It just needs to prove that the value is there for it to be made pretty.
          • jimbokunan hour ago
            So the infinite monkeys with infinite typewriters approach.
          • antonvs2 hours ago
            > If you’re spending time thinking and not experimenting, then it’s because experimentation is expensive.

            No, because no amount of experimentation can solve many of the problems that have been solved by thinking. Even your claim about "experiments are cheap" requires thinking to decide what experiments to do. No one is generating all possible solutions that fit in X megabytes; you have to think to constrain the solution space.
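
            (For scale, a rough back-of-the-envelope: even restricted to programs one kilobyte long, there are 256^1024 ≈ 10^2466 possible byte strings. Enumeration was never on the table; thinking is how you prune the space.)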

      • ecocentrik4 hours ago
        Glib is called for. The amount of information asymmetry that's still on the table as vibe coders and vibe engineers and vibe doctors emerge is staggering. Professional experience is still incredibly valuable. Most software developers might spend more than 6% of their time coding but no senior developers are banging their heads for hours over typos.

        https://www.youtube.com/shorts/xBilK3gT5e0

        • roncesvalles3 hours ago
          These days nobody bangs their heads over typos.

          LLMs evaporated 90% of the "moments of despair": when you have an error and googling it isn't helping, or when googling it makes you realize you have to read 30 minutes of documentation.

          Coding is a joy now. LLMs shaved off all the rough edges.

          • ncrucesan hour ago
            They created other kinds of despair.

            A year ago I would've told my boss “can't be done” about the work I did today. I'd have told him to get me the right person to talk to (our partner, not an alien) who could give me some insight into what the hell I'm supposed to be doing to consume their API. Or to at least explain why it is that this can't be done.

            Instead, I spent a couple of weeks reverse engineering their terrible ideas. Yeah, it worked. But it was a complete waste of my time, and of tokens, energy, chips and RAM. And worst of all, it will lead to a terrible design.

            That will work, but will eventually collapse under its own weight, as we use our increased power to increase our sloppiness and take it a little further. Because we can manage it. For now.

          • skeeter20203 hours ago
            You can't possibly believe this, or you and I (and many others) are doing something different. LLMs have created an entire new - huge - set of bang-your-head moments, as they go off half-cocked in a million simultaneous directions, chasing their tails, or just making shit up. And since the vast majority of work is on existing - often ancient - codebases, let's find out if you feel the same way in 18 months.
            • GolfPopper3 hours ago
              LLMs are great for anyone who isn't responsible for the consequences of what they code.
            • jimbokunan hour ago
              Give them work in smaller chunks.
            • lo_zamoyski16 minutes ago
              Maybe I'm weird, but my usage has been very conservative. As in, I treat the LLM like a junior dev that I have to micromanage and handhold.

              I am terrified of allowing these things to complete tasks end-to-end with nothing intervening. Maybe that's why I don't run into many of these issues. I mostly delegate grunt work and manual tedium, not reasoning or design choices to the LLM. I may consult the LLM and ask for criticism, but there is no way I'm going to allow it to quietly make design decisions that I don't know about.

          • arcboii922 hours ago
            LLMs moved the moments of despair to PR reviews for me. It used to be that you could check on a junior dev occasionally throughout the day to make sure they're on the right track. Now you step away for 2 hours and they're raising a PR of bad-code-smell spaghetti and moving on to repeat their AI slopfest on the next task.

            It's getting hard to keep up with trying to teach new devs what bad code looks like. And I swear sometimes they just copy my PR comments into their AI tool to fix the mistakes without any of the learning.

            • jimbokunan hour ago
              At some point there needs to be an uncomfortable conversation about how, if all they’re doing is copy-pasting everything they get from you into ChatGPT, you can do it yourself for much, much cheaper.
            • eeccan hour ago
              Don’t allow juniors to use AI. It’s like university exams: no programmable calculators allowed. Review assistants, or seniors who know what’s going on, should though; it does help when used correctly.
            • FrankRay78an hour ago
              Write a damn good automated review agent that runs against their PRs before even looking at them… works well for me!
              • hackeman3006 minutes ago
                I've tried this without much luck. In my experience they get too bogged down on surface things and don't have the necessary business requirements/context to understand and find actual bugs.

                How have you set yours up that works well for you?

          • ecocentrik3 hours ago
            Languages have been reporting compile and runtime errors for decades. Additionally, most senior developers already have their minds wired to spot typos the way copy editors spot bad punctuation. Typos were only really a problem for students.
          • kibwen3 hours ago
            > LLMs evaporated 90% of the "moments of despair"

            And then condensed an equal quantity of despair out of the ether via confident confabulations.

            • taurath2 hours ago
              Equal? No, no no no. Upper management is making PoCs that promise to solve what took years of learning about tradeoffs and solution balancing, and setting goals based on that. We are heading for a cliff, and everyone is going to learn what happens when you replace already vulnerable foundation pillars with pig iron.
        • pear013 hours ago
          This is temporary. What is the SKILL.md equivalent going to be in five years? In ten? You don't already see a pattern emerging around solutions to encode that "professional experience" into the tools themselves?

          These LLMs can already incorporate our entire cultural corpus yet your "professional experience" is the threshold they won't cross?

          • datsci_est_20153 hours ago
            The word “incorporate” is doing some very heavy lifting in your assertion. These LLMs already have access to the whole corpus of architectural knowledge and software best practices, and yet they’re unable to reliably implement those best practices. Why not? Why do they often make completely unintuitive decisions, even when repeatedly prompted to ask clarifying questions?
            • pear012 hours ago
              To be clear by that and "cultural corpus" I meant their skill with natural languages. It is well known for instance that early LLMs were curiously better at composing sentences in English than doing basic math.

              Regarding such formal reasoning we have already seen marked improvement in the last year or two alone. The question is how this weighs on your prediction re their capabilities in the next two, five, ten, etc years.

              • datsci_est_20152 hours ago
                What are the properties of LLMs that have convinced you that there remains emergent complexity (e.g. the “ability” to formally reason) that we have not yet seen?
                • pear012 hours ago
                  There may be gains to be had in such emergence but that is not where I see the gains in the next five years. Those gains will be made by connecting LLMs more robustly with formal reasoning, which computers are already very good at. Continued iteration on connecting these right/left brain faculties could then lead to further emergence down the line.

                  The present notions of harnesses, structured output, or looping the LLM in to some external state or sandbox (be it debugger output or embedding into a runtime) already show early promising results along these lines. I see no reason to believe these gains will not continue over the next five years.

                  If you have some theories to the contrary, I am all ears.

                  • datsci_est_2015an hour ago
                    Extraordinary claims require extraordinary evidence, not the opposite. There’s no current evidence to suggest limitless progress, or even superlinear progress with regard to compute and energy. My guess would be sublinear or even logarithmic progress versus linear growth in compute and energy, as that’s how most physical systems behave.
                    • pear01an hour ago
                      No one said unlimited progress. Let's not resort to straw man claims.

                      If you think the potential of LLMs is overblown feel free to short the market. I don't pretend to know the future. But if I may, I don't think you are framing the debate in the correct terms. Evidence is an important facet of human affairs. So is risk. Best of luck with your predictions.

            • antonvs2 hours ago
              > Why do they often make completely unintuitive decisions

              Most likely because you haven't constrained their behavior in your prompt. You're making the assumption that they "understand" that using best practices is what you want. You have to tell them that, and tell them which practices they should use.

              • datsci_est_2015an hour ago
                They already fail to consistently follow very simple and concrete instructions like “Please do not ever mock this object, always properly construct it in your tests”, so I’m not sure how they’re going to adhere to more vague and conceptual architectural paradigms. This is a problem with generative AI in general - image generation has similar limitations.
              • antihipocrat42 minutes ago
                Senior developers know what behavior to constrain.

                If incorrect LLM output is a prompt issue then demand for experienced developers will remain, and demand may actually increase as time passes.

          • ecocentrik3 hours ago
            The capacity of the person prompting it to understand is the threshold they won't cross. They can squeeze the gap as much as possible by dumbing down answers or slowly ramping up information complexity but there is a limit to comprehension.
            • pear013 hours ago
              This is an interesting answer to questions about human agency and accountability/personhood, but I don't see how it leads to increased confidence in the role of the human as SWE.

              If LLMs get good enough, one might be tempted to ask so what if most humans can't understand the output? Human civilization has by and large been a constant exercise in us collectively accomplishing more and more while individually comprehending less and less.

              Our ancestors likely understood more about hunting live game or murdering each other than we do. Most of us do not consider that a great loss. Most of us living in the modern world depend on things we don't fully comprehend. I'm just not sure how this would lead to being reassured re the human as SWE.

              • ecocentrik2 hours ago
                We don't need as many hunters because we've domesticated sources of meat. We still need ranchers, butchers... an entire supply chain to get meat to consumers. We didn't remove humans from the loop, we just created specializations.

                Software specialization might look very different in 10 years but I doubt that technically specialized humans will be completely removed from their professions. We might not be carrying bows and arrows anymore but we will be carrying the equivalent of a rope and a Stetson.

                • pear01an hour ago
                  Ranchers, butchers... and factory farms. Most meat Americans consume has had very little interaction with a person until it is being devoured on the plate.

                  I appreciate your points. I agree with you that not all "technically specialized humans will be completely removed", but let's not pretend the comparison is going from a caveman with a spear to a cowboy with a lasso. If you concede it is likely to be very different, then at some point calling it SWE is no longer useful.

                  I think SWEs would be better off realizing they have enjoyed a relatively extreme level of privilege, and rather than trying to hold onto it, use what time they still have to advocate for a more egalitarian society, even if that means giving up some of their gains. Otherwise, speaking of farming: software has been disrupting blue collar jobs for decades, and the mass layoffs to come will really be a chickens-coming-home-to-roost moment.

                • jimbokunan hour ago
                  The software specialists may be replaced entirely by subject matter experts.

                  No need for specialized commercial software, if everyone can just explain to the computer what they want in English.

              • npodbielski2 hours ago
                Do you really want to live in a world where nobody understands the software that manages a nuclear power plant? Or medical devices? Or financial software? Or radio transceiver firmware? Even something as boring as a database, if not understood, could lead to disastrous effects if it were the government database managing people's IDs. And even if it worked fine for years, what would happen if a bad actor influenced models to generate code with security issues? If nobody can comprehend the output, how would anybody be able to think about the danger? This is even more grim than this https://www.citriniresearch.com/p/2028gic
                • pear012 hours ago
                  We live in a world with nuclear weapons. Somehow we all cope and get up every morning. I think you are missing the point - the world is already grim. It always has been. What about human affairs say in the last century alone makes you think human oversight is some panacea? The impetus for civilization was not some innate desire for financial systems or medicine. It was not having other humans murder you. The Leviathan is already here.

                  The article you shared has little to do with this. Questions of how to divide up the gains technology creates are separate from questions about the technology itself. Tbh I found what you shared so boring I could barely finish it. I already made an exhortation in this thread to support politicians who commit to erasing inequality. The idea that LLMs can only exist with inequality is nonsensical. The only thing grim about what you shared is the lack of political imagination. It's boring.

                  • jimbokunan hour ago
                    At least we have people who understand the technology underlying nuclear weapons!
                • esafakan hour ago
                  Maybe the tables will turn and people will ask, do you really want to live in a world where things aren't designed by machines (smarter than us)?
      • 011000112 hours ago
        Your answer reminds me that my biggest gripe with this site and programmer forums in general is the lack of awareness of the breadth and scale of software development. I'm curious what you work on, because it doesn't sound anything like what I work on.

        > Most of the time is spent coding which encompasses typing, retyping, and retyping again. It also includes banging your head against the wall while trying to get one of your rewrites to work against and under-documented API.

        I don't think I've experienced this to a large degree. Maybe early in my career. Most of my time now is spent formulating a solution, and time spent coding is mostly spent trying to compose my changes with the existing code in a way that is performant, reliable and meets the specifications.

      • sleight423 hours ago
        This is far more true for junior and perhaps mid-career engineers, unless you're working in an extremely well-defined problem space (* see below).

        When working as a SWE, the longer I did it (~30 years), the more of my time was spent understanding the problem, the edge cases, how to handle the edge cases, and how to do all of it affordably, on time, and within budget.

        That's engineering.

        What you're describing is "writing code". That's lower value than "solving the problem".

        I imagine a response, "But agile development, etc."

        Yep. Part of solving the problem often involves creating prototypes to determine the essential viability of the solution. But that's only part of it. Which prototypes do you write? How much time do you allocate to them before accepting it's a dead end (at least for now) and punting on it?

        That's engineering.

        Me probably coming across as a dick today? Well, I was diagnosed autistic a year ago, and I'm on extended sabbatical/unemployment (3 years now) due to autistic burnout. And masking is part of how I got the burnout.**

        * Why would someone be paying for that when there is likely someone else already doing it? (Unless you're the rare person who hopes to "disrupt" the competition.)

        ** which has me questioning why I write here at all. SMH. Why do I do what I do? No idea sometimes.

        • cduzz2 hours ago
          I'm going to mix my metaphors a bit here...

          There's the saying "Any idiot can build a bridge that stands; it takes an engineer to build a bridge that barely stands."

          To put this another way, any idiotic LLM can write code. It takes a person with domain experience to understand what code to write, rewrite, or not write.

          I've seen lots of organizations hollow out their internal competence in favor of outsourcing the skills. LLMs are the ultimate expression of that. There are people who say "you need to have people in your organization who understand how things work because they're the ones who solve problems!" and there are other people who say "focus on your core competencies! These problems you're worried about aren't your core competencies, so get rid of those experts, they're expensive and annoying; we can just sign a contract with an organization that'll know things for us."

          At some point we all will identify exactly how much "seed corn" you need for the next season. We'll figure that out because we're starving, but at least we'll all know.

        • dijksterhuisan hour ago
          you've definitely been doing this longer than i have, but our outlook and recent experiences sound very similar. also been diagnosed recently, also on similar extended sabbatical/unemployment, also come across as a dick, also trying to mask less because burn out.

          got an email address in my profile if you'd be interested in talking at some point about something, or even talking about nothing in particular. (i don't normally do this sort of HN networking stuff, i find it super cringe. but there we go).

      • pear013 hours ago
        Let's also not forget that a lot of the market edge of SWEs comes from knowing how to navigate these parts. The fact that you needed to be reasonably fluent in a language was already a barrier to entry, which meant that in better times new grads could earn six figures at their first job just for putting in that effort.

        Maybe you will still be needed. That is one question. How well you will be paid and treated when the barrier to entry is now "I can think" is another. As the parent indicates, most people doing software are not doing things akin to pure math. I don't think most SWEs want that lifestyle anyway.

        It's ok. You shouldn't fight the coming change. Instead use the time we still have to fight for more equal outcomes (vote for politicians that support UBI, Medicare for all). The longer you delude yourself that you are uniquely needed in an increasingly mechanized world the worse all our outcomes will be.

        • arandomhuman3 hours ago
          The barrier to entry for generating code may be "I can think", but the barrier to entry for solving hard, distributed/multi-faceted engineering problems still remains quite high - agents still can't really do this reliably with a decent level of efficacy.

          The progress models have made in the last 5 years isn't convincing me they'll bridge that gap too soon, although I can see how some people are convinced by how decent agentic harnesses make things look. I know it's really easy to get very hyped about the current state of the technology, but try to have a bit of skepticism.

      • bborud5 hours ago
        Are you, perchance, assuming that since you spend most of your time struggling with actual code, this is so for everyone else?

        Or are you saying that I'm lying? That I am secretly hammering away at my keyboard while pretending not to?

        No, writing code hasn't been how I spend most of my time for many decades now.

        • therealdrag05 hours ago
          Are you a staff level engineer that has dozens of other engineers banging away at code projects you help define?
          • eska4 hours ago
            Try to write a design doc before you implement something (which people find they need to do for LLMs to work at all anyway). You’ll find that you spend much less time actually writing code.

            Write proper API documentation laying out the assumptions and intent, generate some good API docs, and write a design and architecture document. You’ll find that you spend a lot less time reading code.

            • dkersten4 hours ago
              > which people find they need to do for LLMs to work at all anyway

              Everything we have to do for AI to function well would help humans to function better too.

              If you take the things you'd do for AI, but do them for humans instead, that human will easily 2x or more, and someone will actually understand the code that gets written.

              • overfeed3 hours ago
                > If you take the things you'd do for AI, but do them for humans instead, that human will easily 2x or more, and someone will actually understand the code that gets written

                This only works in high-trust teams and organizations. A lot of AI productivity gains come from SWEs putting in the extra effort because the results will be attributed to them. Being a force-multiplier for others isn't always recognized; instead, your performance will likely be judged solely on the metrics directly attributed to you. I learned this lesson the hard way by being idealistic and overestimating the level of trust that had been built after joining a new team. Companies pay lip service to software quality; no one gives a shit if your code has the lowest SEV rates.

          • bborud4 hours ago
            It has varied over the years but it isn't actually relevant since I am talking about when I write software.

            Writing code just isn't what takes time.

            • QuercusMax4 hours ago
              Getting the code into a state where it actually does what you want takes time - but a lot of that is research, testing, experimentation, documentation, etc. Those can be faster with AI assistance but you still need to bang on it enough to make sure it works right.
          • kakacik4 hours ago
            I am not, yet actual coding is a minuscule part of my workflow. The rest is more or less un-automatable by any LLM - politics, meetings, discussions, brainstorming, organizing testing teams, stakeholders and so on.

            This is what big corporations look like, not some SV startups.

      • logicchains4 hours ago
        >OP's formulation makes SWE sound like a purely noble enterprise like mathematics. It's more like an oil rig worker banging on pieces of metal with large hammers to get the drill string put together.

        Those two formulations represent different developers' approaches to the same task, the former being developers who are much better at planning than the latter.

    • KronisLV7 hours ago
      > Yes, about 2-5% of the time.

      There are also those for whom that percentage is higher, let’s say 6-50%.

      > I understand things and then apply my ability to formulate solutions

      The AI is coming for that too.

      You might just be lucky to be in circumstances that value your contributions or an industry or domain that isn’t well represented in the training data, or problem spaces too complex for AI. Not everyone is, not even the majority of devs.

      People knocking out Jira tickets and writing CRUD webapps will often end up with their livelihood taken away. Or bosses will just expect more output for the same or less pay, with them having to use AI to keep up.

      • geodel6 hours ago
        Agree. It is as if 2 totally separate groups are arguing.

        One is the very tiny slice of specialty/rare industries where code is critical but overall a small part of project costs. I can see that if code/software is 5% of the overall cost, even heavy use of AI for the code part is not moving the needle. So people in this group can feel confident in their indispensability.

        The second group is much larger and peddling CRUD/JS frontends and other copy/paste junk. But as per industry classification they are just part of the same Coder/Developer/IT Engineer group. And their bleak prospects are not some future scenario; it is playing out right now, with tons of them getting laid off. And a whole lot of people with IT degrees and certifications are not finding any jobs in this field.

        • marcindulak4 hours ago
          After hearing various similar-sounding opinions about CRUD being easy for LLMs, I started tracking how well LLMs handle a standard CRUD Django app I'm familiar with at https://github.com/marcindulak/learning-api-styles-gen-ai-ex....

          So far it appears that LLMs still require constant hand-holding, even for a small educational CRUD app.

          • magicalhippo2 hours ago
            We've had reasonable effectiveness for CRUD. It's mainly the UI toolkits we use that it struggles with, but the plumbing it can do quite well. It's not 100% vibecoding, but it's certainly a significant accelerator for parts of the job.
        • hjort-e5 hours ago
          What makes you feel that a complex frontend would be easier for AI than a non-CRUD backend system?
          • evilduck4 hours ago
            Hubris.

            I don't mean this as a snarky jab. It's coming for anything software. I've used AI to accomplish front end development and to reverse engineer proprietary USB hardware dongles in C, then rewrite the C into Rust to get easy desktop GUIs around it. Backend APIs, systems programming, embedded programming - they all seem equally threatened; it's just a matter of time. The front end threat is easy to see in the AI web front ends, but everything else is still easy pickings.

            • manmal2 hours ago
              You are describing the toy projects that had us all amazed end of last year. Large, maintainable software that can serve paying customers is in a completely different galaxy.
            • hjort-e4 hours ago
              I 100% agree it's coming for everything. I'm just curious what the arguments would be for why frontend would be easier.
              • svachalek2 hours ago
                As a manager of a full stack team, we've found AI falls short a lot more on front end. It has its weak points on both front and back, but the problems with backend are quite easy to feed back into it -- needs more performance, needs to pass this security audit, needs to deal with xyz system. The problems with frontend are more like this is ugly, it's clunky to use, people don't like it. People without years of frontend experience tend to lack the vocabulary required to get AI to fix it, period, and it ends up going around in loops.
            • skydhash4 hours ago
              > I've used AI to accomplish front end development and reverse engineer proprietary USB hardware dongles in C, then rewriting the C into Rust to get easy desktop GUIs around it. Backend

              That is not hard. It’s just tedious and very slow to do manually. The hard part would be designing a USB dongle and ensuring that the associated software has good UX. The reason you don’t see kernel devs REing devices is not that it’s impossible or that it requires expert knowledge. It’s because it’s like counting grains of sand on the beach.

          • geodel4 hours ago
            Whether a complex frontend would be easy for AI or not is beside the point. To me the questions are: 1) how many unique complex frontends are needed out of all the frontends that millions of sites out there need, and 2) will there be an increase in the need for such frontend engineers so other displaced folks can land a job there?

            I think the answer is far too few to have any positive impact on IT engineers' overall job prospects.

            • hjort-e4 hours ago
              But that's equally true for any type of system. Frontend isn't inherently easier than other systems, so I was just wondering why you singled it out. To me AI just seems better at backends and database design.
              • geodel3 hours ago
                OK, my examples seemed biased against frontend, which was not the intention.

                The thrust was overall job prospects for people in the software field. It is not that frontend is easy, but it is definitely easy to get into. Considering there are far more frontend developers than, say, C++ systems engineers or database designers, in sheer numbers they will be affected more.

                • hjort-e3 hours ago
                  Ah okay, that's fair. In my country boot camps aren't a thing, so frontend devs are rare and good frontend devs even more so; I think it depends on where in the world you are. We've got an abundance of Java devs here that I fear more for.
      • LPisGood6 hours ago
        > The AI is coming for that too.

        That may be true, I’m not gonna say one way or the other, but if AI comes for that then almost all knowledge work is effectively dead, so all that’s left would be sales or physical labor.

        • ge965 hours ago
          I wonder though, can AI make the next JS framework? I mean that in sincerity: there was the leap from jQuery to React, for example. If an AI only knows jQuery and no one makes React, will React come out of AI?
          • ASalazarMX5 hours ago
            News: "AGI refuses to make another JS framework, rages on the follies of misguided developers and their wasteful JS crutches"

            Developer community: Wow, we truly have become obsolete now!

            • ge965 hours ago
              Who will be the disrupters when there is nothing to disrupt?
            • notpachet4 hours ago
              In a shocking twist, it turns out that Mootools is the agents' preferred framework
          • scj4 hours ago
            A thought experiment: When all practical software is only written by AIs, will the AIs use goto? What will the programming language of AIs look like?

            My bet is something _like_ assembly, but not assembly.

            That being said, I think humans will still program for fun. Just like we paint portraiture in a world with cameras.

            • r_lee2 hours ago
              I think it won't be like assembly, because assembly takes more information, versus building blocks that have more dense information in them, kind of like how we use libraries and frameworks.
            • ge964 hours ago
              Yeah, that's my thing for my hardware projects: I'm not going to reach for an LLM to do it, I want to write the code myself and be present. For something new I would consider using an LLM to generate something, like a computer vision implementation or something I don't already know; I would still know how the end result works, and it would just be for a POC.
          • wiseowise3 hours ago
            It can't. Framework hierarchy is largely based on social structure rather than pure technical merit. Otherwise React would've been displaced a long time ago.
          • smrq5 hours ago
            People didn't leap from jQuery to React. It's a lot easier to imagine an AI looking at jQuery and [insert any server side MVC framework] and inventing Backbone.
        • BurningFrog5 hours ago
          The history of the last 250 years is inventing new professions as old ones are automated away.

          I expect that to continue.

          • coldtea5 hours ago
            The history of the last 250 years was moving from agriculture to industrial work to service work. Now the last frontier is starting to be overtaken by automation too.

            (And in all of those transitions, millions were left behind without work or with far worse prospects. The people who took the new jobs were often a different group, not the people who knew the old jobs and were already in their 30s and 40s.)

            And what would be the new professions that uniquely require humans, when even thinking and creative jobs are eaten by AI? Would there be a boom of demand for dancers and chefs, especially as millions lose their service jobs?

          • nitwit0054 hours ago
            Given some sort of machine with human capabilities, there would be no reason to assign that profession to a human, excepting perhaps cost.
          • charlie903 hours ago
            Like doordashing and pokemon card reselling.
            • wiseowise3 hours ago
              Don't forget OnlyFans and streaming.
          • georgemcbay4 hours ago
            > The history of the last 250 years is inventing new professions as old ones are automated away.

            Even if this still holds true ("past performance is no guarantee of future results") the part about it that people handwave away without thinking about or addressing is how awful the transitional period can be.

            The industrial revolution worked out well for the human labor force in the long term, but there were multiple generations of people who suffered through a horrendous transition (one that was only alleviated by the rise of a strong labor movement that may not be replicable in the age of AI, given how it is likely to shift the leverage of labor vs. capital).

            If you want to lean on history as an indication that massive sudden productivity changes will make things better for humanity in the long run, then fine, but then you have to acknowledge that (based on that same history) the transition could still be absolutely chaotic and awful for the lifespan of anyone who is currently alive.

      • dmazzoni6 hours ago
        There are periods of time where I might spend 80% of my time "coding", meaning I have minimal meetings and other responsibilities.

        However, even out of that 80% of my time, what fraction is actually spent "writing code"?

        AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:

        - Understanding the problem
        - Waiting for the build system and tests to run
        - Manually testing the app to make sure it behaves as I'd like
        - Reviewing the diff to make sure it's clear
        - Uploading the PR and writing a description
        - Responding to reviewer feedback

        There are times when AI can do the "write the code" portion 10x faster than I could, but if it's production code that actually matters, by the time I actually review the code, I doubt it's more than 2x.

        • coldtea5 hours ago
          >AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:

          > - Understanding the problem
          > - Waiting for the build system and tests to run
          > - Manually testing the app to make sure it behaves as I'd like
          > - Reviewing the diff to make sure it's clear
          > - Uploading the PR and writing a description
          > - Responding to reviewer feedback

          Which parts of those do you think it doesn't help with?

          • malfist5 hours ago
            There is no shortcut to understanding. No one can understand things for you
            • Animats3 hours ago
              They can make it unnecessary for you to understand.

              Consider hash tables. Nobody implements a hash table by hand any more. I've written some, but not in this century. Optimal hash table design is a specialist subject. Do you know about Robin Hood algorithms? Changing the random number generator's seed to discourage collision attacks? A basic hash table starts to slow down around 70% full. Modern hash tables can get above 90% full before they have to expand.

              Who keeps Knuth's Fundamental Algorithms handy any more? I own both the original edition and the revised edition. They're boxed up in the garage. I once read that book cover to cover. That was a long time ago.

              That's not AI. That's solving the problem and putting it in a black box. That's how technology progresses.
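
              For the curious, a minimal sketch of the Robin Hood idea in Python (illustrative only: fixed capacity, no resizing or deletion). On insert, an entry that has probed far from its home slot evicts any resident sitting closer to its own home, which keeps probe lengths short even at high load:

                class RobinHoodMap:
                    def __init__(self, capacity=64):
                        # each slot is (key, value, distance_from_home) or None
                        self.slots = [None] * capacity

                    def _home(self, key):
                        return hash(key) % len(self.slots)

                    def put(self, key, value):
                        entry, dist = (key, value), 0
                        i = self._home(key)
                        while True:
                            slot = self.slots[i]
                            if slot is None:
                                self.slots[i] = (entry[0], entry[1], dist)
                                return
                            if slot[0] == entry[0]:
                                # same key: overwrite value, keep its distance
                                self.slots[i] = (entry[0], entry[1], slot[2])
                                return
                            if slot[2] < dist:
                                # resident is closer to home ("richer"): take its
                                # slot and carry the displaced entry forward
                                self.slots[i] = (entry[0], entry[1], dist)
                                entry, dist = (slot[0], slot[1]), slot[2]
                            i = (i + 1) % len(self.slots)
                            dist += 1

                    def get(self, key):
                        i, dist = self._home(key), 0
                        while True:
                            slot = self.slots[i]
                            if slot is None or slot[2] < dist:
                                raise KeyError(key)  # would have displaced it by now
                            if slot[0] == key:
                                return slot[1]
                            i = (i + 1) % len(self.slots)
                            dist += 1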

              • malfist2 hours ago
                That's obviously not what I'm talking about. If you're asking an AI to write an optimal hash table algorithm, something is clearly wrong. I'm talking specifically about understanding the business domain and problem you are trying to solve.
      • nitwit0054 hours ago
        > The AI is coming for that too.

        Yes, but if/when that happens, it won't just affect software engineers. An AI that can do that can replace any white collar worker.

        > People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away.

        I'm not sure anyone is actually working on those. People talk about spending all day writing CRUD apps here, but if you suggest there are already low code tools to build those, they will promptly tell you it's too complex for that to work.

        • laughing_man3 hours ago
          >Yes, but if/when that happens, it won't just affect software engineers. An AI that can do that can replace any white collar worker.

          Yes. Yes, that's exactly what we're going to see, and more swiftly than people are generally comfortable with. What are we going to do with all those cubicle dwellers?

      • PunchyHamster6 hours ago
        > The AI is coming for that too.

        Current AI tech giants prove over and over and over again that this is not the case

        • cromka5 hours ago
          We've literally just started, what "over and over" do you refer to?
          • malfist4 hours ago
            I've been told for the past four years that AI is coming for my job. And that's just not true. It's no closer to that than it was 4 years ago.
            • Danox3 hours ago
              It is the lament of every generation of humans to think that they are the pinnacle of everything that has come before. We are just at the start of the so-called AI era; many very smart people coming up still haven’t really got their hands on all of the material available from a hardware and software standpoint. We are still at the early stages.

              I am very optimistic. I just wish I was younger, junior high or high school age, to take advantage of it with my current resources. Damn… the oldest lament in the books.

            • esafak17 minutes ago
              Ask some juniors how their job search is going. In five years, ask the seniors.
            • laughing_man3 hours ago
              I'm not sure how anyone would know if it's closer or not. There's been a lot of progress in LLMs over the last four years.
            • KronisLV3 hours ago
              > Its no closer to that than it was 4 years ago.

              There are people and companies out there releasing entire vibe coded projects, and for some, upwards of 80% of the code they develop is AI-assisted/generated. Since around the end of 2025 and models like Opus 4.6, the SOTA has gotten good enough to work agentically on all sorts of dev tasks with pretty good degrees of success (harnesses and how you use them still matter, ofc).

              • wiseowise3 hours ago
                > There are people and companies out there releasing entire vibe coded projects and for some upwards of 80% of the code they develop is AI-assisted/generated.

                And how much revenue do they generate?

            • kakacik4 hours ago
              It feels like it's just around the corner. But when you turn the 20th corner and it's still behind the next one, maybe things are a bit different than they seem, or than clueless emotions make us believe.

              Long term it's bleak, but short/medium term, not so much. If I get fired it won't be an LLM replacing me but rather company politics, budget changes, etc. Which was the only real (very real) risk for the past 15 years too, consistently. But it helps to not work for a US company.

          • hansmayer4 hours ago
            > We've literally just started

            5+ years in the software world is like 30 years in others...So...given lacking use-cases and humongous amounts of capital already wasted on chatbots...It's more like "we" are closer to closing curtains than to "just started"...

          • ASalazarMX5 hours ago
            Hype cycles. AI has made developers obsolete like a dozen times in the last couple of years, at least according to its developers.
          • luckystarr5 hours ago
            Discovery of the best solution in a problem space is not generative but only verificative. Meaning: the LLM can see if a solution is better than another, but it can't generate the best one from the start. If you trust it, you'll get sub-par solutions.

            This is definitely an agent problem instead of an LLM problem. Anybody got something explorative like this working?
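
            To sketch the shape I mean in Python (a toy mutate/score pair stands in for "LLM proposes, verifier judges"; placeholder logic, not a real harness):

              import random

              TARGET = "robust handler"  # stand-in for "the best solution"
              ALPHABET = "abcdefghijklmnopqrstuvwxyz "

              def score(candidate):  # verification: cheap and objective
                  return sum(a == b for a, b in zip(candidate, TARGET))

              def propose(candidate, rng):  # generation: blind local mutation
                  i = rng.randrange(len(candidate))
                  return candidate[:i] + rng.choice(ALPHABET) + candidate[i + 1:]

              rng = random.Random(0)
              best = "x" * len(TARGET)
              best_score = score(best)
              while best != TARGET:
                  variant = propose(best, rng)
                  s = score(variant)
                  if s > best_score:  # the verifier gates what survives
                      best, best_score = variant, s

            The generator never has to be smart; progress comes entirely from the verifier refusing regressions. Swap propose for an LLM call and score for your test suite and you get the explorative loop.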

            • coldtea5 hours ago
              So? Hundreds of millions of office and dev jobs aren't about developing "optimal solutions" to begin with.
      • tjwebbnorfolk5 hours ago
        >> I understand things and then apply my ability to formulate solutions

        > The AI is coming for that too.

        If this is true, then you'd have to conclude that AI is coming for everything. I'm still not convinced by that. But I am convinced that the part of software development that involves typing code manually into an IDE all day is likely gone forever.

        • itsafarqueue5 hours ago
          > If this is true, then you'd have to conclude that AI is coming for everything.

          Now you’re getting it

        • flatline4 hours ago
          It really doesn't have to come for everything to feel like it's taking everything. If it eliminates 10% of white collar jobs over the next decade, the impact will be felt everywhere.
          • bonesss3 hours ago
            I struggle to understand the logic (in general, the way people are talking); normally efficiencies come with increases in production, scale, and use-cases.

            So if 10% of lawyers get AI-d away, let’s say, the remaining 90% are 1.1x+ efficient and also up against other lawyers enjoying the same; work might go up. And on the customer side there is sooooo much BS with lawyers, but if both lawyer and customer can communicate faster or better with the LLMs, we should see more and better cases with better dialog and case handling. Again, the total amount of lawyering could go up a lot. And then we have the cases prohibitive without the LLMs, now possible for big money. Better LLM-empowered lawyers should be able to create new and more lawyer work.

            As it stands I see people selling services that are subsidized by VC, template jobs we’d be doing faster with copy paste but it’s not copyright infringement when OpenAI does it, and a rush for valuations to soak up VC because the business model isn’t there. I’m seeing a huge uptick in visual bugs on large commercial platforms and customer facing apps, and don’t feel OpenAI is gonna kill Office anytime soon… or Chromium… or Steam… or emacs…

            Call me an optimist, but I think those LLM pump and dumpers are creating a wave of fear that would be quite different if they weren’t lying and trying to boost an IPO. Chat GPT 2 was too dangerous to release, lul, and the class action suits are just getting started.

            An actual lawyer-replacing tech company should sell lawyering for infini-money, not pens that’ll totally 10x your lawyering (bro).

            • sophacles2 hours ago
              And what do those 10% of lawyers do? Every other industry also got reduced by 10+%; it's not like they have a job elsewhere.

              So.... they just starve in the streets?

              Even if some other, arguably better job comes along, would they retrain for it? (You can say yes, but take a look at the long history of people choosing to join a cult and vote for an orange moron instead of learning a new skill).

              Either you're convinced you won't be too badly affected and will gladly watch huge swaths of people suffer, or you're deluded enough to think that it will really, truly be different this time. In any case, I hope you get the worst results of what you preach.

          • tjwebbnorfolk3 hours ago
            Sure, but who doesn't think that 10% of white collar jobs are mostly bullshit anyway?
            • esafak15 minutes ago
              The only thing worse than a bullshit job is no job.
      • no_op5 hours ago
        Even if AI advances continue, for quite a while there's likely still going to be the 'Steve Jobs' role. That is, even if AI coding agents can, in the future, replace entire teams of SWEs, competently making all implementation decisions with no guidance from a tech-savvy human, the best software will likely still involve a human deciding what should be built and being very picky about how, exactly, it should externally behave.

        I don't know if it makes sense to call that person an SWE, and some people currently employed as SWEs either won't be good at this or aren't interested in doing it. But the existing pool of SWEs is probably the largest concentration of people who'll end up doing this job, because it's the largest concentration of people who've thought a lot about, and developed taste with respect to, how software should work.

        • bmiedlar3 hours ago
          This matches what I'm seeing. I've been building software for a long time, and I'm building more now with AI than I ever could with a traditional team. But the helpful throughput comes from knowing what to build and which tradeoffs matter. The AI doesn't have that. It's a force multiplier on experience, not a replacement for it.
        • laughing_man3 hours ago
          How many Steve Jobses do we need as a percentage of people developing software?
      • Aperocky5 hours ago
        > The AI is coming for that too.

        That's what we fundamentally disagree about.

        Yes, AI is coming for solution formulation, absolutely, but not all of it, because it is actually a statistical machine with a context limit.

        Until the day LLMs are not statistical machines with a context limit, this will hold. Someone needs to make something that has intent and purpose, and evidently, for now, that is not achieved by adding another 10T to the LLM parameter count.

        • bel85 hours ago
          > because it is actually a statistical machine with context limit.

          So are humans.

          Machines have surpassed humans by magnitudes in many capabilities already (how many billion multiplications can you do per second?)

          And I argue that current LLMs have surpassed many of my capabilities already.

          For example GPT/Opus can understand and document some ancient legacy project I never saw before in minutes. I would take a week+ to do the same and my report would probably have more mistakes and oversights than the one generated by the LLM.

          • KalMann3 hours ago
            > So are humans.

            AI advocates are _way_ too confident about the nature of human cognition. Questions that have been debated by philosophers and cognitive scientists for decades are now "obvious" according to you people, though you never provide any argument to support your statements.

          • Aperocky4 hours ago
            We are not pre-trained using the summary of all human knowledge over all of history. Yet we make certain decisions with much more ease.

            We are much more limited, but we fundamentally work differently. Hence adding more parameters like certain companies are doing isn't necessarily going to help. We need to rethink how LLMs work, or how they work in tandem with something that's completely different.

            I think it's doable, I just don't believe it's LLMs, and I don't think anyone now knows what it is.

            • bel84 hours ago
              > We are not pre-trained using the summary of all human knowledge over all of history.

              But we are? That's our education system.

              The only reason school doesn't try to shove more information in our brains is because we hit bandwidth limits.

              • KalMann3 hours ago
                > But we are? That's our education system.

                That is not what the education system does. That's an obvious distortion of reality. People train LLMs over billions of documents to statistically predict the next word and gain an understanding of language. This statistical processing is meant to mimic humans' natural language learning ability. And there has been continued evidence of the limitations of this approach in accurately mimicking the totality of human cognition.

        • itsafarqueue5 hours ago
          Yours is a “God of the gaps” argument. You will remain technically correct (the best kind of correct!) long after the statistical machine has subsumed your practical argument, context limit and all.
          • Aperocky4 hours ago
            I fall into the "pessimistic heavy user" camp. I burn thousands of dollars' worth of SOTA tokens monthly, but that just makes me more acutely aware of the limitations, of the amount of work I need to do to work around them, and of which decisions I should reserve for myself instead of trusting the LLMs.
        • coldtea5 hours ago
          >but not all of it, because it is actually a statistical machine with context limit.

          And the human mind is not?

          • KalMann3 hours ago
            I can give you the exact mathematical formula used to statistically optimize the output of a neural network from input examples. Can you do the same for the brain?
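
            (Concretely, one standard recipe: cross-entropy loss L(θ) = -Σ log p_θ(next token | context) over the training set, minimized by gradient descent, θ ← θ - η∇L(θ). Nothing remotely comparable exists as a closed-form account of the brain.)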
          • nothinkjustai5 hours ago
            It’s not.
      • bborud6 hours ago
        > The AI is coming for that too.

        To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.

        This was something I learnt in my very first job in the 1980s. I worked for someone who did industrial automation beyond PLCs and suchlike. He spent 6 months working in the company. On the factory floor, in the logistics department, in procurement, in accounting and even shadowing the board. Then he delivered a proposal for how to restructure the parts of the company, change manufacturing processes, and show how logistics and procurement could be optimized if you saw them as two parts in a bigger dance.

        He redesigned the company so that it could a) be automated, and b) leverage automation to increase the efficiency of several parts of the business. THEN, he started planning how to write the software (this was the 80s after all), and then we started implementing it.

        Now think about what went into this. For instance we changed a lot of what happened on the factory floor. Because my boss had actually worked it. So he knew what pain points existed. Pain points even the factory workers didn't know how to address because they didn't know that they could be addressed.

        I was naive. I thought this was how everyone approached "software projects". People generally don't. But it did teach me that the job isn't writing code. It is reasoning about complex systems that often are not even known to those who are parts of it.

        And this is for _boring_ software that requires very little creativity and mostly zero novelty. Now imagine how you do novel things.

        > People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.

        You make it sound like it is a bad thing that certain tasks become easier.

        I spent a lot of time writing CRUD stuff, because the things I really want to work on depend on it. I don't enjoy what is essentially boilerplate. Who does? If you can do the same job in 1/20 the time, then how is this a bad thing?

        It is only a bad thing if writing CRUD webapps is the limit of your ability. We don't argue for banning excavators because they put people with shovels out of work. We find more meaningful things for them to do and become more productive. New classes of work become low-skilled jobs.

        If you have been doing software for a while, you are probably doing some subset of this. But these things are hard to articulate, because they are not something we think about. Like walking: easy for us to do, hard to program a robot to do.

        • coldtea5 hours ago
          >To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.

          One person needs to do that. What about the other 100, who aren't doing that currently to begin with, but are doing the AI-automatable work?

        • SoftTalker6 hours ago
          > To some degree yes, in practice, not so much.

          We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need? I think it's mostly a cope.

          We have robots walking just fine now, by the way.

          • sarchertech6 hours ago
            If they can do those things they can effectively replace any white collar job. That’s about 45% of the workforce. Societies tend to collapse around 25-30% unemployment.

            Imagine 45% of higher than average paying jobs gone.

            If that happens we’ll either figure out a new economic system, or society will collapse.

            Also, saying robots are walking just fine is misleading for any definition of "just fine" that means anywhere near as good as a human.

            • BurningFrog5 hours ago
              It's important (and calming) to understand that since the Industrial Revolution started ~250 years ago, we've automated away most jobs several times over, while employment levels have stayed pretty constant.

              "Automating half the jobs" is the same as "double productivity per worker".

              When the doubling happens in 5 years rather than 50, it might be more disruptive, but I'm convinced we're on the verge of huge improvements in human standard of living!

              • wartywhoa235 hours ago
                What in the current state of world affairs outside of IT do you think is indicative of that potential for huge improvements in human standard of living?
                • BurningFrog3 hours ago
                  It's what I mentioned:

                  If we double productivity per worker, we have twice as much wealth on average.

                  I know there are angry people convinced that this will all be consumed by billionaires and jews, but historically that is not at all the track record of the last 250 years, and I expect that to continue.

            • ryandrake5 hours ago
              Look at how the billionaires are talking about AI: Their clear, unambiguous goal is basically to replace all white collar "knowledge" jobs. And there's currently nothing regulatory that's stopping them--they just need to wait for the state of the art to improve. Once AI is "good enough" if it ever is, they won't even think twice about 45% unemployment. What are we unemployed workers going to do about it? There's no effective labor organization left. Workers have basically no political power or seat at the table. We're not going to get violent--the police/military are already owned by the billionaire class. We're just going to eventually become economically irrelevant and die off.
              • geodel5 hours ago
                > We're just going to eventually become economically irrelevant and die off.

                As harsh as it may sound, it seems rather likely to me. It is not like s/w engineers have helped struggling workers in other sectors beyond the sanctimonious "learn to code" advice. So software folks can't expect any solidarity or help from others.

              • kiba5 hours ago
                The fundamental issue isn't unemployment due to automation, but the fact that society cannot benefit from unemployment.

                It should be something for us to celebrate, because it means greater freedom for humans to pursue something else rather than spending time doing drudgery.

                • shinryuu4 hours ago
                  Put it another way, the issue is that resources are not shared more equitably. This is especially egregious considering that LLMs are trained on all human knowledge. We've all been contributing to this enterprise, and what we may end up getting in return is unemployment.
              • monknomo5 hours ago
                45% of folks sitting on their hands are going to have the free time to talk, and this group of people are skilled at organization. Are you planning on throwing your hands up and passively accepting whatever comes your way?
                • rootusrootus4 hours ago
                  And at least in the US they have >45% of all the small arms weaponry. There is no bunker strong enough nor private army big enough if 100M people come for you.
                  • ryandrake4 hours ago
                    They're probably betting that the technology they will need to defend their bunkers, think autonomous kill-bots or whatever, will emerge before people start to riot.

                    Or they're planning to build an Elysium-like colony in the ocean or space, to keep the billionaire class far from danger.

              • rootusrootus4 hours ago
                I get that it is popular to hate billionaires these days, but realistically, they did not get to be billionaires by being stupid. It runs directly counter to their own interests to induce anything like 45% unemployment. They will get poorer, the world they live in right along with the rest of us will get noticeably shittier, etc.

                More likely they figure out what to do with a bunch of idle talent. Or the coming generation of trillionaires will.

          • bborud6 hours ago
            We never noticed how easy the code writing part had already become because it happened slowly. Through mechanical means, through the ability to re-use code, and through code generation.

            Heck, even long before LLMs about 10% to 30% of my code was already automatically generated. By tooling, by IDLs and by my editor just being able to infer what my most likely input would be.

            > We have robots walking just fine now, by the way.

            I don't think you got the point I was trying to make.

            • SoftTalker6 hours ago
              True, but I guess I see a distinction between scaffolded/templated boilerplate or autocomplete and actual application logic. People have generated boilerplate from templates for ages, as you say. RoR is maybe a pretty good example, but there wasn't even early-days AI involved in doing that.
          • phkahler5 hours ago
            >> We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need?

            Because they are currently "generative AI" meaning... autocomplete. They generate stuff but fall down at thinking and problem solving. There is talk of "reasoning models" but I think that's just clever meta-programming with LLMs. I can't say AI won't take that next step, but I think it will take another breakthrough on the order of transformers or attention. Companies are currently too busy exploiting the local maxima of LLMs.
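
            A toy sketch of what "autocomplete" means here (the bigram table is invented; real models predict over vocabularies of ~100k tokens using learned weights):

                # Greedy next-token generation over a toy bigram model
                bigram = {
                    "the": {"cat": 0.6, "dog": 0.4},
                    "cat": {"sat": 0.9, "ran": 0.1},
                }
                tokens = ["the"]
                while tokens[-1] in bigram:
                    probs = bigram[tokens[-1]]
                    tokens.append(max(probs, key=probs.get))  # most likely next token
                print(" ".join(tokens))  # -> "the cat sat"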

            • rootusrootus4 hours ago
              > Companies are currently too busy exploiting the local maxima of LLMs

              I get the feeling we can already spot the next AI Winter. Which is okay, we need a breather, and the current technology is useful enough on its own.

          • terseus6 hours ago
            > Why do we believe that LLMs are going to stop there?

            Why do you believe they won't? I think it's reasonable to assume that we will hit a ceiling that current models will not be able to break.

            > We have robots walking just fine now, by the way.

            Walking and reasoning are unrelated abilities.

            • SoftTalker6 hours ago
              Walking was given as an example of "hard to program a robot to do it" by GP. Well, now we have robots that can walk.

              What evidence is there that LLMs have hit a ceiling at being able to do things like talk to users or stakeholders to elicit requirements? Using LLMs to help with design and architecture decisions is already a pretty common example that people give.

      • vga16 hours ago
        >bosses

        The AI is coming for those too.

        • snozolli4 hours ago
          Something like five to ten years ago, when AI hype was starting to hit media, one of the claims was that AI would come for middle-management first. Since middle-management can generally be described as collecting information from underlings and reporting information to upper management, their work was supposed to be easy to automate with AI. As far as I can tell, this hasn't proven to be true at all, and we software engineers proudly wrote ourselves out of work by constantly publishing our source code and discussing it openly.
      • at-fates-hands4 hours ago
        >> Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.

        Anecdotal evidence to support this.

        I work with both dev and design teams. Upper management has already gone through several rounds of layoffs and offshoring on the two dev teams I work with. The devs they did keep were exactly what you said: the capable ones who reliably closed their Jira tickets and never missed a deadline for building their features or components. And now? Their work has tripled, and the only help they get from management is: "Start to figure out how to leverage AI, we're going to be in a hiring freeze for the next 10 months."

        The double whammy is pretty staggering: losing onshore team members, then getting no help from management to fix the problem they just created, beyond essentially being told to figure out how to use AI to keep up.

        I would echo what one of the devs told me: "If this is the new 'AI era' then you can count me right the fuck out of it."

      • thisisit5 hours ago
        A lot of people don't seem to get this: it is easy to go from terrible to average, but much harder to go from average to good.

        I am sure the AI bros are the same people who were convinced consumer-grade fully automated driving was going to happen "by the end of the year" for the last 7 years.

        • Peanuts993 hours ago
          I agree with the statement and think a lot of people miss this, but I also wonder how many people probably don't care for good, they only care for 'good enough'.
          • manmal2 hours ago
            Many large systems can’t be built good enough because they just fall apart. Try letting a junior dev make an ERP or a database system.
        • lostmsu5 hours ago
          No, I never believed in fully automated driving by Tesla, but as the LLMs improve, my personal estimate for the date of human-level AGI is rapidly moving toward "present". Before GPT-2 I had it somewhere around 2100; at GPT-2 I thought maybe by 2060 if we were lucky. Now I think it is 2035 or maybe even sooner.
          • rootusrootus4 hours ago
            I like to see the optimism, even if I don't share it. I think it's incredible hubris that humans think we are about to reinvent our own level of intelligence, just because we made a machine that talks pretty.
            • lostmsuan hour ago
              Your own comment in my timeline is 7 years out of date. GPT-2 talked pretty, that was its whole thing. If you are trying to claim there's no difference between 5.5 and 2 you are delusional (hallucinating?).
              • rootusrootus34 minutes ago
                I think I was fairly clear, I said that I think it is hubris to think what we have created is anything even slightly like human intelligence. It talks very pretty (a lot of work has gone into this aspect in particular), and it does demonstrate the extent to which, as individuals, most of us do not have especially unique thoughts nor problems to solve. It exposes how quickly humans jump to anthropomorphizing pretty much anything.

                Is it a handy tool? Yep! I use it every day. But it is laughable to think this is the path to AGI. The most common counterargument on HN is some variation of "but you can't prove that this isn't just like how a human thinks". A conspiracy theory at best, just reinforcing the fact that we know very little about how even simple non-human brains function.

      • oblio6 hours ago
        >> I understand things and then apply my ability to formulate solutions

        > The AI is coming for that too.

        In that case all [1] non manual work is doomed, until robotics has an LLM moment.

        [1] With the exception of all fields protected by politics or nepotism.

        • rootusrootus4 hours ago
          > all non manual work is doomed

          All work in general. Knowledge workers can still do manual work, and will compete to do so when there is no option to continue what they do today.

      • wiseowise3 hours ago
        [flagged]
    • Insanity4 minutes ago
      Yeah, coding speed was almost never the bottleneck, I found. AI now does the typing and some of the thinking. It doesn't figure out what needs to be built and how it all plays together (yet).
    • amw-zero2 hours ago
      I agree in principle, but I think the 2-5% estimate is extremely low. I could be sold on most developers spending ~25%, up to 40% of their time on code. But very few people are spending 2% of their time on it. Unless you're some sort of super senior staff / advisor to the CTO at a gigantic company, which has already placed you on rare terrain.
      • siren2026an hour ago
        Most people overestimate how much time they spend "writing code".

        I have interviewed a ton of people in my career, and when I ask "how much time did you spend writing code at your last job?", the more junior the person, the more they overestimate the time spent writing code (some would say 90%!). Once they joined, I was able to see how much time they really spent writing code, and it was almost never more than 30%.

        Mostly because the code is only the final output. You spend most of your time doing research, talking to people, working on quarterly OKRs, going to meetings, etc.

        If you just write code, you are either an extremely junior person who works on things trivial enough not to require research, or you are deluding yourself and don't realize you spend most of your time doing other things.

      • d3rockk2 hours ago
        Might be closer to accurate if the 2-5% is his estimate of the physical time spent making keystrokes.
        • bee_rideran hour ago
          Surely we should only count the time actuating the key. Apple keyboard users are in shambles.
          • layer827 minutes ago
            Only the keydown, not the keyup.
        • siren2026an hour ago
          Nope. I would bet most people really only do 2-10%.

          But we would like to convince ourselves we don't.

    • brandensilva4 hours ago
      I remember being that kid in high school who worked math and logic problems hard, which made me very technical and taught me to push through painful mental challenges on the regular. Not many of my graduating class went on to become engineers, for a reason: it isn't easy work by any means, and I'm guessing it's quite draining for people who don't use their brains like we do.

      So while AI will change the industry I don't see any reputable company firing the smartest ones in the room for junior level intelligence.

      Even with it advancing someone has to be responsible for when it screws up which we know it will.

    • hateful6 hours ago
      Not sure where I first heard this, but I say it to my team all the time: "Programming is thinking, not typing"
      • strbean5 hours ago
        I know an accomplished CS professor, ACM fellow, cited in Knuth's TAOCP (as well as being an easter egg!), who still hunt-and-pecks. In fact, hunt-and-pecks incredibly slowly.

        Seeing him type really reinforced this idea.

      • the_hoffa5 hours ago
        I've always told my Jr Engineers to "think twice, code once".

        If I gave them a task and they immediately started typing it out, I would tell them to stop typing and ask them to explain to me what they were doing; they'd often just spit out what they thought the code should do, and I'd often point out edge cases they missed and would have missed had they just spit out code and a PR, wasting everyone's time. I would also insulate them from upper management to give them time to actually think (e.g. I wouldn't be coding so they could think then code).

        To your point and to the GP's point, and one point I keep raising with LLM's: "typing is not where my time sinks are"

      • CodeMage5 hours ago
        That's very true, which is why I find it insulting that so many AI proponents use the word "typing" to refer to writing code. It carries an implication that if you enjoy writing code by hand, you enjoy a mindless activity.
    • ravenstine3 hours ago
      This answer makes two big assumptions that haven't been proven out yet.

      - Understanding code without writing it is as viable as understanding code that you've worked with directly or indirectly

      - Businesses care that you understand code

      I really doubt the first one. Traditionally, understanding a code base in large part came from working with it intimately and building that muscle memory. The idea that understanding code by reading it is as good as understanding it from writing it, in my opinion, is not realistic.

      Whether businesses care that their engineers (whom they increasingly view as monkeys at LLM typewriters) understand the code remains to be seen. I don't think they particularly care whether their code runs slow and is buggy, so long as it works just well enough to churn out features and continue to pull in income.

      • simonw3 hours ago
        > The idea that understanding code by reading it is as good as understanding it from writing it, in my opinion, is not realistic.

        As one of those developers who has written almost no significant code by hand since November 2025, but has produced a great deal of working software, I still understand the majority of the code I've produced just as well as if I'd typed it myself.

        I may not be typing it myself, but I'm manipulating it constantly. It's not as simple as "reading" it - I'm reading it, executing it, figuring out refactorings for it, having tests built for it, having documentation built for it, sometimes writing that documentation myself, spinning up example scripts that use it, then building new code that depends on that previous code.

        It's that act of exercising the code that gives me confidence that I understand it.

      • foobarian3 hours ago
        > understanding it from writing it

        On the surface it sounds weird - why would this be?

        Possibly because building a system is not a one-shot step, but a process of many iterations, each of which involves experiments in production, and gaining more learnings. So at the end of the process, you don't just have N lines of working code, but also N lessons learned along the way. So presumably with the AI process we miss out on half the value.

        Now the going thesis is that this extra value is unnecessary if we take the plunge and don't look back. My gut says the answer is somewhere halfway, I guess we'll see.

    • czhu124 hours ago
      Isn't the long term trend just that we don't need as many engineers, not that there will be no more software engineers?

      Theres another, different loop I keep seeing which is:

        - Company A lays off engineers citing AI efficiencies
        - People say it's because of over-hiring during 2020
        - Company B lays off engineers citing AI efficiencies
        - People say it's because it was never a good business
        - Company C lays off engineers citing AI efficiencies
        - People say it's because there's a recession
      
      I guess to cite a counterexample: unemployment is still super low, and software jobs are still holding up. But the bear case is that eventually 5% of the people will be able to do what everyone does today, and the demand for software won't grow at the same pace.
      • Xirdus3 hours ago
        If company A is Amazon, company B is Ubisoft, and company C is Oracle, then I think it's very likely there isn't any pattern or "loop" here and it's legitimately just 3 different companies in 3 different situations doing layoffs for 3 different reasons but all 3 reaching for the same PR playbook. "We're leveraging AI to increase productivity" is the new "we're streamlining our business and focusing on our core products".
    • sefrost6 hours ago
      Only 5% of your time is spent writing code? That sounds like a low estimate for most software engineers I work with.

      May I ask if you could estimate how you spend the other 95% of the time?

      • hatthew4 hours ago
        In no particular order

            - Meetings
            - Reading papers
            - Understanding legacy code
            - Reading internal news
            - Ad hoc chats with coworkers
            - Writing docs
            - Editing configs
            - Thinking about solutions
            - Slacking off
            - Analyzing results
            - Testing code
            - Reviewing PRs
            - Understanding others' ongoing projects
        • PizzaBorsch4 hours ago
          AI can do everything you listed except chats with coworkers and slacking off.

          I just don't think you've utilized the most recent versions of codex or claude.

      • Enginerrrd4 hours ago
        It sounds plausible to me, since this is pretty on par with most other engineering disciplines. I’m a civil engineer. My responsibility is ultimately mostly to produce a constructable plan set. I spend far less than 5% of my time drafting or modeling.
      • davidw6 hours ago
        Commenting on Hacker News?
        • wartywhoa234 hours ago
          For those who claim to be developers who code no more than 5% of their time and resort to arguments like "we're already not writing machine code by hand for 50 years, how is AI different from a higher level language?", it's not commenting, it's shilling for the AI corpocracy on HN.
          • truncate3 hours ago
            >> "we're already not writing machine code by hand for 50 years, how is AI different from a higher level language?"

            I never got that argument. Compilers are deterministic algorithms with formally specified behavior. If you understand what a compiler does, you can have a pretty good idea of what it will produce; if it doesn't do that, it's a bug. Correctness is well defined, in terms of semantic equivalence.

            LLMs are none of that. An LLM is a fuzzy system that approximates your intent and does its best. I can make my intent more and more specific to get closer to what I want, but given that it's all just regular spoken language, it's still open to interpretation. All of that is still quite useful, but I don't get the assembly language comparison here.
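
            A toy illustration of that fuzziness (the probabilities are invented): sampling from a distribution can give a different answer on each run, which a compiler, by construction, never does.

                import random

                # Same "prompt", different completions across runs.
                # A compiler maps identical input to identical output every time.
                next_token = {"foo()": 0.55, "bar()": 0.35, "baz()": 0.10}
                tokens, weights = list(next_token), list(next_token.values())
                for _ in range(3):
                    print(random.choices(tokens, weights=weights)[0])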

          • cobbzillaan hour ago
            By extension, does this imply that all the HLL advocates from decades past were shilling for compiler companies?
        • icedchai5 hours ago
          In all seriousness, communications consumes a lot of time. Meetings, emails, Slack messages, pestering stake holders and other developers...
          • hjort-e5 hours ago
            If you spend 95% of your time on that stuff, you better be working on like critical infrastructure where nothing can go wrong, otherwise you are in an incredibly dysfunctional company.
            • icedchai5 hours ago
              I agree it would be absurd for it to take 95% of your time. I have, however, seen that it takes a lot more time than one would think.

              I did some contracting work for a severely dysfunctional meeting heavy organization and it was about 2 hours of meetings for every hour of real technical work!

              • hjort-e4 hours ago
                Ah yes, agreed; if it's more than 90% it just signals to me that a developer's skills are probably being wasted too much on business/coordination stuff.

                But I guess if we mean actual time tapping your keyboard making code, then it's true some days for senior+ devs, but definitely not for technical work overall.

              • fragmede4 hours ago
                So about 26 hours of meetings to 13 hours of "real technical work" per week, but that's 33%, not 5%.
              • skydhash4 hours ago
                Even when it’s not dysfunctional, you spend a lot of time on communication and reading stuff other people wrote (including code). It’s very rare to work in isolation.
                • hjort-e4 hours ago
                  I guess it depends on what you feel coding is. To me it's the architecture planning and reading other people's code, not just writing code. If we say it's just typing, then 95% non-coding is not absurd, no.
                  • skydhash4 hours ago
                    > it depends on what you feel coding is. To me it's the architecture planning and reading other people code, not just writing code

                    And that would be where we disagree. I don’t read code to look at code. When I’m reading code, I’m looking for the contracts to follow when interacting with a system. It would be nice if it were documented, but more often than not you have to rely on code.

                    It’s very rare that I plan with a technical mindset. Yes I use the jargon, but it’s all about the business needs. Which again create contracts.

                    Same with writing code. Code is like English for me. If I don’t have a clear idea on what to write, I stop and do research (or ask someone). But when I do, it’s as straightforward as writing a sentence.

                    • hjort-e3 hours ago
                      Huh? So you don't research whether something is technically feasible before you promise your stakeholders a delivery time / price estimate?

                      We all do the same stuff; the disagreement is just about what you feel coding is, and whether you think technical work is the same thing or a superset. If you as a software dev aren't hands-on with planning or working more than 5% of your time, you are basically a PO with a programming hobby.

                      • skydhash38 minutes ago
                        > So you you don't research if something is technically feasible before you promise your stakeholders a delivery time/ price estimate

                        I believe 99% of requests are not about what’s technically feasible. And the rare time I encountered one of those, my answer has mostly been “you don’t have enough resources to try solving that problem”.

                        If you know your fundamentals well, very often you will find the same common blocks everywhere. People much smarter than me have solved a lot of fundamental issues, and it’s rare that I see a business request that doesn’t reuse the same familiar stuff.

                        That’s why coding is mostly boring. You follow the same patterns again and again. But what dictates the flows are the business parameters. And that’s why most seniors spend so much time gathering good requirements: the code is straightforward after that.

      • varispeed4 hours ago
        The least experienced developers write the most code. Juniors will spend the whole day in the IDE: typing, testing, typing, etc. Senior developers will go to a park for a few hours, think, then come back and spend an hour or less typing code that just works, or write nothing at all, maybe even delete code. Instead they might update documents, ask for clarification about edge cases they found, or flag errors in planning that were not considered.
        • sefrost21 minutes ago
          I don’t know if that’s true for most of us, who simply work in CRUD apps. Maybe I’m in a bubble though.
        • nomel4 hours ago
          Since software is in every industry of man, I think you'll need to mention which industry this perspective is coming from. This is definitely NOT the case in certain industries.
          • varispeed3 hours ago
            Finance, web services, service integration
      • FatherOfCurses3 hours ago
        Sneering at "kids these days"
      • mxksisksm6 hours ago
        [dead]
    • pjmlpan hour ago
      Usually that means you're already a senior developer, understanding things and formulating solutions is part of work delegation.

      Now those juniors whose job is to implement those solutions, they will have a hard time.

      In my 50s, I also don't write as much code as I used to, even less nowadays with serverless, managed services, low code/no code tools, and agent orchestration workflows, and with it I keep seeing development teams getting smaller.

    • nomel3 hours ago
      The perspective here is "lifetime career", so you need to project out 30 years for a meaningful argument.

      I think, much sooner than that, you'll have AI pumping out practically complete implementations that meet the requirements of function, set by the people who desire that function. THOSE people will be the developers, and will be more akin to technical "creatives", more on the product side, than the developer side.

      • AllanSavageDev2 hours ago
        Someday people are going to get tired of "programming in English" with prompts, getting inaccurate output, etc., and someone is going to invent a higher-level kind of CODE that allows the user to directly specify the actions the computer should take to solve the problem. Later someone will invent tooling that COMPILES these CODES into a runnable thing, skipping the prompt part altogether. It might be called something like Unified Prompt Language.
    • AlexCoventry5 hours ago
      You don't think AI is going to be able to understand things and apply their ability to formulate solutions better than you, in the near future?
      • koonsolo3 hours ago
        In 2000 I learned about this old technology called "neural networks".

        AI really depends on long winters and rare breakthroughs. Deep neural networks were the most recent breakthrough.

        The iterations you currently see are just adding more storage, but the fundamental neural network structure doesn't change.

        I'm confident AGI will not be achieved by the LLM architecture, and when the next AI breakthrough will come is anyone's guess. But if you take history into account, it will take a while.

    • rhubarbtree3 hours ago
      If you’re a developer and you were writing code 2% of your time pre-Claude, that’s 9 minutes a day; you will and should be fired.
      • wan235 minutes ago
        Things besides writing code that you might be doing:

        - Meetings

        - Code reviews

        - Manual testing

        - Deployments and more testing

        - Triaging issues

        - WTF how did this bug happen?

        - JIRA in general

        - Whiteboarding sessions / Design docs

        - Interviews

        - 1:1s (mandatory ones)

        - 1:1s (networking / problem solving / political alignment)

        - Whatever your company's version of corporate extracurriculars is

    • dev_l1x_be4 hours ago
      And most of the time the statistical aspect of LLMs results in a less creative solution that is more expensive to run and harder to maintain. LLMs at this stage are good at scaffolding, generating the boilerplate you do not want to write, and gluing things together quickly. It just makes engineers faster.
    • madduci5 hours ago
      Because that is the difference between "developers" and "software engineers". And software engineering isn't going to disappear anytime soon.
      • hellojesus5 hours ago
        Weird. I call myself a developer because I don't have an engineering degree from an ABET-accredited engineering program.

        I recognize, in some capacity, that this isn't the norm and in the US "professional engineer" is protected and not simply "engineer", but it feels akin to stolen valor to me.

        • borski5 hours ago
          If there were a license in the US for it, I’d agree with you. But as is, if you are “doing” engineering, you’re an engineer.

          If you are a licensed engineer of some kind, you’d state that outright.

          The equivalent of stolen valor would be claiming to be a licensed software engineer; except there is no such license so it would also be fraud, misrepresentation, etc.

          (I know this is different elsewhere)

          • VonGallifrey5 hours ago
            > If there were a license in the US for it, I’d agree with you.

            Yeah, that is basically the thing in my country. You can't call yourself an engineer without passing a test, but I can't take it because there isn't one for software engineering.

            Same thing for freelancing. Freelance jobs are defined in a list, and other jobs cannot benefit from the simplified tax rules that freelancers enjoy, but that list was written before software development was a thing.

        • traderj0e5 hours ago
          I call myself a computer programmer unless someone is asking for my official job title (software engineer)
        • bilbo0s5 hours ago
          I'm a software dev in the US and I never call myself "engineer" in that capacity. Always "programmer" or "developer".

          I agree. Engineers have to clear a much higher bar. Even though my career was spent in medical diagnostic software where we had to get 510k clearance, I was still keenly aware that this was a fundamentally different activity from actual engineering.

          • whstl5 hours ago
            I'm an electrical engineer who moved to software engineering, and there are a lot of commonalities between what I do now and what I did previously as an electrical engineer. The bar might seem high, but that's the only way I know how to work, honestly.

            On the other hand, with the modern division of labour in a lot of companies and with the rhetoric I see here in HN and in other places: a lot of developers are indeed not even close to being engineers.

    • hyperjeff5 hours ago
      You’re a "developer", I guess, but not a coder (anymore), which is what your interlocutors are probably asking about. You’ve migrated to a middle-manager job, not something they can probably just start doing competently. Essentially you’re agreeing with their initial sentiment, that coders will be made irrelevant.
      • onethought5 hours ago
        I think it’s more nuanced. Even a “coder” spends the majority of their time, not coding.
    • m4634 hours ago

        - Compilers will make developers irrelevant
        ...
        - Compilers can write assembly language code
      
        - Compilers have -O3 now
      
      etc...

      Maybe we should rejoice. I remember dreading writing documentation, and now I would happily hand that off to AI.

      • geodel4 hours ago
        It is indeed exciting (for you at least). The problem for most people is not that AI is spewing out code and reading documentation while developers do more interesting things. It is that companies are handing the jobs of those developers over to AI itself.

        So those ex-developers are free to do the most interesting things in the world, minus the nice, steady paycheck every month.

    • dawnerd3 hours ago
      The problem is people think AI can replace the 95-98% that isn't code too. That's where we end up with massive unusable codebases that no one understands.
    • timedude4 hours ago
      > Business owners who think they can do without developers because they think LLMs replace developers are fine by me too. Natural selection will take care of them in due course.

      Thing is, natural selection will take care of you at the same time. Because you'll also come to rely on products they make, or services they offer, either directly or indirectly. So eventually, you too, will suffer the consequences of the enshloppification.

    • dakiol4 hours ago
      That doesn't hold because the goal for executives is to increase revenue and the main sales pitch of Anthropic et al is to pay for agents instead of paying for engineers. That means 80% of the workforce is out no matter what. Whether or not one belongs to the remaining 20% is a different story, but obviously not all of us will be there.

      > I understand things and then apply my ability to formulate solutions

      AI is coming for that too. Don't be naive

      • varispeed4 hours ago
        It will be interesting for governments, which use workers as a proxy for taxing corporations.
    • vagab0nd3 hours ago
      This is a valid perspective, but I don't think a useful one.

      Being able to produce code is a huge unlock for many non-programmers. So in a way, it doesn't matter how much time existing developers spend on coding. It's about helping anyone become a developer.

    • jchonphoenix4 hours ago
      You miss the major factor in your compensation: pricing pressure due to supply/demand.

      By removing all the junior engineers, you've fundamentally changed the market forces longer term. Most people expect that to negatively impact you on the supply/demand curve, regardless of whether or not the statements you've made above are true, which they most likely are for senior engineers.

      • fragmede4 hours ago
        In removing junior developers, leaving only senior developers, wouldn't that reduce supply, making the price go up, not down? It's been a while since Econ 101 for me though.
    • rpdillon4 hours ago
      This is exactly it. The speed of light has not changed: we're limited by our ability to understand the system, and make decisions about what to do next. AI will speed that up, but the core work is the understanding and decision-making.

      Saying otherwise is sort of like reducing the task of writing a novel to typing.

    • fnordpiglet4 hours ago
      Something gets missed here: computer science was a highly theory-driven discipline where people were taught how to think critically about solving complex problems. Industry complained they weren’t teaching enough programming skills, so schools dumbed down the thinking part and emphasized the vocational part. Now the vocational part is virtually useless, and the grounding of theory applied to complex problems is suddenly really relevant again. Schools will take time to retool their programs and teaching staff, and two if not three generations of graduates will have entered a work environment that doesn’t need what they learned.

      As someone 35 years into my career, I agree this is the most exciting part of it. I love programming and I do it all the time, but I do it by reading code, course-correcting, explaining how to think about the problems, and herding cats: just like working with a team of 100 engineers. But the engineers I’m working with now by and large listen, don’t snipe me on perf reviews, aren’t hallucinating intent based on hallway conversations with someone else, etc. This team of AI engineers can explain to me their work, mistakes, drift, etc. without ego, and if it’s not always 100% correct, it’s at least not maliciously so. It understands me no matter how complex the domain I reach into; in fact it understands the domain better than I do, so instead of spending a few months convincing people with little knowledge or experience that X is a good idea, I can actually discuss X, explore whether it’s a good idea or not, and make a better-informed decision. I’ve learned more in these discussions than in decades of convincing overly egoistic juniors and managers to listen to me about something I’m an industry authority on.

      However, I see very clearly that we will need very few of the team of 100 human engineers I can leave behind in my work. Some of them will be there in a decade, but maybe fewer than 1 in 10. This is going to be a more brutal time than the dotcom bust for CS grads, and I don’t think it will ever improve, mostly because we simply won’t need the “my parents told me this makes money” people; just the passionate folks will remain. But even then, we face a situation where the value of any software developed is very low, because so much software is being developed. It’s going to turn into YouTube, where the software that is paid for is a very small fraction of the quantity of software developed. We already see this in the last few months in the rate of GitHub projects created. If the value of any software created is low, the compensation of the creator will be low, unless they’re a very rare talent.

    • ryandvm7 hours ago
      I dunno, man. I've been doing this for 20+ years and I think we're at a really important fork in the road where there are two possibilities.

      The first is that AI is achieving human-level expertise and capability, but since the models are now increasingly being trained on their own output, they are fighting an uphill battle against model collapse. In that case, perhaps AI is going to just sort of max out at "knowing everything": agentic coding is just another massive paradigm shift in a long line of technological paradigm shifts, the tooling has changed, but total job market collapse is unlikely.

      The other possibility is that we're going to continue to see escalating AI capability with regard to context, information retrieval, and most importantly "cognition" (whatever that means). Maybe we overcome the challenges of model collapse. Maybe we figure out better methodologies for training that don't end up just producing a chatbot version of Stack Overflow + Wikipedia + Reddit. Maybe we actually start seeing AI create and not just recreate.

      If it's the latter, then I think engineers who think they are going to stay ahead of AI sound an awful lot like saddle makers who said "pffft, these new cars can only go 5 miles per hour."

      • golddust-gecko6 hours ago
        100% this.

        I'll also add another factor: it's become increasingly clear at our company that AI-enabled humans are getting to the bottom of the backlog of feature ideas much quicker. This makes the 'good ideas' part of the business the rate limiting step. And those are definitely not increasing with AI, beyond that generated by the AI churn itself ("let's bolt on a chat experience or an MCP!")

        So maybe the coding assistants don't get a 10x improvement any time soon, but we see engineering job market contraction because there aren't really enough good ideas to turn into code.

        • hibikir5 hours ago
          Yes, but as the price of getting work done goes down, a lot of companies that were priced out of custom software before can now hire devs, as the value a few devs can provide just goes up. Fewer people per product, absolutely. No more teams of 10 or 20 working on the same thing. But there's so much out there that doesn't get done at all because you'd never be able to afford it.

          Simple marginal thinking: When you lower the price of something, it gets more use cases. A rich person might not take even more flights because they are cheaper, but more people will consider flying when they wouldn't have at old prices

      • bborud6 hours ago
        You are supposing it's a given that AI is achieving human-level expertise and capability. I am not so sure. Right now that's much further from the truth than one might think at first glance.
      • WorldMaker5 hours ago
        > max out at "knowing everything"

        LLMs know nothing but are great at giving the illusion that they know stuff. (It's "mansplaining as a service"; it is easier to give confident answers every time, even if they are wrong, than to program actual knowledge.) Even your first case seems wildly optimistic. The second case is a lot of "maybes" and "we don't know how but we might figure it out" that seems like a lot to bet an entire farm on, much less an entire industry of farms.

        We sure are looking at a shift in the job market, but I don't think it is a fork in the road so much as a Slow/Yield sign. Companies are signalling they are willing to take promises/hope to cut labor costs, whether or not the results are real. I don't think anything about current AI can kill the software development industry, but I sure do think it can make the industry a lot more miserable, lower wages, and artificially reduce job demand. I don't think this has anything to do with the real capabilities of today's AI, and everything to do with the perception being enough of an excuse; companies were always looking for that excuse. (Just as ageism has always existed: AI is also just a fresh excuse for companies to carry on aging out experience from their staff, especially people with long enough, or well-schooled enough, memories to remember previous AI booms and busts.)

        But also, yeah if some magic breakthrough makes this a real "buggy whip manufacturer moment" and not just an illusion of one, I don't mind being the engineer on that side of it. There's nothing wrong about lamenting the coming death of an industry that employs a lot of good people and tries to make good products. This is HN, you celebrate the failures, learn from them, and then you pivot or you try something new. If evidence tells me to pivot then I will pivot, I'm already debating trying something entirely new, but learning from the failures can also mean respecting "what went right?" and acknowledging how many people did a lot of good, hard work despite the outcome.

        • anon848736284 hours ago
          I'm skeptical of LLM "reasoning" but they sure as hell know a lot. That's what the embeddings are: a giant semantic relationship between concepts.
          • WorldMaker3 hours ago
              Embeddings are still mostly just vectors in an n-dimensional space, clustered by statistical co-occurrence. It isn't "knowing" two things are related, with evidence to back it; it is guessing two things are statistically likely to be related, based on trained patterns, and running with it without evidence.

            It has no "semantic understanding" as we would define it. It's just increasingly good at winning cluster lotteries because we've increased the amount of training data to incredible heights.
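
              To make that concrete, a toy sketch (the vectors are invented for illustration): all the model has is geometry, so similarity is a statistical score, not evidence about the world.

                  import math

                  emb = {  # made-up 3-d "embeddings"; real ones have hundreds of dimensions
                      "cat": [0.9, 0.1, 0.0],
                      "dog": [0.8, 0.2, 0.0],
                      "carburetor": [0.0, 0.1, 0.9],
                  }

                  def cosine(a, b):
                      dot = sum(x * y for x, y in zip(a, b))
                      na = math.sqrt(sum(x * x for x in a))
                      nb = math.sqrt(sum(y * y for y in b))
                      return dot / (na * nb)

                  print(cosine(emb["cat"], emb["dog"]))         # high: co-occur in training data
                  print(cosine(emb["cat"], emb["carburetor"]))  # low: rarely co-occur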

          • wiseowise3 hours ago
            Encyclopedia and Wikipedia know a lot too. Knowledge isn't much of use on its own, it's about how you use it.
          • koonsolo3 hours ago
            I agree with you, but a big drawback is that the accuracy or confidence of their output can't be estimated.

            So they surely know a lot, but you are never sure if the info is correct or not.

      • koonsolo3 hours ago
        Do you think the latter can be achieved with the LLM neural network architecture? I highly doubt it. Neural networks are very old tech, and it took us that long to get us here.

        I'm sure we'll reach AGI at some point, but looking at AI history, I don't see that coming any time soon.

    • xhevahir4 hours ago
      The "apply my ability" is doing a lot of work, so to speak, in the above exchange. Work that might eventually well be automated away.
    • bdangubic3 hours ago
      > Yes, about 2-5% of the time. Less now.

      I spent the 2nd half of my 30-year career fixing organizations and processes where this was the case. So many things are wrong in places like that (or, alternatively, you need a different job title :) )

    • atoav6 hours ago
      Saying being a programmer is about writing code is a bit like saying being an artist is about drawing lines on a canvas.

        Yeah, technically drawing lines on canvases may be a very important part of being a painter, but it is hardly the core of what makes or breaks great art.

    • insane_dreamer6 hours ago
      What you described are senior developers and system architects.

      Junior developers spend most of their time writing code (when they're not forced to attend pointless standups, because Agile/blah/blah)

      > The developers who still think their job is about writing code will perhaps not have a job in the future.

      So you're saying the same thing everyone else is saying. SWEs won't go away, but they will be greatly reduced, because those whose job is about writing code -- junior devs -- will be replaced.

      (How will Sr Devs in the future be created? That's the question, isn't it.)

      • vineyardmike5 hours ago
        > How will Sr Devs in the future be created?

        As an extreme example, maybe we’ll see long-running internships and trainings like doctors experience. Doctors don’t start their career until ~12+ years of prep and training.

        Pragmatically, software development has a lot of examples of teenagers making apps and college students building software companies. In the 12 years it takes for training, low-knowledge workers could be continuously vibe coding replacements for most of the commercial software products they’d be hired to build. So I doubt we’ll treat software development as a rarefied high-skill job.

    • boring-human5 hours ago
      The true argument is about quantity - of people, not code. All qualitative arguments are missing the point.
    • bluegatty3 hours ago
      This is maybe a bit myopic.

      Dude - look what happened in the last 2 years on software.

      Now project out another 10.

      I totally agree with you 'as of now, in the current paradigm'.

      But that could very well change.

    • izacus5 hours ago
      Note that just because you know the job is understanding things, the manager who'll boot you and leave you without income probably doesn't. They'll just get their political cookie points for saving money by replacing you with AI.
    • jstummbillig3 hours ago
      > Multiple times per week I have the same conversation.

      Really? I mean, good on you if it's true and you like the attention, but that sounds like an implausible amount of interest in someone and their relatively mundane profession.

    • coldtea5 hours ago
      >- I understand things and then apply my ability to formulate solutions

        - Well, and AI can do part of that too, maybe more of it soon.
        - ...
        - Besides, you don't need 10 guys in a team to do that. A couple of them will do, then AI will do the coding. What will happen to the rest?
        - ...
    • doctorpangloss3 hours ago
      In my community almost all problems are political. "Problem solving ability" matters if you are in HFT, but everything else? Math can't tell you the best way to use land, educate a kid, pay for healthcare, prioritize biotech research, set a minimum wage, or decide congressional maps: all sorts of stuff that I actually pay for or care a lot about. In fact I think you are totally misinterpreting what people are saying to you; you are 200% wrong: the 2-3% of your time spent coding was the valuable part, and your so-called problem solving ability rarely solved any real problems.
    • foldr5 hours ago
      I think the future is pretty up in the air in this respect, but my guess is that AI will just lead to another shift in the set of knowledge that a 'real programmer' is expected to have. I'm old enough to remember when people would make fun of web developers for 'programming' using HTML and JavaScript. And of course, back in the day, you couldn't be a real programmer unless you wrote assembly language. In a few years' time, being able to write (as opposed to read) source code in any specific programming language will probably become a niche skill. The next generation will be able to read Python to about the same extent that I can read x86 assembly.

      Perceptions of what knowledge counts as 'low level' are constantly shifting. These days, if you write C, you're a low-level, close to the metal programmer. In the 70s, a lot of people made fun of Unix for being implemented in a high-level programming language (i.e. C) rather than assembly.

    • keybored6 hours ago
      Pure wage workers should consider dropping the attitude that tech progress will just put their inferiors in the same line of work out of a job (hrmph, good riddance, etc.). Because this pseudo-progress could creep up on them as well.

      Then you won’t have this just world of the deserving workers at all. Just formerly deserving workers and idiot billionaires like Musk (while the robots do all of the work).

    • jmyeet2 hours ago
      This is an example of survivor bias dressed up as general advice that doesn't consider the entire ecosystem. And we need look no further than what's happened in Hollywood with writing in particular.

      The general progression of a Hollywood writing career is from PA (production assistant), which often starts off as a volunteer "intern" position, to writer's assistant. Assistant here usually means doing any menial task anyone wants, from fetching drycleaning to taking a dog to a grooming appointment. When you're a writer's assistant, you will often spend time in a writer's room. You will see how the process works. You probably won't contribute anything, but you may get feedback on things you've written from whomever you're working for.

      The next step is as a staff writer. You will be paid to produce scripts and stories for a TV show, for example. That writer's room will have a head writer. On a TV show the head writer is almost always the showrunner. The showrunner is effectively the leader of the entire project and is responsible for breaking up a season into storylines and making sure those scripts make sense as a collective. They might write one or more of those scripts, or maybe not. The showrunner will hire directors for each episode.

      The path from staff writer to showrunner often goes through being a producer. Producers are responsible for a lot of the logistics of filming a show. Hiring extras, finding locations, coordinating stunts and costumes and making sure the director has everything they need.

      As part of all this, in the 22 episode TV era, writers would often end up spending time on set while the show is being filmed. They'd learn from the process.

      Every part of this was necessary. Those writers on set are your future producers and showrunners.

      So what's happened in the streaming era is that writer's rooms got smaller (so-called "mini writer's rooms"), maybe only the showrunner is ever on set, the writers have stopped working by the time filming even begins, and you might only be doing 8-12 episodes. On a 22-episode season, that one job could support you. 8-12 episodes can't.

      But you see how this all breaks down when writers can no longer support themselves, they're no longer being trained to be future producers and showrunners, there's no feedback from set back to the writer's room and you end up with 3 year gaps between seasons. The only reason for all of this is because it's cheaper.

      So, you may be a staff engineer who tech leads dozens of other engineers. You're not formally a manager or director but you have a lot of influence over the entire project. But how did you get there? You started as a junior engineer being told what to do. You got to see how other leaders operated. You became responsible for more and more things. You might start by fixing bugs under supervision, move to managing a feature, then an entire project, and so on.

      So what's going to happen here is (IMHO) we will have years of the software engineer space shrinking. There'll be very little entry-level hiring. Layoffs will reduce the entire workforce and there'll be a few tech leaders who hang on because they still produce value. Some of them will probably discover they don't produce enough value and they'll go too.

      But where do the future tech leaders come from in this scenario? AI is being used as an excuse to kill the entry-level pipeline and if you go around and say "git gud" [sic] then I'm sorry but you just don't understand the impact of what's happening or you don't care because, at least for now, you're simply not affected.

      You see the same thing with people who espouse the myth of meritocracy. Well, if a given workforce shrinks by 50%, half those people are, by definition, not going to survive. An individual may be able to reskill or skill up to survive but not everyone can. And that's how people end up in Amazon warehouses. At least until they're no longer needed there either.

    • surgical_fire6 hours ago
      I normally say that I have zero concerns regarding AI in terms of employment. At most I am concerned with learning the best practices of AI usage to stay on top of things.

      Its ability to write code is alright. Sometimes it impresses me, sometimes it leaves me underwhelmed. It certainly can't be left to do things autonomously if you are responsible for its output.

      Moderately useful tool, but hellishly expensive when not being subsidized by imbeciles that dream of it undermining labor. A fool and his money should be separated anyway.

      What I am really concerned about is the incoming economic disaster being brewed. I suspect things will get very ugly pretty soon.

  • hibikir5 hours ago
    In my experience, it's been the complete opposite. The very experienced engineers that are actually willing to use top of the line tooling are much better than they were before, including those that are over 40, and over 50.

    Part of the practical degradation of traditional programmers over time has always been concentration and deep calculation, just like in chess. The old chess player knows chess much better than a 19 year old phenom, but they cannot calculate for that many hours at the same speed as before, so their experience eventually loses to the raw calculation. Maybe at 35, or at 45, but you are just not as good. Claude Code and Codex save you the computation, while every single instinct and 2 second "intuition", which is what you build with experience, is still online.

    It's not just that it's a more fair competition: It's now unfair in the opposite direction. The senior that before could lead a team of 6 is now leading a team of agents, and reviewing their code just as before. Hell, it's easier to get an agent to change direction than most juniors around me, who are not easy to correct with just plain, low-judgement feedback.

    • bel85 hours ago
      But when a senior can do the job of 6 coworkers, what do you suppose will happen to the coworkers?

      In farming, those who were replaced by tractors did not keep their jobs. What is different now?

      • jhrmnn4 hours ago
        Nothing, it’s that same story again. Industrialization turned peasants to blue collar workers by mechanizing agriculture. Then blue collar workers were turned to white collar workers by mechanizing all manual labor. Now AI is coming for white collar workers by mechanizing intellectual labor. The big question is what will white collar workers turn into.
        • ahel3 hours ago
          1. People *across generations* had to skill up. 2. Software being very opaque (very unlike agriculture/mechanized labour) is, imo, linked to having a plethora of support roles that cannot write software but "handle the human part" and help make it readable, while spreading accountability. Hopefully, with "more readable/more standardized" software development, those [product|project|people] management roles can stop being a drag/bottleneck. (Code was never the bottleneck, as we've repeated since time immemorial.)
        • siriusastrebe2 hours ago
          The globalized economy has demonstrated that a single country can supply the majority of the world's manufacturing needs, at least for a while.

          Taiwan creates most of the world's semiconductors. China makes the majority of everything else. Silicon Valley created a majority of the tech market's value.

          But there's a cap where the world has enough stuff at least in the short term, and growth slows.

          Humans only need a certain amount to survive. With populations leveling out, industry will shift from servicing human needs, to the needs of corporations and other industries. Consumers will become a minority in the future economy.

          What will corporations value in the future, that they're willing to spend on recurring human capital expenses? I think the answer will always be: the tasks that will help companies grow.

      • bluesnowmonkey3 hours ago
        With farming, you couldn't just start your own farm, because it requires farmland, and there's only so much of that. But those 6 software engineers can start their own companies, fire up their own team of agents. There's no limit to how many companies can exist in the world.
        • forlorn_mammoth3 hours ago
          and buy their own health insurance, and find their own customers, ...
        • ryandrake3 hours ago
          "Just start your own company!" - Hacker News
          • ge963 hours ago
              It makes sense: go bankrupt, start another LLC.

              No, I've watched/listened to a lot of entrepreneurial stuff since 2016 and I still haven't launched my own product. There's a YT channel "Starter Story"; it's like "this person makes $100K/mo, here is the template".

            It really is simple though, put a paypal button on a squarespace page and ask someone to pay it.

          • bluesnowmonkey2 hours ago
            Yeah pretty much. Have you seen Polsia and its ilk? Maybe "trivial" would be too strong a word but... in 2026 it's not hard.

            That's my point. You couldn't tell an unemployed farm worker to go start their own farm. They probably don't have the land or substantial capital it takes. But an unemployed software engineer just doesn't need anything like that to go into a business built on AI.

      • koonsolo2 hours ago
        There are jobs with limited work, and jobs with unlimited work.

        Since your farming land is limited, after the job is done, there is no more work.

        For software projects, there is always more work to do. It's an arms race between competitors. Imagine you fire developers to maintain your speed, and your competitor keeps their people to go faster. Good luck to you!

        • sulsan hour ago
          Great way of putting this. I certainly feel like this in smaller companies where each action (or inaction) has a direct consequence on the profitability.
      • Schiendelman5 hours ago
        They build tractors, or sell tractors, or work in agricultural research and development...
        • bel84 hours ago
          I highly doubt that a significant portion of farm labor became salesmen or researchers. Builders? I could see that, but robots already replaced a portion of those too.
          • Schiendelman2 hours ago
            That's how it always happens. Technology advances, and there are more jobs created than were displaced. That's why technology keeps getting better AND the number of jobs keeps increasing with population.
          • therepanic4 hours ago
            Well, if that's the case, then in your concept the issue isn't what will happen to the programmers, but rather to all the work in general.
          • kingleopold4 hours ago
            Less job creation is almost certain for tech, but some people with high IQ get way more things done; they already do. This will spread to robots and other areas, because robots are not autonomous yet; maybe it will take decade(s). But meanwhile a few operators will lead them in a more productive way? That's my bet. It's a clear, logical process with iterations. A lot of things are getting faster with AI, except energy production in some places in the world!
      • logicchains3 hours ago
        >what do you suppose will happen to the coworkers?

        They need to go into business for themselves, and become capital owners, who benefit from AI, not workers who are replaced by it. AI won't be able to compete at entrepreneurship unless robots are given autonomy and property rights like humans, which is quite unlikely to happen any time soon.

    • deadbabe2 hours ago
      Stop with the anthropomorphizing, stop playing house. There is no team of agents, they are just tools and processes running in a computer for Christ’s sake. Blow away that one senior engineer and you have nothing. Better to have a team of 6.
      • arcanemachiner2 hours ago
        You are staring at glowing pixels on a screen, then getting angry and pressing on pieces of plastic or a slab of glass to express your emotions in response.

        You live in a world of ever-changing metaphors. Get used to it.

    • QuercusMax3 hours ago
      For me - I'm 43, and used to be an extremely productive Java/Swing developer after 15 years of experience, and I knew all my tools inside and out. But I no longer work at that company (which doesn't exist any more), and it takes me a lot longer to learn how to be effective with the new tools I'm using simply because I haven't had a decade to learn the ins and outs of the new environments I'm working with.

      So AI saves me immense amounts of time figuring out how to write proper syntax, remembering the ins and outs of unit testing frameworks, etc. If I stick around for a year or three I'm sure I'll get much much faster and learn these tools better.

  • Teknoman1175 hours ago
    > AI-users thus become less effective engineers over time, as their technical skills atrophy

    Based on my experience, I think this will prove more true than not in the long run, unfortunately.

    Professionally, I see people largely falling into two camps: those that augment their reasoning with AI, and those that replace their reasoning with AI. I’m not too worried about the former, it’s the latter for whom I’m worried.

    My mom is a (US public school) high school teacher, and she vents to me about the number of students who just take “Google AI overview” as an absolute source of truth. Maybe it’s just the new “you can’t cite Wikipedia”, but she feels that since the pandemic, there’s a notable decline in the critical thinking skills of children coming through her classes.

    We have a whole generation (or two) of kids that have grown up being told what to like, hate, believe, etc. by influencers and anonymous people on the internet. They’d already outsourced their reasoning before LLMs were a thing. Most of them don’t appear to be ready to constructively engage with a system that is designed to make them believe they are getting what they want with dubious quality.

    • Ancalagon4 hours ago
      > My mom is a (US public school) high school teacher, and she vents to me about the number of students who just take “Google AI overview” as an absolute source of truth.

      I notice many of the adults in my life are doing this now as well.

  • giobox7 hours ago
    Anecdotally, it feels like something materially changed in the US software hiring market at the start of this year to me. It feels like more and more businesses are taking a wait and see approach to avoid over-investing in human capital in the next few years.

    It also feels like the hiring "signal", which was always weak before, is just completely gone now, when every job you do advertise receives over 500 LLM-written applications and cover letters that all look and feel the same.

    The pro-athlete comparison in this article is a bit silly IMO - there are obvious physical issues that occur with aging if you rely on your muscles etc. to make money. If you compare to other fields of knowledge work, such as say law or medicine, there are loads of examples of very experienced, very sharp operators in their 40s and 50s.

    • tuesdaynight2 hours ago
      Anecdotally as well, but I believe that US companies are hiring in big waves in India and other developing markets. I know people who went from zero contacts to daily messages from big tech recruiters in these countries. I've seen people saying that is the result of that specific US section that expired last year, but I'm a layman, so I don't have the knowledge to debate the reasons.
    • jayd167 hours ago
      Honestly, AI doesn't feel like it's affecting hiring needs from the trenches. We don't have engineers sitting on their hands because AI wrote up everything the leadership could imagine.

      Instead, we get an economy that feels like it's on the cusp of a fall or at the very least a roller coaster. Poor tax incentives to hire. ZIRP is long gone. And the hiring managers are overrun with slop.

      But bosses are happy to say it's AI because that makes you sound in control.

      • mainmailman5 hours ago
        Thank you, it's been all but confirmed that a lot of "AI layoffs" are due to reaching a workforce equilibrium after the Covid-era overhiring.

        Saying AI for anything, good news or bad news, is a get out of jail free card for execs who want to appease shareholders.

        • conductran hour ago
          Who did they overhire? Like, Covid didn't just increase the number of people in the field. Prior to Covid there was a so-called talent shortage. The hiring that did occur was mostly net zero in aggregate. Workers got poached, grads got hired, compensation went up too far. And that's what they mean by overhired: they overpaid. They now see the benefits of hiring cheaper talent on another continent. Cheaper talent that can use the same tools as you are going to use. AI equalized a lot of talent; US labor doesn't have the edge it once did. In a sense, this should have been happening at higher rates much earlier than it did, but for some reason investors saw value in paying big salaries to smart people in one part of California for a very long time. Now, the thing that should have happened is happening. And they also realize it's not limited to California; turns out even salaries in Alabama are high compared to other parts of the world.

          There is also much more productivity. But I’m not sure it’s really a driving force yet as with the new productivity people are still just trying to do more with it which doesn’t translate to efficiency. Yet. It might once AI loses its wow factor and is just status quo. I feel like this is fast approaching but still may be a few years away.

    • Daishiman6 hours ago
      > Anecdotally, it feels like something materially changed in the US software hiring market at the start of this year to me. It feels like more and more businesses are taking a wait and see approach to avoid over-investing in human capital in the next few years.

      My guess is companies overhired in COVID and between that experience and an uncertain market they don't want to make the same mistake twice.

      • dominotw6 hours ago
        Where did the excess labor force suddenly materialize from during covid?
        • jjmarr6 hours ago
          the "learn to code" campaign began ramping up in 2013. If you started undergrad in 2016 you would've graduated right into the covid market.

          https://en.wikipedia.org/wiki/Learn_to_Code#Policy_impact

          I think the hype peaked around 2016, when Democrats were portrayed as out of touch for saying laid off coal miners could just "learn to code". By 2019 it was a cliché used to mock laid off journalists on Twitter.

        • vineyardmike5 hours ago
          2008 had ~30k CS graduates.

          2015 had ~50k CS graduates.

          2021 had ~100k CS graduates.

          You can extrapolate the rest.

          • dominotw5 hours ago
            that's only a fraction of all the layoffs
            • Terr_4 hours ago
              Someone may graduate with that degree only once in their lifetime--or not at all--and be laid off multiple times. :p

              We might be able to make a flow-comparison for "entering the field" versus "exiting the field forever", but layoffs don't really measure the latter.

        • francisofascii3 hours ago
          Anecdotally, our firm's Covid hires were just okay. The recent hires are better. So my hunch is the weaker CS candidates were able to get jobs back during Covid, while today, they are left out.
        • LPisGood6 hours ago
          This is a great question that rarely gets answered. It's partially that a ton of students went to school for computer science because they saw how much money could be made; another fraction is people that switched into software from related fields, maybe with a boot camp or something.
        • shimman6 hours ago
          It didn't. The elites never want to admit that they have failed to efficiently use capital for the last 40 years. It's always the fault of workers that should never be trusted. Just continue trusting the elites as they ruined US manufacturing jobs, surely the same institutions won't fail the workers again!
  • harimau7777 hours ago
    I keep reading about how AI will be fine because people can just retrain for different careers. However, I never read what those careers are or who is going to pay for retraining.

    I certainly don't have the money or time to go back to college and start a new career at the bottom.

    • adjejmxbdjdn7 hours ago
      The argument is that “that’s what always happened in the past”.

      Which is true, but it’s true as long as it’s not true.

      The classic example of how drastically this kind of thinking can fail is Malthusian theory, that populations would collapse because food growth was linear while population growth was exponential. This was true for all of history until Malthus actually made this observation.

      At a mechanistic level, the "we have always found other jobs" argument misses that the reason we've always found other jobs is because humans have always had an intelligence advantage over automation. Even something as mechanical as human inputs on an assembly line was eventually dependent on the human ability to make tiny, often imperceptible, adjustments that a robot couldn't.

      But if something approximating AGI does work out, human labor has absolutely no advantage over automation so it’s not clear why the past “automation has created more human jobs” logic should continue.

      • rayiner7 hours ago
        > Which is true, but it’s true as long as it’s not true.

        It also isn't true. The story of the last 50 years has been that technology, especially computer and communications technology, has facilitated the concentration of wealth. The skilled work got computerized, or outsourced to India or China. That left U.S. workers with service jobs where they have much lower impact on P&L and thus much less leverage.

        In my field, we used to have legal secretaries and law librarians and highly experienced paralegals. They got paid pretty well and had pretty good job security because the people who brought in the revenue interacted with them daily and relied on them. Now, big firms have computerized a lot of that work and consolidated much of the rest into centralized off-site back-office locations. Those folks who got downsized never found comparable work. They didn't, and couldn't, go work for WestLaw to maintain the new electronic tools. The law firms also held on to many of them until retirement or offered them early retirement packages, and then simply never filled the positions. It used to be a pretty solid job for someone with an associates degree, and it simply doesn't exist anymore.

        The only thing keeping the job market together is the explosion in healthcare workers. My Gen-Z brother and sister in law are both going into those fields. In a typical tertiary American city, the largest employers are the local hospital and perhaps a university or community college. Both of those get most of their revenue directly or indirectly from the government. It's not clear to me how that's sustainable.

        • vips7L3 hours ago
          Healthcare really seems like the only safe direction anymore. They're needed and a human is still required to physically do it.
          • upupupandaway3 minutes ago
            My wife is a nurse and keeps in touch with her school professors. They said that the number of people flooding into healthcare careers is more than most colleges can handle, and is starting to cause a supply glut in some roles.
        • bilbo0s5 hours ago
          >It's not clear to me how that's sustainable

          If it makes you feel better, I'm pretty sure it isn't sustainable. (But I'm not an economist so take that with a block of salt.)

          I don't think anyone has the answers. It's just some of us are honest enough to concede we have no answers, while others promote an answer that aligns best with their belief system.

          "It'll all work out."

          "It's the immigrants/blacks/jews/whatever dragging us down."

          "Nothing's going to happen and we can all continue doing the work we always have."

          "Burn the rich."

          Etc etc.

          Not a lot of serious attempts out there at even getting a handle on the issues, let alone fixing the issues.

      • nemomarx7 hours ago
        I'm also pretty sure in the past industrial transitions, many of the people who lost their jobs at the start of the change never found better ones. It took a generation or so for new opportunities to really be found and fine tuned and you're competing for those new roles with younger people anyway.

        If ai does take a lot of white collar work, is it a lot of comfort that maybe jobs in a very different sector will be better in 20 years?

        • rayiner7 hours ago
          Did the younger people find better jobs? You used to have all these jobs for people who were maybe a bit smarter than average with good judgment. In the 1990s, the local community college used to advertise associates degrees for paralegals. That's a job that doesn't exist in the same way anymore thanks to computers. Now it's become an internship for kids with top credentials before they go to law school. Which is fine for them, but what about everyone else?

          It seems to me like all of these people are flocking now to healthcare fields. That seems totally unsustainable.

          • joe_mamba5 hours ago
            >It seems to me like all of these people are flocking now to healthcare fields. That seems totally unsustainable.

            Why? There will never be a shortage of sick/dying people. So medical staff, and also undertakers, aren't going anywhere.

            • rayiner4 hours ago
              Because most healthcare spending comes from tax dollars.
              • jhrmnn3 hours ago
                Is this a different route to the universal basic income scenario?
            • deflator4 hours ago
              My understanding is that healthcare keeps growing because the large Boomer generation is aging. When they have passed, though, we should see a corresponding slide in healthcare growth.
        • marcosdumay6 hours ago
          Not in all past industrial transitions.

          But yes, the argument has been wrong often enough that the people still repeating it as a rule should be mocked and ashamed.

      • bobthepanda7 hours ago
        It’s also not that true, and highly dependent on a lot of factors.

        Anecdotally I see a lot of schadenfreude online about tech jobs after a decade or two of lecturing everybody from Appalachians in coal country, to Midwestern autoworkers, that they should just “learn to code.”

      • rurp6 hours ago
        Totally agree, and would add another way “that’s what always happened in the past” is a terribly weak argument. Things might have always worked out at the societal level so far, but very often do not at the individual level. Countless successful craftsmen have had their livelihoods ruined by technological changes and spent their remaining years impoverished. How many people funding AI would be willing to throw their own life away for the good of some future strangers that may or may not be born? I'm pretty sure the answer is <=0.
      • antisthenes7 hours ago
        > The classic example of how drastically this kind of thinking can fail is Malthusian theory, that populations would collapse because food growth was linear while population growth was exponential. This was true for all of history until Malthus actually made this observation.

        Malthusian observation can still be true... It only has to be true once, and the only reason people say it isn't right now is due to industrial fertilizers and short memories.

    • HumblyTossed7 hours ago
      > However, I never read what those careers are or who is going to pay for retraining.

      There aren't any careers and if there were, you would have to pay. Corporations certainly won't, except under extremely rare situations where they have to in order to compete.

    • rayiner7 hours ago
      It's not going to happen, just as it didn't happen for skilled industrial workers whose jobs got outsourced to China. The government will pay just enough in welfare to keep the situation manageable. Then they'll demonize you in the culture, as a Luddite, etc.
    • bdcravens7 hours ago
      The same is true of the industries that software disrupted.
    • ahartmetz7 hours ago
      The second part seems obvious to me: the ones who are getting retrained will pay. If it's some kind of formal education then, depending where you are, maybe the state will pay for at least part of it.

      Education in what, though? No idea. And if there was one answer, and it's true that there will be fewer software developers, you'd likely be competing with many people for few jobs.

      • radiator2 hours ago
        Start with the basics, perhaps? Languages, Mathematics, Geography, Economy, History.
    • insane_dreamer6 hours ago
      > I never read what those careers are

      Exactly. I have yet to read a single logically sound argument that even gives a hint of what those professions/jobs might be (remember, they have to be plentiful enough to employ large numbers of people, so "I quit my corporate job and am making more as a TikTok influencer" doesn't count). Remember that a new profession has to open up new, hitherto unknown revenue streams, otherwise there are no companies who will pay you.

    • kypro7 hours ago
      Also, it's not necessarily true that there will be other great careers available. This seems to just be an assumption people are making.

      Of course, there are jobs that will still require human labour for some time yet, but in reality a lot of jobs that require physical human labour are now done in other parts of the world where labour is cheaper.

      Those which cannot be exported, like plumbing or waitressing, only have limited demand. You can't take 50% of the current white-collar workforce and dump them into these careers and expect them to easily find work or receive a decent wage. The demand simply does not exist.

      Additionally, at the same time as white-collar jobs are being lost, an increasing number of "low-skill" manual labour jobs are also being automated. Self-checkout machines mean it's harder to get work in retail, robotaxis and drone delivery will make it harder to find work in delivery and logistics, and robots in warehouses will make it harder to find warehouse jobs.

      It seems to me there is an implicit assumption that AI will create a bunch of new well-paid jobs that employers need humans for (which means AI cannot do them) and which cannot be exported abroad for cheaper. What well-paid jobs would even fit the category of being immune to AI and immune to outsourcing? Are we all going to be really well paid cleaners or something? It makes no sense.

      A lot of the advice we're seeing today about retraining as a construction worker or plumber seems to assume that there's an unlimited demand for this labour, which there simply is not. And even if hypothetically there were about to be a huge increase in demand for construction workers, it would take years to even have the machinery, supply chain and infrastructure in place to support the millions of people entering construction.

      The most likely scenario is that people will lose their jobs and will be stuck in an endless race to the bottom fighting for the limited number of jobs that are left in the domestic economy while everything else is either outsourced or done by robots and AI.

      The better advice is to start preparing for this reality. Do not assume the government will or can protect you. When wealth concentrates, corruption becomes almost inevitable, and politicians have families to look after too.

      Please take this seriously. Even if I'm wrong, it's better to prepare for the worst rather than to assume everything will be fine and you'll be able to retrain into a new well-paid career.

      • tavavex38 minutes ago
        > A lot of the advice we're seeing today about retraining in construction worker or plumber seems to assume that there's an unlimited demand for this labour which there simply is not.

        I think that most advice like this is individual - not systemic. We all won't fit into the remaining fields when white collar work gets less demand, but someone who's just pivoting now still could. There's no systemic solution that will actually be implemented. The only advice left to give to people is to not be too late. There's only so many people that can be trained to do this range of work (has a physical component that is difficult to automate + can only be done here + has an education/certification moat) just based on spots in educational programs, and they'd probably be better off getting on that sooner than later if they think that their current job is going to be in the crosshairs soon.

      • jhrmnn2 hours ago
        When no work is safe from mechanization, surely the value of labor relative to capital must fall, and the societal pressure for redistribution will rise. The ultimate outcome of technological progress is either extreme inequality or massive redistribution.
        • tavavex34 minutes ago
          I feel like by the time the societal pressure starts rising in any major way, it won't matter anymore. By that point, the people who will profit will become gods with a prepared response for every action the lower classes could take.
      • whodidntante4 hours ago
        +1

        50% of the workforce was in farming near the end of the 1800's; today, 2%. 40% of the workforce was in manufacturing early to mid 1900's; today, 8%. 60+% of the current workforce is white collar. What will it be in 20 years?

        LLMs are only a couple of years old; we have no idea where this will go. Maybe it will be a big hallucination, maybe we are looking at the very early version of farm and manufacturing machines.

        The ENIAC was larger than a person, we now have watches that are significantly more powerful. Maybe in the future, your Apple watch will have more compute than several racks of H100's.

        When they came for the farmers, no one else cared - everyone got cheap and bountiful food. When they came for the manufacturers, no one else cared - everyone got cheap and bountiful products. Now they are coming for the white collar workers, and their highly paid laptop lifestyles.

        Who is left to care ? The billionaires ?

      • Mezzie2 hours ago
        > Are we all going to be really well paid cleaners or something?

        I work for a corporation that includes cleaning brands and I've got bad news...

    • keybored7 hours ago
      The most future-proof “career” right now is having money. At least multiple million dollars. That’s a skill that is very much in demand.
      • RealityVoid5 hours ago
        Whoo, deff a field where I would try breaking in.
    • djeastm7 hours ago
      At least in the US, the only major non-AI growth field seems to be healthcare to deal with the swell of baby boomers living longer than people have before.

      But if we're waiting to be paid to retrain there, I wouldn't hold our collective breath.

      • bluGill5 hours ago
        Baby boomers have already started the phase of dying, though. The next generation is still going to be right there; that generation is just smaller. People will always be dying. However, I wouldn't hold my breath if you're a young person in that field. Maybe, but maybe not.
    • realharo7 hours ago
      Yeah, that's just a copium answer from people who simply want to hand wave away the issue instead of admitting they have no good answers.

      Like a politician who's asked about this in a town hall, but thinks that "our plan is to do absolutely nothing" doesn't sound very appealing.

    • agentultra7 hours ago
      This is the story that's been written since the Luddite revolts, as far as I know. The successors in that case were the capitalists who spent a significant amount of time and money convincing the constabulary and political figures to side with them. People were shot and jailed in the worst cases. The best case, workers were left without work or sent off to work-houses where they became indentured servants to the state.

      The last work-house closed in the 1930s.

      That all started not because people were afraid jobs were going to be replaced by the new loom. People had been using looms for centuries. They were protesting working conditions: low wages, lack of social protections when people were let go, child labor, work houses, etc. There were no labor laws at the time to protect workers... but there were these valuable new machines that the capital owners valued greatly. The machines were destroyed as leverage: a threat.

      Since the capitalists ultimately "won" that conflict it has been written, by technocrats, that technological progress is virtuous and that while workers will initially be displaced the benefit to society will be enough such that those displaced will find productive work elsewhere.

      But I think even capitalist economists such as Keynes found the idea a bit preposterous. He wrote about how the gains in productivity from technological advances aren't being distributed back to workers: we're not working less, we're working more than ever. While it isn't about displacement of workers, it is displacement of value and that tends to go hand in hand.

      I think asking, "Where do I go?" is a valid question. One that workers have been trying to ask since the Luddites at least. Unfortunately I think it's one that gets brushed under the rug. There doesn't seem to be much political will to provide systems that would make losing a job a non-issue and work optional.

      That would give us the most leverage. If I didn't have to work in order to live I could leave a job or get displaced by the latest technological advancement. But I could retrain into anything I wanted and rejoin the work force when I was good and ready. I wouldn't have to risk losing my house, skipping meals, live without insurance, etc.

    • dominotw6 hours ago
      AI cannot create art by itself.
      • dugidugout2 hours ago
        Hopefully the capital owners care to tell the difference.
    • phyzix57617 hours ago
      I think the idea of being an employee is fundamentally changing. Not saying it's good or bad, but it's shifting to a more entrepreneurial phase where people have to step out of their 9 to 5s and find ways to deliver value that others want to pay for.

      We saw this pre-ai with uber and door dash. I think as AI automation dies down and most companies are competing at a near optimal level with the new tools we'll need humans again in more traditional roles to build the next generation of innovations. And then the whole cycle will repeat.

      • mancerayder5 hours ago
        That's a lot of SV-speak. How exactly do people step into an entrepreneurial phase? They're at work in corporate settings with fixed, defined roles. Most workplaces are not many-hat-donning startup environments, but restricted roles where there are deliverables, deadlines, meetings, etc. Which leaves only out-of-hours time for "entrepreneurship", whatever that is.

        Github project work on the weekends? That's not possible for most people in their mature/family years (or shouldn't be necessary - what about living life??)

        • ProfessorLayton3 hours ago
          >That's a lot of SV-speak. How exactly do people step into an entrepreneurial phase?

          Almost half of U.S. employment is from small businesses (250 or fewer employees). That means there's a lot of entrepreneurship happening already. I have lots of family running their own small businesses (trades), and it's a lot of work, and doesn't necessarily pay as well as a cushy corporate job, but what I'm trying to say is lots of people can and do start their own enterprise.

          Yes, lots of them will fail at running their own business, but it's not like corporate jobs are getting any safer either.

        • Terr_an hour ago
          > That's a lot of SV-speak. How exactly do people step into an entrepreneurial phase?

          Oh, you simply decide to use grit and willpower to pull yourself up by your bootstraps, placing some calls to people you met at certain parties aided by a small 6 digit loan from your family. /s

          I see "employees should be more entrepreneurial" as a kind of victim blaming, and I'm especially cynical if the concept arrives via groups that spent the last several decades putting up barriers to entry, drafting non-compete contracts, capturing regulators, and basically shutting out entrepreneurship.

      • lowmagnet7 hours ago
        Uber and Doordash are both examples of abusing workers and their resources to externalize costs on the worker.
        • phyzix57615 hours ago
          What about people who have been out of work for a year and all they can do right now is deliver for Uber and Doordash so they can make rent and put some food on the table?

          Are the working conditions ideal? No, but it's better than nothing: you can set your own hours, and you can leave when the next opportunity comes.

          • RealityVoid5 hours ago
            His point is that it's not entrepreneurship, it's employment.
      • RealityVoid5 hours ago
        > We saw this pre-ai with uber and door dash.

        Oh, yeah? Did the Uber drivers and door dashers accrue the surplus value?

      • vips7L3 hours ago
        How the hell is uber or door dash entrepreneurial?
  • kixiQu5 hours ago
    > If you work in construction, you need to lift and carry a series of heavy objects in order to be effective. But lifting heavy objects puts long-term wear on your back and joints, making you less effective over time. Construction workers don’t say that being a good construction worker means not lifting heavy objects. They say “too bad, that’s the job”.

    My parents were both construction workers. There is an understanding that you cannot lift heavy objects forever. You stop lifting objects and move to being a foreman, a supervisor... and if you are uncomfortable learning to get others to do work that you previously did yourself, you burn out your body entirely and the consequences are horrible.

    This is factual reality, but it is also a parable that has been important for me to internalize about delegation in my own career. It is not irrelevant to AI use, but I don't think it slots onto it totally as neatly.

    • maerF0x038 minutes ago
      Also worth noting that, so long as it's reasonable, lifting heavy objects makes you stronger, whereas the current hypothesis on using AI is approximately the opposite.
    • ASalazarMX5 hours ago
      Software developers are more architects than plain programmers. You wouldn't make an architect lift heavy things; you want them to design how those heavy things are used.
  • joduplessis7 hours ago
    I really wish seemingly intelligent people would stop using the abstraction analogy (like the article does). The key word is: determinism. Every level of abstraction (inc. power tools, C, etc.) added a deterministic layer you can rely on to more effectively do whatever it is that you're doing - same result, every time. LLMs use natural language to describe programming and the result is variable at best (hence agents, so we can brute-force the result instead). I think the real moat is becoming the person who can actually still program.
    • phpnode7 hours ago
      People always say this but it's misguided imo. Yes LLMs are not deterministic, but that's totally irrelevant. You aren't executing the LLM's output directly, you're using the LLM to produce an artefact once that is then executed deterministically. A spec gets turned into code once. Editing the spec can cause the code to be updated but it's not recreating the whole program each time, so why does determinism matter?
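
      To make this concrete, here's a minimal sketch assuming the OpenAI Python client (the model name, prompt, and file name are all illustrative, and in practice you'd review the output and strip any markdown fences before committing it):

        import pathlib
        from openai import OpenAI

        ARTIFACT = pathlib.Path("slugify_gen.py")  # hypothetical generated module

        if not ARTIFACT.exists():
            # The non-deterministic step runs once; its output is reviewed,
            # committed, and then treated like any other source file.
            client = OpenAI()
            resp = client.chat.completions.create(
                model="gpt-4o",  # illustrative model name
                messages=[{"role": "user", "content":
                    "Write only the code for a Python function slugify(s) that "
                    "lowercases s and replaces runs of non-alphanumerics with '-'."}],
            )
            ARTIFACT.write_text(resp.choices[0].message.content)

        # Every later run executes the same committed artifact deterministically.
        from slugify_gen import slugify
        print(slugify("Hello, World!"))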
      • michaelrpeskin6 hours ago
        In my experience, I'm using LLMs as my abstraction to "junior engineer". A junior engineer isn't deterministic either. I find that if you treat the LLM output like a person's output, you're good. Or at least in my projects, it's been very successful. I don't have it generate more code than I can review, or if I give it a snippet to help me fix it, if it ends up re-writing it like an ambitious engineer would do, I tell it to start over and make minimal changes.

        I guess I'm not spun up about the determinism because I've been working at the "treat it like a person" level more than the "treat it like a compiler" level.

        To me, it's really like an engineer who knows the docs and has a good memory, rather than an infallible code generator.

        I work at a small company, so we don't have tons of processes in place, but I imagine that if you already had huge "standards" docs that engineers need to follow, then giving the LLM those standards would make things even better.

        • skydhash4 hours ago
          The thing is you can quickly teach a junior how to respect a specification contract, so that with very minimal oversight, you get the wanted implementation. And after a few years (or months), the communication overhead gets shorter. What would have been multiple rounds of meetings and review sessions become a short email and one or two demos.
          • QuercusMax3 hours ago
            What I've been learning as a 20% "harness engineer" is that in order to get the models to "learn" you need to add both documentation and static checks, as well as often custom skills. My main project at work has issues where the AI will often get super confused and step on itself trying to run tests - so the answer is writing better docs (AGENTS.md) and providing deterministic tools to work with the projects.

            Large software projects (I'm thinking google3) often have large amounts of both of those things, as they're always getting new developers joining.
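
            For what it's worth, the AGENTS.md files I'm describing are nothing exotic -- just plain instructions that point the agent at the deterministic tools. A made-up sketch (the project layout and make targets are all illustrative):

              # AGENTS.md
              ## Running tests
              - Never invoke the test runner directly; use `make test`, which
                sets up the hermetic environment the suite expects.
              - To run a single test: `make test TEST=path/to/test_file.py`
              ## Conventions
              - Run `make lint` before proposing a change and fix what it reports.
              - Never edit files under gen/ -- they are produced by `make codegen`.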

      • AstroBen6 hours ago
        If it's not deterministic you can never fully trust it. In a deterministic abstraction I don't need to audit the lower levels.
        • HDThoreaun7 minutes ago
          You fully trust your coworkers?
        • ex-aws-dude5 hours ago
          Who said you need to trust it? Reviewing code is still way faster than writing code.
          • bluefirebrand4 hours ago
            > Reviewing code is still way faster than writing code.

            Writing code results in a much better understanding of the code than reviewing it

            In fact I would say that in large complex codebases, developing the same understanding of what the code is doing might actually take longer than writing it from scratch would have.

      • mrbananagrabber5 hours ago
        this is the way LLMs _should_ be used, as an assistant to create reliable, deterministic code. and honestly, they're fantastic when used this way. build the thing you need with the LLM, then put the LLM away.

        but in practice, the current obsession with agents means people are creating applications that depend entirely on sending requests to LLMs for their core functionality. which means abandoning the whole idea of deterministic software in favor of just praying that all of the prompts you put around those API requests will lead to the right result.

      • udave4 hours ago
        try distributing this spec amongst your team members, ask each of them to drive it to completion. no follow up edits. deploy to individual environments and then run a rigorous test suite against all of the deployments. see if all of them behave the same way.
        • phpnode3 hours ago
          They won't. So what? This is not how specs are used, no one is saying that they are a replacement for source code.
      • ex-aws-dude5 hours ago
        Exactly, the argument makes sense if it's about inference at runtime

        But that's not the case here

      • knivets5 hours ago
        how do you know the artifact is correct?
    • NiloCK5 hours ago
      I grant that there's a definition of abstraction that LLMs don't fall into. But people describing LLMs as another abstraction layer aren't all misunderstanding this. Instead, they are using the term ... more abstractly.

      EG: How did Mark Zuckerberg make software five years ago?

      He's as capable of opening up an editor as I am, but circumstance had offered him a different interface in terms of human resources. Instead of the editor, he interacts with those humans, who produced the software. This layer between him and the built systems is an abstraction, deterministic or not.

      Today, you and I have a broader delegation mandate over many tasks than we did a few years ago.

      • indiosmoan hour ago
        The way I frame this is that LLMs are not replacing the tools, which are deterministic. They are replacing the humans, which are themselves non-deterministic, as in your Zuckerberg example.
    • qnleigh5 hours ago
      LLM's don't have to achieve perfect reliability to replace lots of work. They just have to reach the balance of reliability and cost suitable for a given task. This will depend on the task.
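
      A back-of-the-envelope way to see it, with entirely made-up numbers: suppose a task costs $50 of engineer time by hand, $5 when delegated (tokens plus review), and a failure that slips through review costs $200 to clean up.

        # Break-even reliability for delegating one task (all numbers made up).
        hand_cost = 50.0   # engineer does it by hand
        llm_cost = 5.0     # tokens plus review time
        cleanup = 200.0    # cost of a failure that slips through review

        # Delegating pays off when llm_cost + (1 - p) * cleanup < hand_cost.
        p_break_even = 1 - (hand_cost - llm_cost) / cleanup
        print(f"worth delegating once reliability p > {p_break_even:.2f}")  # 0.78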
    • ansk5 hours ago
      I see what you're getting at, but determinism isn't the right word either. LLMs are fundamentally deterministic -- they are pure functions which output text as a function of the input text and the network parameters[1]. Depending on your views on free will, it could be effectively argued that humans are deterministic as well.

      The concept you're touching on is the idea that LLMs (and humans) are functions which are inscrutable. Their behavior cannot be distilled into a series of logical steps that you can fit in your head, there are no invariants which neatly decompose their complexity into a few interpretable states, and the input and output spaces are unstructured, ambiguous, underspecified, and essentially infinite. This makes them just about impossible to reason about or compose using the same strategies and analysis we apply to traditional programs.

      [1] Optionally, they can take in a source of entropy to add nondeterminism, but this is not essential. If LLM providers all fixed their prng seeds to a static value, hardly anyone would notice. I can't imagine there are many workflows which feed an LLM the exact same prompt multiple times and rely on the output having some statistical distribution. In fact, even if you wanted this you may just end up getting a cached response.

      • udave4 hours ago
          Let's be real, if you and I both ask claude to generate a feature on the same project, what are the chances that it spits out 100% replicated code? But if we build the project using a Dockerfile, we will get the same binary and the same image. Products around LLMs are non-deterministic, unlike compilers.
        • ansk3 hours ago
          I can assure you that a fully deterministic and equally effective claude is possible to build. And yes, that would mean identical prompts would yield 100% identical output 100% of the time. It would still make the occasional logical or factual error, but it would do so deterministically. Would this solve any of the problems with building reliable programs using LLMs?
        • pzo3 hours ago
          it's nondeterministic because we chose to make it so, by setting a higher 'temperature'. I bet if you run an open-weights model with temperature 0, on the same device, with the same prompt and parallelism turned off, you will get a more deterministic result (excluding some floating point operations).
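
          a sketch of that experiment, assuming the Hugging Face transformers library (the model name is illustrative). greedy decoding draws on no entropy source at all, so repeated runs on the same device should match exactly:

            import torch
            from transformers import AutoModelForCausalLM, AutoTokenizer

            name = "Qwen/Qwen2.5-0.5B-Instruct"  # illustrative open-weights model
            tok = AutoTokenizer.from_pretrained(name)
            model = AutoModelForCausalLM.from_pretrained(name)

            ids = tok("Write a one-line docstring for a slugify function.",
                      return_tensors="pt").input_ids
            # do_sample=False means greedy decoding: no temperature, no seed.
            out1 = model.generate(ids, do_sample=False, max_new_tokens=40)
            out2 = model.generate(ids, do_sample=False, max_new_tokens=40)
            assert torch.equal(out1, out2)  # identical on the same device
            print(tok.decode(out1[0], skip_special_tokens=True))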
      • skydhash4 hours ago
        > Optionally, they can take in a source of entropy to add nondeterminism, but this is not essential. If LLM providers all fixed their prng seeds to a static value, hardly anyone would notice

        Everyone added /dev/random to their offerings, so every LLM coding tool is non-deterministic.

    • arecsu4 hours ago
      There's something to be said about the fact that the very people who would use deterministic layers to build stuff are... non-deterministic. We, as humans, have our set of pros and cons, wins and failures. Even the most brilliant coders on earth will make mistakes from time to time. I often fail to see this accounted for in any critique of LLMs, as if we humans are not flawed in our own ways, with a huge degree of variance across individuals. Good and bad code existed prior to LLMs. If you're hiring someone to write code, you're basically using some heuristics to trust this person will do a good job. But nothing is ever guaranteed 100% deterministically. Without thinking about it too much: LLMs will sometimes produce better code and manage systems better than some people who are earning salaries out there. Possibly sub-par developers, if we were precise, but professionals in the meaning of the word (people who are being paid to do the work).

      At the end of the day, what matters is how willing the person behind a given task is to deliver quality work, how transparent and honest they are, how well they understand requirements, and whether they are a pleasure to work with alongside other humans. AI/LLMs are just extra tools for them. As crazy as it might sound, not so many people are willing to push boundaries and deliver great work. That is what makes the difference.

    • danaw7 hours ago
      every time a person uses the abstraction argument, an angel dies
  • deferredgrant8 minutes ago
    The practical lesson is probably to build adjacent judgment: product sense, domain expertise, systems taste, and communication. Pure implementation may be the most exposed slice.
    • upupupandaway5 minutes ago
      This is, to me, the "correct" answer. I am starting to see the role evolve to "software producer", which, like music producers, direct an entire problem space using tools (Claude / ProTools + presets) and occasionally bring some specialized musicians for some advanced parts. Most commercial software will be built this way, much like almost all commercial music already is.
  • fooker7 hours ago
    If by software engineering, one means typing code character by character into a text editor, sure it's going to be difficult to find someone to pay you for it.

    If you mean creating software, well we are creating more software than ever before and the definition of what software is has never been so diverse. I can see many different careers branching off from here.

    • xtracto7 hours ago
      We are experiencing what Civil Engineers experienced going from slide rules to calculators. Or electrical engineers going from manual circuit path drawing to CAD tools.

      The interesting thing to me is that, Software Engineering will have to evolve. Processes and tools will have to evolve, as they had evolved through the years.

      When I was finishing university in 2004, we learned about the "software crisis" era, the waterfall development process, and how new "iterative methods" were starting.

      We learned about how spaghetti code gave way to Pascal/C structured programming, which gave way to OOP.

      Engineering methods also evolved, with UML being one infamous language, but also formal methods such as Z language for formal verification; or ABC or cyclomatic complexity measurements of software complexity.

      Which brings me to today: now that computers are writing MOST of the code, the value of current languages and software dev processes is decreasing. Programming languages are made for people (otherwise we would continue writing in Assembler). So now we have to change the abstractions we use to communicate our intent to the computers, and to verify that the final instructions are doing what we wanted.

      I'm very interested to see these new abstractions. I even believe that, given that all the small details of coding will be fully automated, MAYBE we will finally see more Engineering (real engineering) Rigor in the Software Engineering profession. Even if there still will be coders, the same way there are non-engineers building and modifying houses (common in Mexico at least)

      • hnthrowaway03157 hours ago
        Calculators and CAD tools do not give you non-deterministic answers. Both of them simply automate part of the manual work without creating anything "new". I haven't used CAD tools but I did use some level editors such as Trenchbroom -- I think what is automated is the 3d shapes that you want to make -- e.g. back in '96, when ID Software was creating Quake, there were very few pre-drawn shapes in the level editor and they had to make the blocks themselves, so it was very difficult and time-consuming to make complex shapes such as curved walls and tunnels. Then better tools were invented and now it is much easier to create a complex shape. But you don't type "a Quake level with theme A, and blah blah" and then get a more or less working level -- this is what AI is doing right now.

        I think the right analogy to calculators and CAD tools, is IDE with Intellisense for SWE -- instead of typing code one char by one char, we can tab to automate some part of it.

        But I agree with your conclusion -- SWE is changing, whether we like it or not. We need to adapt, or find a niche and grit it out to retirement.

        • fooker5 hours ago
          > non-deterministic answers

          It doesn't make sense to get hung up on this aspect of LLMs. We prefer non-deterministic sampling so far because it tends to work slightly better, even though it is completely possible to ask for a temperature=0 deterministic answer.

          With more scale and research, at some point you'll get results that are both useful and deterministic, if it's not already the case.

          • shimman5 hours ago
            It absolutely makes sense to get "hung up" on something when it comes to planning society around it, JFC. I'm with the other commenter: your understanding of these tools should be called into question, since you seem to be reading the tea leaves of statistical noise.
    • jerf7 hours ago
      In 2020, there are two companies that are competitors with each other. They each employ 100 programmers to do their job, and we all know how those organizations operated; perpetually behind, each feature added generating yet more possible future features, we've all lived it and are still largely living it today.

      In 2026, both companies decide that AI can accelerate their developers by a factor of 10x. I'm not asserting that's reality, it's just a nice round number.

      Company 1 fires 90 of their programmers and does the same work with 10.

      Company 2 keeps all their programmers and does ten times the work they used to do, and maybe ends up hiring more.

      Who wins in the market?

      Of course the answer is "it depends" because it always is but I would say the winning space for Company 1 is substantially smaller than Company 2. They need a very precise combination of market circumstances. One that is not so precise that it doesn't exist, but it's a risky bet that you're in one of the exceptions.

      In the time when the acceleration is occurring and we haven't settled into the new reality yet, the Company 1 answer seems superficially appealing to the bean counters, but it only takes one defector in a given market to go with Company 2's solution to force the entire rest of their industry to follow suit to compete properly.

      The value generation by one programmer that can be possibly captured by that programmer's salary is probably not going down in the medium and long term either.

      • rayiner7 hours ago
        Your hypothetical ignores the distribution of programmer talent. Company 1 can pay more per person and hire 10x programmers, who can then leverage AI to produce the same or more as Company 2.

        We have seen this in other knowledge industries. U.S. legal sector job count is about the same today as it was 20 years ago. But billing rates have exploded and revenues in the 200 largest firms have increased more than 50% after adjusting for inflation. Higher-end law firms have leveraged technology to be able to service much more of the demand and push out smaller regional competitors.

        • jerf5 hours ago
          Of course it does. It ignores a lot of things. Mostly I just want to present the view that things aren't entirely hopeless and the entire industry is doomed to contract by 90% because of AI. Your legal system point also fits in precisely with what I'm trying to convey, just in a different direction.
        • fooker6 hours ago
          I think paying significantly more was a very localized thing that happened for AI researchers who were familiar with the alchemy that made GPT4 suddenly work much better than anything else seen before.
      • boredatoms7 hours ago
        This resonates strongly with me, in that all that extra margin has to be spent on something other than dividends
    • harimau7777 hours ago
      My concern would be whether creating that software pays enough to keep up with skyrocketing costs of living. In the past, the jobs created by automation have generally been lower paid with less autonomy.
      • lanstin7 hours ago
        This problem is not a software engineering problem nor an AI problem but a problem of the balance of power between working hard vs. investing. If the people who believe in working hard organize and slow down the tendency to rig everything for investors, then the markets should stabilize at a more generally prosperous place.
        • rayiner6 hours ago
          The balance of power is dictated by economic facts, not by organizing or politics. Auto workers in 1950 weren't better organized than auto workers in 2026. They just had more leverage because they weren't competing with auto workers in China. Likewise, Silicon Valley isn't paying people writing web apps $$$ because those workers are organized. They are doing it because they don't have a feasible alternative. If AI enables them to do more with less, they'll take that option.
      • aleph_minus_one7 hours ago
        > My concern would be whether creating that software pays enough to keep up with skyrocketing costs of living.

        You might need to relocate to a place with much lower costs of living.

        This was the idea behind remote working discussed during COVID-19 times:

        - the company can pay less money because the employee is living at a much cheaper place than the expensive city where the company is located

        - on the other hand, even with a smaller salary, the employee has more money at the end of the month because of the smaller costs of living

        So both sides win.

        • passivepinetree6 hours ago
          Ignoring the preference of people generally wanting to live in HCOL areas, this only works if every company hires equally from LCOL areas. One of the benefits of living in a HCOL area is access to the job market it provides. It's much easier to get hired for a software position living in San Francisco than it is living in Deming, New Mexico.
          • bluGill5 hours ago
            More importantly, in San Francisco there are a lot more opportunities than in Deming. I've never been to either city (I'm not counting the conference I was at, since I never left the hotel). However, I can still tell you confidently that if you have a weird hobby, you're much more likely to find other people with that interest, stores that sell the things you need for the hobby, and all those other things in life that you want. If you love doing the types of things people in Deming do, well, it's a great life, I'm sure. However, as soon as you want to do something off the wall, you may not even find enough people in Deming to field your cricket team, while I have no doubt that San Francisco has a team you could join.
        • Natfan7 hours ago
          but moving to a lower COL area can reduce the amount of public and private services one gets access to, no? network connectivity will, for example, likely be worse out in the sticks
        • harimau7777 hours ago
          [flagged]
          • aleph_minus_one7 hours ago
            > Unfortunately, in America places with low cost of living are generally, to put it diplomatically, unpleasant places to live.

            This will change for the better if more and more educated people relocate there.

            • nly6 hours ago
              And then those areas become more expensive...
          • SoftTalker7 hours ago
            But at least stereotyping happens everywhere!
            • selimthegrim5 hours ago
              I like how the assumption was they were all white, Christian and rural.
    • Rotundo6 hours ago
      Creating more software does not solve anything if that software is mostly a functional duplicate of other software. Or, in other words, all companies re-invent the wheel many times over. It doesn't matter if you 10x the development of software that brings nothing new besides being written in a shiny new framework.

      We should, IMHO, start getting rid of most software. Go back to basics: what do you need, make that better, make it complete. Finish a piece of software for once.

    • ryeights7 hours ago
      s/software engineer/secretary/

      s/creating software/typing correspondence/

      In a world where software programming/architecting is solved by AI, value will accrue to people with expertise in other domains (who have now been granted the power of 1000 expert developers), not the people whose skillsets have been made redundant by better, faster and cheaper AI tools.

      • ReptileMan7 hours ago
        It could go either way. Don't forget that LLMs also have expertise in the other domains. Who would do better - the chemist with vibe coded app or the developer with vibe coded chemistry?
        • ryeights6 hours ago
          My premise is that a vibe-coded app will be indistinguishable from a ‘hand-crafted’ one. So in that scenario the chemist wins, because the developer has no value to add.

          It is clear to me that SWE and ML research will be subsumed before other domains because labs are focusing their efforts there, in their quest to build self-improving systems.

    • kypro7 hours ago
      There will be more software in the same way there is more agricultural output today.

      The idea that productivity gains which result in more of something being produced also create more demand for labour to produce that thing is more often wrong than true, as far as I can tell. In fact, it's quite hard to point to any historical examples of this happening. In general, labour demand significantly decreases when productivity significantly increases, and typically people need to retrain.

    • ReptileMan7 hours ago
      Except we are now in the golden age where people with 20 or 30 years of experience know what quality software is - or at least what it is not. So they are able to steer the LLMs. Once this knowledge is gone - the quality could go downhill.
  • torben-friis8 hours ago
    Unless I'm missing something, there's an obvious logic issue here.

    If we truly need to sacrifice our skill to be productive by using LLMs that atrophy us, then the only devs that have a limited lifespan are us. The next ones won't have a skillset to atrophy since they won't have built it through manual work.

    Also, I hereby propose to publicly ban the "LLMs generating code are like compilers generating machine code" analogy, it's getting old to reargue the same idea time after time.

    • huani7 hours ago
      why is the LLM-compiler analogy flawed? Is it only because LLM output is non-deterministic?
      • layer88 minutes ago
        You can reason with precision about how source code will behave once compiled, and how changes to the code will change the behavior. You can’t reason with precision about AI prompts in that way. This is about more than just determinism, because there are deterministic systems where you still can’t reason usefully about the input-output relationship.
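
        (A toy illustration of that last point, sketched in Python: a cryptographic hash is perfectly deterministic, yet you still can't usefully reason about how a small change to the input will change the output.)

          import hashlib

          # Deterministic: the same input always produces the same digest.
          print(hashlib.sha256(b"deploy v1").hexdigest())

          # But one changed character yields an unrelated digest; determinism
          # alone doesn't make the input-output relationship reasonable.
          print(hashlib.sha256(b"deploy v2").hexdigest())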
      • torben-friis6 hours ago
        Besides the probabilistic and non-reproducible output, programming languages are designed to be unambiguous and explicit, and human language isn't.

        A for(){} construct normally either is undefined or has one specific meaning. "Then iterate and do x" might mean many subtly different things.

        Most programmers never deal with a compiler bug in their whole career, and can dismiss the possibility. For LLMs it would be hard to even define what a "compiler bug" would be since there is no specification for English.

        Then there's the fact that models generally don't guarantee anything at all. Sonnet can change under your feet.

        Models also degrade as the context window gets larger. Compilers handle one line just the same as 20.

        I could keep going, there's so many fundamental differences in the process that the analogy only serves to provide a false feeling of security.

      • GrinningFool7 hours ago
        Because you don't have to coax, trick, or guide your compiler into doing the right thing.
        • mwigdahl7 hours ago
          Clearly you are not a C++ programmer. :)
          • marcosdumay5 hours ago
            Maybe C++ compilers would benefit from asking an LLM to rewrite their messages in a way that makes the point clearer...

            But the GP stands.

      • hyper_frog7 hours ago
        I don't think it's just that. There's also the fact that if you're working with C or C++ or any systems-level language, you typically know how to read assembly because you've stumbled upon it for some reason, and if you're writing low-level programs (which is typically what these languages are used for) you will definitely at some point need to read assembly and maybe even write some. But with LLMs the entire field has shifted. You don't need to know anything to write in any language, and you don't even need high-level computer science knowledge nowadays to get something that works -- and the world increasingly just seems to want something that works.
      • TremendousJudge5 hours ago
        I have mentioned it several times lately, but if the analogy was correct, people would be committing prompts and not code. High-level source code gets committed, binaries don't. If prompts were really "just a higher level of abstraction", then there wouldn't be a need for saving the code. Or at least you'd see people publish their prompts and chat history alongside the code.
      • bbg24017 hours ago
        Compiler: "Here is an exact program. Translate it while preserving its meaning."

        LLM code generation: "Here is an intent/specification. Invent code that hopefully satisfies it."

        Does the compiler analogy provide value under those terms? I don't think it does. In fact, I think it provides negative value.

        We don't need to use tortured analogies to express excitement over these tools.

  • JohnMakin5 hours ago
    > The career of a pro athlete has a maximum lifespan of around fifteen years. You have the opportunity to make a lot of money until around your mid-thirties,

    This sounds ageist - I'm around 40 and feel I am at my mental peak, compared to even my mid 20's. This isn't a good analogy at all, the brain doesn't "wear out" like a professional athletes' body does, it just changes its structure. The brain is a remarkable organ.

    • sibeliuss5 hours ago
      He just means: by this age you've probably found your preferred title and level, unless you want to rise to more C-level / executive positions, which are rarer in any case and most folks don't want.
      • JohnMakin5 hours ago
        This is definitely more charitable, but isn't this already the case now? It seems he's saying past your mid 30's you'd no longer be viable as a software engineer. That's never been the case, and I'm not sure why it would now suddenly be the case.
        • sibeliuss5 hours ago
          Even clearer: if you don't adapt to the changes taking place in the field, there might not be a future for you. It's not about age, it's about attitude and flexibility (which are, admittedly, issues when getting older).

          In other words, if you want to continue stubbornly typing out code by hand, the person right over there has already mastered agentic tooling and is doing vastly more than you, more quickly, and with greater precision, and will simply be a more fit candidate to hire. Roles for this type of legacy stubborn personality will become fewer and fewer, and you will age out as part of the old school.

          • JohnMakin5 hours ago
            I see what you're getting at, but if it's not about age, why use an age-related analogy? I probably should have amended my first statement in this thread to say that it sounds ageist, even if it only implies that the people who will refuse to adapt are older. This day is already here; people are already adapting to this. He seemed to frame it as the current young-20s career people having this limited timeframe of productivity.
            • geodel2 hours ago
              The age analogy is fine because, unlike the few who are deep into technology and the latest changes in the field (who, btw, are overrepresented on this site), for most IT developers age => years of experience without actually improving skills.

              As I interview a lot of people for typical enterprise IT jobs, even candidates with 20 years of experience often don't seem to know much beyond what they learned in their first few years.

  • comonoid4 hours ago
    From Reddit:

    > After being laid off, a programmer becomes a welder. One day while working, he suddenly muttered to himself, "It's been so long, I've even forgotten how to solve three sum". A coworker next to him quietly replied, "Two pointers".
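
    (For anyone who has also forgotten, a minimal sketch of the two-pointer approach: sort first, fix one element, then walk a pair of indices inward.)

      def three_sum(nums):
          """Return all unique triplets in nums that sum to zero."""
          nums = sorted(nums)
          result = []
          for i in range(len(nums) - 2):
              if i > 0 and nums[i] == nums[i - 1]:
                  continue  # skip duplicate anchor values
              lo, hi = i + 1, len(nums) - 1
              while lo < hi:
                  s = nums[i] + nums[lo] + nums[hi]
                  if s < 0:
                      lo += 1  # need a bigger sum
                  elif s > 0:
                      hi -= 1  # need a smaller sum
                  else:
                      result.append((nums[i], nums[lo], nums[hi]))
                      lo += 1
                      while lo < hi and nums[lo] == nums[lo - 1]:
                          lo += 1  # skip duplicate second values
          return result

      print(three_sum([-1, 0, 1, 2, -1, -4]))  # [(-1, -1, 2), (-1, 0, 1)]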

  • rglover18 minutes ago
    Screwdriver -> power drill

    Hand-coding -> llms/agents

    Sometimes the only thing that can fit into a tricky spot is a screwdriver. The power drill didn't make screwdrivers obsolete, it just made them less necessary day-to-day.

    Same thing here. LLMs are power tools, but sometimes, the only thing that can fit into a "tricky spot" with code/systems is knowing how to do it by hand.

  • woeirua7 hours ago
    Was it ever a lifetime career? Haven't most people looked around and asked themselves where are all the 50+ engineers? They basically don't exist in large numbers. Ageism is real in this industry. You either save up enough money to retire early, switch into management, or get forced out of the industry eventually. AI is just accelerating the trend. I see very few junior engineers resisting AI. I see a LOT of staff+ engineers resisting it. Just look at the comments on HN. Anti-AI sentiment is real.
    • LocalPCGuyan hour ago
      Every 5 years on average since the late 90s the industry has doubled in size. Add in natural attrition (and the other things you mentioned: ageism, management or other tech-adjacent careers, etc.), and even accounting for a modest number of "second career devs" starting later in life rather than out of college, you still have an industry that skews younger simply by virtue of overall growth patterns.

      I think that is significantly overlooked when people ask "where are the 50+ engineers?".

    • mancerayder5 hours ago
      Managers are being slammed - FB, Amazon and recently Cloudflare and Coinbase.

      New grads are being slammed, "because LLMs can do that work."

      No new folks, no managers, and no olds. What a delightful career we've chosen for ourselves.

    • hnthrowaway03157 hours ago
      If you are lucky and got in early, then probably yes, it could be a lifetime career. It's like all careers: if you joined early, you got a lot of opportunities, you rode the wave, and you eventually rose to the top if you gritted through.

      It's a lot easier to be early than to be smart or quick.

      • senko5 hours ago
        If you're on the top, you probably aren't coding much. So you're more in management than getting your hands dirty.
        • hnthrowaway03155 hours ago
          Yeah, but you still have the choice to stay in the trench. People like Carmack/Cutler do that. But I agree the majority just go high management.
    • HighGoldstein7 hours ago
      > Haven't most people looked around and asked themselves where are all the 50+ engineers? They basically don't exist in large numbers.

      I'm not discounting ageism in the industry, but how popular of a career was it 30+ years ago compared to now?

      • mikestew7 hours ago
        In 1996? Software development was the hot ticket to upper middle class in the early 80s when I was a recent CS grad, and I was already working with people who were in it for the money. By the late 90s, if you could spell “HTML”, you were making decent money as a web developer. This all came crashing down during the Dot Bomb collapse, but SW has been pretty popular for most of my career, and it just continued to get more popular, especially as salaries continued to increase.
        • hylaride4 hours ago
          I remember seeing an article around a decade ago about a ~50 year old "web developer" claiming age discrimination because they couldn't get a job. Somebody found their resume and it was literally 1990s "html/CSS" added to some other period tooling. Said person found a niche for a new technology (the web) and then stopped upping their skills.

          I've had to change course several times in my career (graduated in 2004). UNIX admin and later network admin, DevOps, and now I'm doing a mixture of DevOps and development (despite not being a full time developer in my entire career, being able to use AI to plug into code and fix/enhance things like monitoring, leveraging cloud APIs, etc has been a game changer for me).

          Right now, as somebody in their mid 40s, I'm seeing AI as a productivity amplifier. I am able to take my experience and steer and/or fight opus into doing what's needed and am able to recognize if it looks right.

          I'm so glad I'm not fresh out of school in this environment, though people said the same thing when I graduated in the Dotcom bust...but being ready and eager to do groundwork was a door opener. Finding that first door to open was tough, though.

        • SoftTalker7 hours ago
          In retrospect the Dot Bomb was a bump in the road. Yes, some people who only knew enough HTML to be a "Webmaster" might have been filtered out, but pretty quickly anyone who could really build software had opportunities greater than before.
    • ozim3 hours ago
      I think you missed the part where there were far fewer software devs/engineers earlier.

      Year after year, many more new people joined as things got easier and more accessible.

      Now 40- or 50-year-olds are few and far between, where most guys I see are in their 30s. The ones who are 60 are diluted in the sea of new entrants.

      Ageism didn't come from the top; it just happened with the flood of young employees. There are social dynamics at play: you might get a 40-year-old who isn't a manager getting along with a bunch of 25-year-olds, but that's going to be the exception, not the rule.

  • maerF0x040 minutes ago
    Ageism is alive and well in our industry.

    People need to learn the difference between fluid intelligence and crystallized intelligence.

    People need to hear that startup success is maximal when the founders are older, not younger. VCs chasing youth are statistics deniers.

  • simonw3 hours ago
    This is such a misleading title. The post isn't about software engineering not being a lifetime career, it's about this:

    > If AI does turn out to make you dumber, why can’t we just keep writing code by hand? You can! You just might not be able to earn a salary doing so, for the same reason that there aren’t many jobs out there for carpenters who refuse to use power tools.

    The argument the piece makes is that being a software engineer who insists on writing code by hand may no longer be a lifetime career.

    I think the definition of "software engineer" is changing, and it's not even changing that much. We construct software to help solve human problems. We can keep on doing that, just now we get to do it more.

  • strken7 hours ago
    Argument A: AI means you don't learn as much, so even though you are more effective, it inhibits your growth and you shouldn't use it. However, on a pragmatic level, it's effective to hire a bajillion people, fire them at will, and get AI to do everything. You will get so many JIRA tickets closed and so many lines of code written.

    Argument B: AI means you don't learn as much, and the single most useful work product of a software engineer is knowing how the code functions, so it's depriving your company of the main benefit of your work. Also, layoffs are terrible business strategy because every lost employee is years of knowledge walking out the door, every new hire is a risk, and red PRs are derisking the business.

    Institutional and personal knowledge seem similar, but the implications of each are radically different.

  • pugworthy6 hours ago
    I'm repeating what others have essentially said, but ask yourself what's on your resume. If it says "Software Engineer" and that's all it talks about, then yea you might not find it's a lifetime career.

    But if it's a diversity of things (that use or leverage software development) then you probably have a lifetime career ahead of you.

    I've been writing software for over 40 years but I've never seen myself as having a software engineering career. I've been a research assistant in geophysics, a marine technician on research ships, a game developer, an advisor to the UN, and a lot more. Yes all through that I used software, but I did a lot of other things in the process of using it.

  • dakiol4 hours ago
    We're forgetting one thing: we (mere engineers) have control over nothing. The vast majority of us are at the mercy of executives and investors. Before AI we had some sort of grip because our skills weren't so much a commodity, and yeah, dealing with code and systems architecture and data and distributed systems wasn't that easy. Now AI is a tool not for us but for the higher-ups, they can finally commoditize software engineering and need only a small fraction of us. I see engineers around here fighting and discussing who'll be left behind (the 80%) and who'll remain because they're "more than mere coders" (the 20%)... what we don't discuss here is that we're all now at the mercy of Anthropic et al, and that's bad. The irony is that the vast majority of us use Anthropic, so we are just loading the guns for them to use them. It's sad, but we call it progress. Nuts
  • mbgerring3 hours ago
    I don’t understand why so many people are convinced that “this time is different.” New tools raise the ceiling of what’s possible. New jobs emerge at the limit of what’s possible with new tools. The jobs doing what we do today will disappear. New jobs with greater complexity and specialization will emerge. I have watched this happen in the software industry in my lifetime. I expect that it will continue to happen.
    • jhrmnn2 hours ago
      This can’t be sustainable; there must be a limit in human biology to how complex a job we can handle. More and more people will fall under that threshold.
    • borzi2 hours ago
      It's not different. If you haven't already, read "Extraordinary Popular Delusions and The Madness of Crowds".
  • jongjong5 minutes ago
    Engineering boils down to figuring out what is important and prioritizing.

    This requires having an understanding of a business domain, economics, human psychology and technology.

    The competitive aspect of it means that you need to understand these things better than most people and machines. If you don't, then your skills have no value on the market. Will generalist AI trained on public data ever understand these things better than software engineers across every possible niche?

    I don't think so, because that knowledge is usually gate-kept. Nowadays, new engineers almost have to beg to be given access to knowledge of company systems. It takes at least 6 months for a skilled engineer to ramp up on large systems... and it's mostly because of institutional resistance.

    The thing is, it doesn't even require people to be withholding information... Some engineers will happily share everything they know about internal systems... But in a big company, you first have to identify this person. That can take a while... Then you need to identify other people who will give you other information that is relevant to your specific tasks/integrations.

  • adrianmonk3 hours ago
    Here's a better comparison to pro athletes. Their work output is winning games. How do they get good at (and stay good at) that? Is it by playing real games for points?

    That's a part of it, but only a small part. They don't get good at the thing mainly by doing the thing. They get good at it by training to do the thing.

    An NFL football player does a ton of things other than playing in games. They have practice scrimmages. They do drills like throwing, catching, running patterns, tackling, reading quarterbacks, stripping balls, picking up fumbles, etc. They work with coaches on their technique. They watch film. They spend many hours in the gym and on the track building their strength, speed, cardio, and stamina.

    Yes, it's true that your software skills will atrophy if you don't use them. But that doesn't mean your skills have to get worse and worse causing you to eventually quit the job. It means you need to set aside time to maintain your skills. It may no longer happen automatically as a side effect of your work, but it can happen intentionally instead.

  • afavour7 hours ago
    Seems the solution here is the same it's actually always been if you want career progression: be more than just a code jockey. The true value of an engineer is to be plugged into overall roadmaps, broader thinking around product, how to achieve company goals, etc etc.

    Yes, LLMs might dramatically reduce the amount of code we write by hand. But I'm a lot less convinced they'll solve all of the amorphous, human-interacting aspects of the job.

    • harimau7777 hours ago
      My experience has been that companies actively work to prevent people from becoming more than just code jockeys. For example, most of the places I've worked have viewed code delivered as the ONLY metric used to evaluate performance. Attempts to contribute to roadmaps or strategy are ignored at best and punished at worst.
      • randcraw5 hours ago
        Yeah, 95% of the available advancement in computing is in people management, not technical mastery. Businesses much prefer to hire externally to serve any non-core capabilities, especially to minimize internal culpability should anything go wrong. That leaves little opportunity to think outside the box technically.
  • raffael_de8 hours ago
    The differentiator is augmenting reasoning with AI versus replacing reasoning with AI. But those who choose to replace their reasoning with AI probably weren't good at it to begin with, because if they were, they'd choose not to replace it. The exception is if AI can actually replace reasoning (which it can't, yet) -- then it's game over for a career in software engineering anyway.
  • wduquette3 hours ago
    "You can! You just might not be able to earn a salary doing so, for the same reason that there aren’t many jobs out there for carpenters who refuse to use power tools."

    I've long regarded myself as more a master craftsman than an engineer, and I've had the pleasure of working on one-of-a-kind or first-of-a-kind things. Perhaps fortunately I'm near retirement. But I genuinely enjoy the coding: it's how I engage with the problem and learn to understand it. It's also how I ensure that I'll be able to read the code and find things in the code base when I come back to it years later. Last thing I want to do is spend my days overseeing someone (or something) else's code. If I wanted to be a manager of programmers I could have done that years ago.

  • philipnee7 hours ago
    80% of my day-to-day job has never been pumping out lots of code. It is a complicated career, isn't it? We do a lot of alignment, design, and thinking. I can't even accept the idea of outsourcing thinking; I think AI is very good at helping us think clearly, but it doesn't really "think" for us.

    If you do outsource that, then... you're likely very replaceable.

    • azan_7 hours ago
      If AI becomes good enough to easily produce maintainable and high quality software, then I really can't see how demand for software engineers would not plummet. Even lots of non-coding work that software engineers do, such as accurately capturing what client actually wants, will become much less valuable - e.g. currently misunderstanding of client's requirements is catastrophic and can lead to waste of months of labour; with AI it could become matter of max few hours lost. So I can understand argument that software engineering careers might be safe because AI may plateau and we might never reach level where it's actually capable of producing good software. But I absolutely don't buy that software engineering will be safe even if such AI exists. Even if your current work is just 20% actually coding, you must remember about second order effects that will take place once quality code generation is 1000 times faster.
    • hnthrowaway03157 hours ago
      AI can also do alignment and pull from its vast training dataset for design and "thinking" -- because 99% of the problems in this world have already been solved, multiple times, maybe not in exactly the same format, but in a very similar one.

      I also see that in the future humans will adapt to AI, instead of the opposite. Why? Because it's a lot easier for humans to adapt to AI than the other way around. It's already happening -- why do companies ask their employees to write complete documentation for AI to consume? This is what I call "Adaption".

      I can also imagine that in the near future, when employment plummets, when basic income becomes universal, when governments build massive condos for social housing -- everything new will be required to adapt to AI. The roads, the buildings, everything physical is going to be built with ease of navigation by AI in mind. We don't need a Gen AI -- that is too expensive and too long-term for the Capitalist class to consider. We only need a bunch of AI agents and robots coordinated in an environment that is friendly to them.

      • randcraw5 hours ago
        Rather than coining a new word like adaption, I'd call this acculturation. It's reshaping not only SW dev but natural language too -- how we read and write and how we speak.

        Everyone knows that AI-written slop isn't worth actually reading. So when reading mass media content we skim over each paragraph's opening phrases rather than read it deliberately, sentence by sentence. We also do this while writing notes, dropping determiners, acronymming common phrases, and making references to characters/scenes in popular media. Now with the rise of vocal interfaces and ever shorter rounds of engagement, all this abbreviating will only exponentiate.

  • AbbeFaria6 hours ago
    There’s a hierarchy amongst knowledge work and AI hasn’t yet been able to do the work that is rare and valuable.

    Over the past two decades, there have been a lot of solved problems, like building boring scalable web apps, UX design, etc., and AI is fairly good at these, enough so that good prompting can get you very far. This shouldn't be a surprise; there's a lot of publicly available data for this (GitHub repos etc).

    On the other hand, there are rarer computer science problems like designing efficient datacenters, GPUs, and DL models. Think about problems that someone of Jeff Dean's or James Hamilton's (AWS SVP) ability would solve, or a skilled computer architecture researcher like David Patterson. These are incredibly hard and rare problems, and AI hasn't been able to make much progress in these areas. That's true for other sciences as well.

    If you’re a regular Joe like me who builds boring CRUD apps, AI is coming for you.

    What I mean is if you are working on incredibly hard and rare problems that require rare skills and also those problems don’t have publicly available data that LLMs can be trained on, you’re safe from being “automated” away. If not, you must plan accordingly. Also if you’re a skilled manager (in any field) AI cannot replace you, highly skilled managers that can get the best out of their teams have rare skills that aren’t easily replicable even amongst humans much less AI. Although, if going forward we need fewer developers we will need fewer managers too.

  • xrd3 hours ago

      "We may be in the first generation of software engineers in the same position. If so, it’s probably a good idea to plan accordingly."
    
    He compares software engineers to pro athletes. What does it mean to plan accordingly? Start working with the mob to fix poker games? I don't know what "plan accordingly" means at all, but it is a thought-provoking statement.
  • osigurdson2 hours ago
    It won't be a career if AI gets good enough that you don't have to read / understand the code -- otherwise, I don't think AI will have much impact on jobs.
  • cmiles745 hours ago
    Comparing software development to carrying heavy things at a construction site feels like a real stretch to me.

    'If you work in construction, you need to lift and carry a series of heavy objects in order to be effective. But lifting heavy objects puts long-term wear on your back and joints, making you less effective over time. Construction workers don’t say that being a good construction worker means not lifting heavy objects. They say “too bad, that’s the job”.'

    On another note, for sure software developers are saying things like "this is the part of the job that I like" or "if you aren't doing the work, you won't be good at the work." But other people are saying this, too. I just saw an episode of "Hacks" ("QuickScribbl") where the writers say pretty much this exact thing when confronted with AI tooling designed to "make their job easier". Is writing comedy also like lifting heavy objects at a construction site?

  • oytis5 hours ago
    I don't understand it. The time-limited career argument would work if we were born with an innate ability for software engineering and lost it over time by using AI. Most people are not born with that ability, though; it needs to be developed first.

    And read Programming as Theory Building already, it's not that long

  • hoppp3 hours ago
    If you don't know how to code then you can't really influence the AI technically and that can result in everything being the same.

    Maybe you want a React app and using Redux for state would be best for the specific case, but the AI doesn't recommend it and you don't know better -- then you are missing out and can end up with something suboptimal. (This was just an example.)

  • AllanSavageDev3 hours ago
    From all I can see, things for US citizen developers are all but over.

    Not AI, offshoring combined with downsizing of US based engineering orgs.

    Corp America has finally figured it out after 2 decades of entitled developers turning 2-day tasks into 2-week tasks in the name of "best practices", "architecture", "Doing It Right!", etc., all while commanding high salaries.

    It turns out that Good Enough is in fact good enough and the people who write the checks are onto it. Even if its not quite good enough, cheap offshore resources can just be sent back to make it work. US based staff of 5 people who can be held responsible for guiding a much larger offshore group seems to be the common pattern.

    All of this was imparted to me by a CIO in a recent interview with a financially strong mid-sized company in the eastern US. The developers I interviewed with were EXCEPTIONALLY COMFORTABLE and displayed zero signs of any kind of stress from maintaining their literally 20-years-out-of-date infra. It was insinuated that the team I interviewed with "probably won't look the same in 6 months" too.

  • EliRivers3 hours ago
    The majority of my activity today, as a professional software engineer with two decades of experience: trying to get Team Sales to express what they want. It's so hard. I see no way an LLM can do this. I could possibly be replaced with someone who spent their time begging Team Sales to type what they want into an LLM.
  • 7 hours ago
    undefined
  • dzonga3 hours ago
    such an incoherent argument.

    > professional athletes & construction workers - they work in physical fields, which means there are physical limits to what they can do and what their bodies can endure.

    > software engineering is an art & engineering - which means as long as you're of sound mind, you can do it till you die of old age, or even if, say, you go blind, because your ability to refine / your taste is not dependent on your physical capabilities.

    > LLMs one-shotting things - that is not engineering, because engineering is about compromising within constraints & using rules of thumb. So if you have no constraints, you are not engineering.

  • somesortofthing3 hours ago
    Agent-assisted programming is fundamentally the skill of directing and supervising agents. I don't see any reason to believe that working a job where you direct and supervise agents will make you any worse at directing and supervising agents long term.
  • diebillionaires2 hours ago
    Have you seen construction workers lately? They have machines for everything. I highly doubt they even need to lift things any more.
  • tehnub6 hours ago
    >If the models are good enough, you will simply get outcompeted by engineers willing to trade their long-term cognitive ability for a short-term lucrative career

    > (2) AI-users thus become less effective engineers over time, as their technical skills atrophy

    Wouldn't (2) imply that if everyone just used AI there eventually would come a time when there aren't engineers who will outcompete you (because their skills are so atrophied)?

  • topherPedersen3 hours ago
    It's been absolutely astonishing to see software developers pick software development as the first profession to attempt to automate away. Couldn't you geniuses have picked any other profession to start with? And it's not just the developers at Anthropic & OpenAI, even at my own company, the rockstar developers were the first to try and automate away all of the software development jobs at our company.
    • simonw3 hours ago
      I've been releasing open source software for ~25 years at this point. The goal of that was always to save other developers time, as part of a collaboration where other open source developers save me time with their own work.

      That's worked out pretty great so far!

    • Jtarii3 hours ago
      I don't think it's that surprising that software developers would try to make tools that make software development easier.
  • mobiuscog6 hours ago
    Software engineering today is almost nothing like the role it was 30 years ago.

    Maybe if you somehow stick with the same company for your entire career, it could feel somewhat similar... but I doubt it, as 'best practices' and many other things cause it to change.

    The days of 'lifetime career' had already gone for most people, way before AI arrived.

  • abhik245 hours ago
    Totally agreed and on point. Calculator operators aren't around much anymore.
  • Stevvo5 hours ago
    I take issue with the premise that "using AI means you don't learn as much from your work". With AI assistance, I tackle far more tasks than I would without it. Learning per task goes down, but cumulative learning does not.
  • gcanyon7 hours ago
    Is anything today a lifetime career? I’ve had at least five or six job descriptions over my time, and at least a few of them pretty much don’t exist anymore, or are changed beyond recognition.
  • coldtea5 hours ago
    > I don’t think there’s compelling evidence that using AI makes you less intelligent overall.

    That statement is evidence enough.

  • boombapoom3 hours ago
    It never was. Look around: it's a young man's game from the get-go.
  • feverzsj5 hours ago
    So, the permanent underclasses those billionaires talk about are actually just juniors who never get a chance to become seniors.
  • mariopt7 hours ago
    Why are we upvoting this?

    Virtually the entire blog is about AI, with a ridiculous publishing rate (https://www.seangoedecke.com/page/5). Funny how I can look at this site's HTML and know right away it was done with AI.

    Can we stop upvoting vibe-published articles? The arguments are flawed and don't even make sense to anyone who does software.

    • passivepinetree6 hours ago
      I think you're being a bit harsh here.

      Yes, the blog is mostly about AI, and yes, he publishes very frequently. But his articles don't read like AI and he claims not to have used it in his writing (https://www.seangoedecke.com/avoid-ai-writing/). And regardless of how you feel about the content, the community has clearly decided it's worthwhile as a discussion point.

    • aleph_minus_one7 hours ago
      > Why are we upvoting this?

      Because people want to discuss the topic of the headline.

  • dusted5 hours ago
    I'm a software engineer and architect, I love my job, I love diving into the small details, I love the grand overview.. I love identifying concepts and applying them to achieve elegant high-performing systems.

    I love thinking about what kind of assembly the compiler may generate (though honestly, I rarely get the chance). I love thinking about how languages should be more dynamic. (Who's got actually-first-class functions? Like, ones that you can build, compose, combine and manipulate to the same degree you can a string or a JSON object? No, Lisp, you're cheating; close, but no point.)

    And yet.. I don't care that much. Not because I'm late in my career (I'm 40, there are still some years left in me), but because I want to make computers do things, and what I enjoy is thinking up ways the things can happen, and sometimes the particulars that matter when making a lot of different things happen in a coherent system.. And yeah, LLMs are trained on people's output, and from what I'm seeing everywhere, people are overall fairly terrible at that, and most of the plumbing-type glue being written is not worth anyone's time..

    And I'm not saying I don't care because LLMs can't do my job. (Heck, even after hours of back-and-forth spec building and refining every little nook and cranny -- beautifully explained, proven even, by reasoning and example alone -- the stupid coding agent still cheats or gets it wrong; as soon as the plan is put into motion, it'll mess things up on some scale so fundamental I should just have done it myself.) And I hope that changes. I hope that I won't have to go into such detail.. I hope to become a steward of taste rather than a code reviewer.. I hope that I will eventually not be needed for that anymore.. I want it to replace me, so I can move to telling it what I want, and have it made that way..

    I hope I won't need to steward good taste, and that nobody will.. I hope the applications I use in 5 years will be a collection of one-offs and gradually improving tools that were written _just_ for me, for my way of working, and my way of thinking.. I want to prompt the damn program to change itself as I discover new ways to do things, until it can eventually figure out how to automate the last bit of my task away.. And then I'll go do something else exciting.

  • yieldcrv3 hours ago
    Month 30 of software engineering disappearing in the next 6 months

    I'm greatly anticipating the next Great Leap Forward™ with a publicly available Mythos or other new paradigm I can't currently imagine

    but at the moment, agentic coding has made me busier than ever before, while it's Product Managers, UX, QA, Data Scientists and DevOps that have disappeared from the teams I'm on -- across multiple organizations -- and I have to do all their work and make dashboards that I didn't have to make before as well

    All the projects that would have been cancelled by Q3 are being attempted in Q1, which means more work

  • mystraline4 hours ago
    It'll move, sure.

    I'm looking at proper engineering in building local LLM networks, with proper firewalls, capability access, and guards around the LLM systems, to allow and enable advanced use without "lol delete everything" happening.

    When there's a land grab, move to selling tools and the how-to knowledge for maintaining the tools and keeping them properly operated and maintained.

    I also look at upsells like local LLMs as a reason to do this in-house, so that companies aren't liable for rug pulls and violation (consumption) of trade secrets, or breaching confidential discussions.

    And LLMs aren't good at recommending tech stacks for running them. The stuff is moving faster than most training data sets have kept up with.

  • pvelagal5 hours ago
    Imagine a situation where AI creates thousands of lines of code across a few repos, there is a production issue, and it doesn't get resolved by AI. How can humans jump in and resolve the bug without knowing anything about the code?
  • anarticle4 hours ago
    Software is a tool to solve a problem, as long as you keep finding problems that you can solve with it, you're likely to get paid to do it.

    If your crowning achievement is: "I can 100% all leetcode hards" I have bad news for you.

  • varispeed4 hours ago
    Software engineering stopped being a real career a long time ago. The companies have no respect for software engineers and treat them as a commodity that can be replaced at any time. The traditional career "progression" also doesn't exist: you can get a pay rise only so many times and become the most senior of seniors, unless you want to fulfil the Peter principle.

    While most developers were busy grinding, the corporations did their utmost to ensure that the only sensible pathway to wealth and development, running your own business, is closed. In many countries, due to regulatory capture enacted by corrupt governments, making a profit is next to impossible, and that's if you manage to jump bureaucratic hurdles that don't apply to larger corporations.

    AI is just a tool. Asking whether AI will replace the software engineer is like asking whether the hammer will replace the carpenter.

  • traderj0e5 hours ago
    Nah it is
  • delusional7 hours ago
    > Construction workers don’t say that being a good construction worker means not lifting heavy objects. They say “too bad, that’s the job”

    I don't know, maybe in your part of the world, but where I'm from we have a series of robust worker protection laws that try to limit the damage the work does to you. We generally consider it a bad thing for workers to damage their bodies, and if we could build houses without that, we'd prefer it.

    In this specific case we do have techniques to build software without causing damage, so why change that?

    This post is arguing that maybe software engineering should start being harmful, even though we know it doesn't have to be. It's a post by a guy begging to be fed into the capitalist meat grinder. Meaningless self-sacrifice.

  • dailywriterguy3 hours ago
    I mean, according to our tech overlords, no one will need to do anything and we'll just sit around and goof off all day. So, honestly, the future is bright.
  • coolThingsFirst7 hours ago
    It never was a lifetime career; if you don't get the dough by 35, you just failed.
    • SoftTalker6 hours ago
      Absolutely untrue, you could have a solid career writing back office or internal software in financial services, insurance, higher ed, any number of industries. Would they make you a millionaire? No. But they'd pay for a nice house in the suburbs and raising a family.
    • mythrwy5 hours ago
      I started at age 39 though and did pretty well up until a year or two ago (16 years total).

      Like many people I've been sad about the loss of a career I spent years developing skills in and I'm 55 now and won't be quickly retraining for another high paying career. Fortunately I do have other skills I developed earlier in life and low needs so will probably limp by fine but it's still a painful adjustment.

      Point being, you could always write code as an older person. Well, back in the old days when we wrote code anyway.

    • whateveracct5 hours ago
      i'm about 35 and i have made good money but not enough to quit. i plan on just sitting around cashing checks for another decade. with a few liquidity events along the way to sweeten the deal. should pay for my mortgage, some home renos, and fund my 401k etc. i don't foresee myself being out of work (and i don't even use AI to code! i'm just Actually Good!)
  • keybored7 hours ago
    > I hope that this isn’t true. It would be really unfortunate for software engineers. But it would be even more unfortunate if it were true and we refused to acknowledge it.

    More AI Soothsaying. Not so hard on the Inevitabilism this time.

    https://news.ycombinator.com/item?id=47362178

  • pphysch7 hours ago
    On the contrary, in an efficient economy, every business operations manager (MBA) would be a skilled software engineer, able to comfortably manage data flows and design custom automated processes. There's so much potential energy there in unlocking this technical literacy.

    Less "pure" programming, but lots more programming in general.

  • the_real_cher8 hours ago
    Was it ever? It's always seemed weird to me that people even think 'software engineering' is a career.

    It's a tool for knowledge work.

    No carpenter is a specialist in drills.

    It seems to me that the best way to navigate a long term career is to have another specialty and use software engineering as a tool within that specialty.

    • Aurornis7 hours ago
      It most certainly was a lifelong career.

      I’m kind of confused how you might think it wasn’t. Going through a career as a software dev until retirement was very common.

      Software engineers didn’t just disappear after age 40.

      • aleph_minus_one7 hours ago
        > Software engineers didn’t just disappear after age 40.

        At the end of the '90s and the beginning of the '00s (the "dotcom bubble"), it was a common saying that if, as a programmer, you don't have a very successful company by the time you are 30 or 40 (and thus are basically set for life), you have basically failed in life; exactly because "everybody" knew that programming is a "young man's game" (i.e. you likely won't get a programming job anymore when you are, say, 35 or 40 years old).

        So,

        > Software engineers didn’t just disappear after age 40.

        is rather a very recent phenomenon.

        • atmavatar6 hours ago
          Enter the carousel. This is the time of renewal.
          • selimthegrim5 hours ago
            I'm not sure anyone under 40 is getting that reference.
        • GrinningFool7 hours ago
          > At the end of the '90s and the beginning of the '00s (the "dotcom bubble"), it was a common saying that if, as a programmer, you don't have a very successful company by the time you are 30 or 40 (and thus are basically set for life), you have basically failed in life; exactly because "everybody" knew that programming is a "young man's game"

          That seemed commonly held among folks participating in the dot-com bubble. Plenty of people had been doing it for decades even as the bubble was growing.

          > Software engineers didn’t just disappear after age 40.

          >> is rather a very recent phenomenon.

          Not really. It's not that they disappeared, it's that they're a small fraction of the overall SWE population as a side-effect of how much that population has grown.

        • Aurornis4 hours ago
          > At the end of the '90s and the beginning of the '00s (the "dotcom bubble"), it was a common saying that if, as a programmer, you don't have a very successful company by the time you are 30 or 40 (and thus are basically set for life), you have basically failed in life;

          This wasn't common anywhere except for maybe the Silicon Valley bubble.

          The rest of the US, and even the world, could see that not having a very successful company of your own is not equal to being a failure.

          • aleph_minus_onean hour ago
            > > At the end of the '90s and the beginning of the '00s (the "dotcom bubble"), it was a common saying that if, as a programmer, you don't have a very successful company by the time you are 30 or 40 (and thus are basically set for life), you have basically failed in life;

            > This wasn't common anywhere except for maybe the Silicon Valley bubble.

            This was a very common sentiment even in Germany at this time.

    • strken7 hours ago
      Software is wood, not drills, and if we somehow invented bacteria that gradually built an ugly but saleable house when fed on water and nutrients and nudged into shape, I bet carpenters (well, framers or whatever they're called in the US) would have an identity crisis too.
    • jdc05897 hours ago
      I kind of disagree. You are describing a kind of person who is extremely valuable: a person who is proficient in SWE but also has domain-specific skills in some niche.

      That's great, but it's nowhere near the norm, and people have been doing generalist software engineering for decades. There has long been enough work for generalists that it has been a very reasonable career.

      IMO AI is the first thing that has ever actually challenged that.

    • azath927 hours ago
      Id disagree with this analogy: "No carpenter is a specialist in drills." and i think its an interesting lens through which to look at the evolution of our tools.

      I think there are trades where tool (or process if i may be allowed to extend the analogy) specialists exist and are highly valued. My dad is a plumber, so ill use that example but id trust similar is true for carpentry. there are specialists by task/output (new construction, repairs, boilers etc) but also tool specialist plumbers and companies for example drain clearing equipment or certain kinds of pipe for handling chemicals other than water are very specialised, and there are roles for them because the thing they enable, and the criticality of the task, and often the cost and complexity of using the tool are high enough to make specialisation valuable.

      IMO software has, for the 10 years ive been working in it, been in an unusual position where the tools (languages, engineering practices, tech stacks) were super technical and involved, but also could be applied to a large number of problems. That is the perfect recipe for tool specialists: complex tool with high value and broad domain/problem space applicability.

      Because of that tool specialisation, we've separated the application of the tool to a problem/domain from the tool use itself. Reducing the complexity of applying these tools to many problems means all domain specialists will use them, relying less on tool specialists.

      Imagine a McGuffin tool for attaching any two materials together, one which took a degree to figure out (loose hyperbole here), that suddenly you could use for 5 bucks and a quick glance at the first page of the manual. An industry that used to have lots of McGuffin engineers would be mega disrupted, and you could argue that those tool specialists would have to identify more with what they were building than with the McGuffin they were using.

      • randcraw4 hours ago
        Yeah, IEEE Spectrum has responded to the dissimilar roles in SW dev by ranking programming-language popularity contextually: separating the project domains and ranking the languages only within each domain. That's a lot more useful than letting the single dominant project domain silence the recessive ones, as TIOBE does.
    • jake-coworker7 hours ago
      I think the logical next step is that "XYZ knowledge worker" will become a software engineer of sorts. Not literally writing code, but at minimum encoding processes/workflows into some language.

      If you're a paralegal or an accountant who can't manage their workflows with AI, you're going to be way less productive than someone who can.

      And if you're a paralegal or an accountant who can manage a lot of your workflows with AI, you don't need custom software (hence fewer dedicated software engineers).

    • chasd007 hours ago
      I tell my boys (both in HS now), the combination of a specialized skill/knowledge + competent computer programming is the sweet spot. For example, my oldest wants to go into Petroleum Engineering, which is great, but I told him to still learn software development and get comfortable solving problems with code. Having specialized Petroleum Engineering knowledge and being a competent software developer is a powerful combination.
      • randcraw4 hours ago
        Yeah, I've seen the same thing happen to data miners in the pharma industry. An increasing fraction of young biologists have basic statistical DM skills, plus enough web-search proficiency to gather example DM analysis code, even without using AI. In the very near future I expect almost all exploratory R&D DM will be done by pharma domain experts (biologists and chemists) rather than by dedicated DM experts (computer scientists or engineers).
    • delusional8 hours ago
      > No carpenter is a specialist in drills.

      There's no category difference between being an expert in carpentry vs masonry and being an expert in drills vs hammers. They are both just areas of expertise.

      Going down the path of trying to define which functions require expertise and which are "merely" tools, by anything other than description, is nonsense.

      Expert functions are just those areas where using a tool is sufficiently difficult to require expertise.

    • suddenlybananas8 hours ago
      Software engineering isn't a tool, it's the task.
  • vasco8 hours ago
    Are people seriously thinking that you can make yourself dumber by using a chat UI?

    If talking to an AI made me dumber and limited my career, then all the customer support people that ever existed were in the same or a worse position: talking to dumb humans on chat all day, answering tickets that are always about the same topics, and linking the same docs over and over. This makes no sense.

    • meheleventyone8 hours ago
      You're misrepresenting the potential problem. It's more that using AI stops you exercising the cognitive processes you would use doing things yourself, and those encompass skills, knowledge, and brain function that can atrophy. For an extreme example, look at cognitive decline in the elderly, which can be mitigated by taking part in cognitively stimulating activities.
      • vasco8 hours ago
        Can you comment on other jobs, though? The large majority of jobs require no big mental effort. Even switching from programming to management would involve that. In that light, wouldn't it be impossible for a manager to ever become technical again, because they'd atrophy so quickly?
        • meheleventyone7 hours ago
          I think you're probably catastrophizing the impact with statements like "it'd be impossible for a manager to ever become technical again", because that's not the likely outcome as I understand things. But yes, people who stop programming for an appreciable amount of time do find it harder to pick back up again.
        • somebehemoth7 hours ago
          The longer the manager is out of the game, the harder it is to return to the game. Returning to the game takes time. Depending on age and income, returning to the game may be impossible for some people over time.
        • delusional7 hours ago
          I can't answer for the other guy, but my answer would be that talking to a clanker is LESS mental effort than being a manager, and that's why your reasoning atrophies so quickly.

          Managers can go back to being technical, because they are still interacting with problems that require human thinking. Token farmers don't.

    • shhsshs7 hours ago
      If you constantly pawn a task or cognitive load off onto someone else (AI or not), you'll eventually get worse and worse at that particular type of thinking. Your overall mind doesn't necessarily get weaker, but you definitely start to decline at anything you don't regularly practice.
    • nathanielks8 hours ago
      I think you need to read the studies linked in the footnotes. This is a well-studied issue.
    • ramon1568 hours ago
      You can definitely feel it when you talk to an AI vs. doing the churn yourself. It's comfortable and simple; it doesn't aggravate you.
    • 4ndrewl8 hours ago
      Pretty much every study says so, so I guess?
  • tayo428 hours ago
    > The career of a pro athlete has a maximum lifespan of around fifteen years. You have the opportunity to make a lot of money until around your mid-thirties, at which point your body just can’t keep up with it.

    If you believe this about your software career, how do you think you're going to switch into another career as a junior and keep up?

    • chomp8 hours ago
      Easy, I made the switch in my 30s, now I manage software engineers :)
      • hnuser5 hours ago
        Software managers are being replaced by vibe coders. In the age of AI, managers are irrelevant.
  • j454 hours ago
    Most careers evolve as technology does.

    Other professions do too, healthcare among them.

    Software, being a new field, didn't really become a standardized profession in the way engineering did.

    The goalposts are moving because the standards are moving, because the capabilities are moving.

    Remaining a self-directed learner will remain critical.

  • yobid207 hours ago
    Terribly written article that failed to make any point. Anyone who's read AI-generated code from the best models and who understands how LLMs work knows this statement is complete BS.
  • otabdeveloper48 hours ago
    It will be for those fixing AI slop software. (In fact, they might need several lifetimes.)
    • pllbnk7 hours ago
      Part of the problem is that AI can also fix AI slop. At this point I doubt whether code quality matters anymore in most non-critical software. You can ask an LLM whether the code has quality issues and have it refactor to a _better_ version. It will reason through, prepare a plan, and refactor. So now, with this "better" code, you can expect your LLM to deliver higher-quality results, and that's all the quality that is needed.
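
      A minimal sketch of what that loop looks like in practice, assuming the OpenAI Python client (the model name and prompts here are illustrative assumptions, not a recommendation):

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def review(source: str) -> str:
            # Ask the model to critique the code and outline a refactor plan.
            resp = client.chat.completions.create(
                model="gpt-4o",  # illustrative model name
                messages=[
                    {"role": "system",
                     "content": "You are a strict code reviewer. List concrete "
                                "quality issues, then outline a refactor plan."},
                    {"role": "user", "content": source},
                ],
            )
            return resp.choices[0].message.content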

      Actually, at this point I feel that the value in software engineering is moving from coding to testing and quality assurance.

      • ezekg7 hours ago
        In my experience, an LLM "refactoring" autonomously doesn't actually improve code quality; it simply reorganizes the mess into a new mess.
        • missedthecue5 hours ago
          This is my experience with human developers too so I'm not sure if there's a meaningful difference.
      • bcrosby957 hours ago
        Sure, but also, AI will always find issues. It will never be mildly satisfied with the codebase and say so.
        • missedthecue5 hours ago
          All the frontier models tell me when there are no issues. After implementing a feature I will ask the model to identify issues in my implementation, list them, and support each item with technical argumentation and reasoning as to why it's an issue.

          If it doesn't find anything, it says it didn't find anything.

        • pllbnk7 hours ago
          Not in my experience. It's true that it will always find new issues in a new session, but it is happy to say so when the code is good.
      • otabdeveloper45 hours ago
        > AI can also fix AI slop

        No it can't.

        AI knows nothing about software engineering, all it can do is generate code.

        • platevoltage3 hours ago
          I'm currently being paid by a client to fix his AI slop.
    • incognito1248 hours ago
      Why do people think there will be work fixing AI slop software? I see that opinion here and there on HN. The cost of codegen is next to nothing. It makes no sense to spend large sums of money having an engineer fix something that could be regenerated over and over until the gods of stochasticity come down in your favour.

      We've entered a period of single-use-plastic software, piling up and polluting everything, because it's cheaper than the alternative.

      • GrinningFool7 hours ago
        When everything is generated on demand, each exploit has to be discovered anew. No more conveniences like common libraries.

        This is sarcasm, but it's probably also going to get sold as a feature at some point.

      • camdenreslink3 hours ago
        If the AI slop software managed to get a user base, then you can't just throw it away and completely start over. You need to modify it in a way that is seamless for your users. If all code becomes single use, are users generating it for themselves? Do you think a dentist office will vibe code their own scheduling software?