43 points by nr378 7 hours ago | 16 comments
  • robotresearcher 5 hours ago
    > Assembly programmers became C programmers became Python programmers. The abstraction rose, individual productivity increased, more total software got written, and roughly similar numbers of people were employed writing it.

    The number of programmers has changed so much, from ~zero in the 1940s to tens of thousands in the sixties, to, what, maybe 30 million today? While most programmers worked a little or a lot in ASM from invention until the 1980s, it's a very specialized role today.

    I do not believe that 'roughly similar numbers of people were employed writing' ASM, C, and Python except for the instant that C outpaced ASM in the seventies and when Python outpaced ASM somewhere around the millennium.

    Probably at no time were ASM, C, and Python programmers even close to similarly numerous.

    • Terr_ 5 hours ago
      I often like to point out that the growth can trick us into over-estimating the attrition/ageism of the industry.

      Sure, you barely meet any 40-year veterans... but that's partly because 40 years ago there were barely any 0-year newbies to start with.

      • drivebyhooting 5 hours ago
        In theory it can. However, in practice, ageism in hiring is 100% observable.

        It’s basically the revealed preference of hiring managers. The seldom-spoken reality is this: managers like them young and hungry with no external obligations; thus they’re maximally extractable.

        • Terr_ 4 hours ago
          > > can trick us into over-estimating

          > [it's] observable

          Nobody said it wasn't, I was quite deliberate in my wording.

          • drivebyhooting 4 hours ago
            I’m not sure what point you were trying to make then.

            At face value your post implies ageism is not a problem. Change the target demographic to a different “minority” - racial, ethnic, diverse - and observe the heavy connotation carried by downplaying severity.

      • dapperdrake 5 hours ago
        StackOverflow had a different take:

        There was exponential growth in newcomers as different languages and hardware became available. Each wave outnumbered the entire previous field combined. Like a colloquial "Moore's law".
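
        If each wave roughly doubles the field, that arithmetic checks out: every new cohort outnumbers all previous cohorts combined. A minimal sketch in Python, with made-up cohort sizes:

          # With doubling, 2**n exceeds 2**0 + ... + 2**(n-1) = 2**n - 1.
          waves = [2 ** n for n in range(6)]  # hypothetical cohort sizes
          for n, size in enumerate(waves):
              assert size > sum(waves[:n])    # each wave > all prior combined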

  • softwaredoug 5 hours ago
    A counterexample to Jevons paradox is writing.

    Arguably, with the increase in literacy, Jevons paradox would say we need to hire more writers. Indeed, a lot more people DO write for their job.

    But it's not like we went from a handful of professional, full-time scribes to 10x the professional full-time scribes. Instead, the value of being just a scribe has gone down (unless you're phenomenal at it). It stops being a specialized skill unto itself. It's just a part of a lot of people's jobs, alongside other skills, like knowing how to do basic arithmetic.

    Coding, like writing, becomes a part of everyone's job. Not a specialization unto itself. We will have more coders, but since everyone is a coder, very few are a capital-C "Coder".

    • sp1nningaway 4 hours ago
      Your assertion makes way more sense than the article, and might explain why I see many excellent programmers be so averse to AI. The value of an individual programmer goes down even though the value of programming increases. The 10x scribes were probably also pretty dubious of the printing press, even though it made writing more accessible and valuable.

      (Also me of two months ago would be shocked at how bullish I've become on LLMs. AI is literally the printing press... get a grip, me!)

      • softwaredoug 4 hours ago
        I think the takeaway is we need a specialization beyond coding. Coding is part of the job, not the whole thing.
    • heliumtera 5 hours ago
      Nobody has a need for some code. They want to steer their systems in a particular direction.

      Writing unikernels will probably not be part of an accountant or plumber job.

      Stupid automation and writing CSS probably won't be either, for different reasons; it's so stupid that a CSS expert was replaced yesterday.

  • pizlonator 5 hours ago
    > Assembly programmers became C programmers became Python programmers.

    Except that there are still a lot of assembly programmers.

    And even more C/C++ programmers.

    It's also likely that C/C++ programmers didn't become Python programmers but that people who wouldn't have otherwise been programmers became Python programmers.

    > At the well-specified end, you have tasks where the inputs are clean, the desired output is clear, and success criteria are obvious: processing a standard form, writing code to a precise spec, translating a document, summarising a report. LLMs are excellent at this

    Yeah

    > At the ambiguous end, you have tasks where the context is messy, the right approach isn’t obvious, and success depends on knowledge that isn’t written down anywhere

    Sounds like most programming

    Almost all of the programming I've ever done.

    > I’m arguing that the most likely outcome is something like “computers” or “the internet” rather than “the end of employment as we know it.”

    Yeah

  • zeroonetwothree 6 hours ago
    This is the default belief we should hold based on historical technological changes. Any argument to either extreme requires a lot of evidence.
    • al_borland 5 hours ago
      I’d be more apt to sit in the boring middle ground if all of the people hyping it up weren’t convincing the C-suite to lay off entire departments. Some wet blankets are needed to cool things down and get the general tone back to reality.
  • dfajgljsldkjag 5 hours ago
    I feel like this is exactly what I see at work every day. It helps me write the boring boilerplate code faster but I still have to spend hours debugging the logic errors it makes. It is a great tool for productivity but it is definitely not going to replace the need for actual engineering thinking anytime soon.
  • pedalpete 6 hours ago
    I work in health (neurotech/sleeptech) and am in the process of writing a post which hits on the health aspect.

    The thing that most people ignore when thinking about AI and health is that two-thirds of Americans suffer from chronic illness and there is a shortage of doctors. Could AI really do much worse than the status quo? Doctors won't be replaced, but what if we could move them up the stack of health to actually doing the work of saving lives, rather than just looking at rising cholesterol numbers and writing scripts?

    • ceejayoz 6 hours ago
      > Could AI really do much worse than the status quo?

      Yes? https://en.wikipedia.org/wiki/Politician%27s_syllogism

      • Quarrelsome 5 hours ago
        It's encouraged me towards getting diagnosed for my probable ADHD. Now I'm mentioning this to people I know and they're all giving me:

        > you're undiagnosed? I thought it was obvious.

        guess I was the last to clock it.

        It was people that made me think of it first: a hookup that was adamant I had it, and then a therapist that mentioned it in our first session. I started the diagnosis like over a year ago and completely forgot about it. It's only been asking gipity about some symptoms I have, and seeing it throw up ADHD a lot as a possibility, that encouraged me to go back to sorting out the diagnosis.

        • ceejayoz 5 hours ago
          I don’t doubt it can be helpful.

          We don’t have enough info to determine whether such anecdotes translate to widespread benefit or surprising consequences.

          • Quarrelsome 5 hours ago
            I'm just suggesting it has a positive impact in preventative care by giving people an outlet to discuss their symptoms and consider possibilities. Obviously the trade-off might be more hypochondriacs, but it's good for people who are the opposite.
            • ceejayoz 5 hours ago
              Yes. The question now becomes one of cost/benefit analysis between the two. Which is tough, and may take decades.
          • dapperdrake 4 hours ago
            Because the floor is invisible. Raising it will make a difference for many crazy values of "raising".
        • galleywest200 5 hours ago
          Prodding you to seek help from a doctor is different from what the OP was saying:

          > Doctors won't be replaced, but if we could move them up the stack of health to actually doing the work of saving lives rather than just looking at rising cholesterol numbers and writing scripts

          I presume your AI assistant did not prescribe medication to you.

          • Quarrelsome 5 hours ago
            Sure, but this is part of preventative care. I'm one of those people who would rather shrug off symptoms than go through the effort of seeking a medical diagnosis, and I doubt I'm alone in this.
    • JohnFen 6 hours ago
      There's a trust problem for that use case, though.

      I don't have a primary care physician because in the area I live in, there are no doctors that I can find that are taking new patients.

      Regardless, I wouldn't want any of my medical data exposed to an AI system even if that was the only way to get health care. I simply don't trust them enough for that (and HIPAA isn't strong enough to make me more comfortable).

      • pedalpete 38 minutes ago
        This is common where I live also (Sydney, Australia).

        However, I'm not suggesting the existing AI systems. There are health-specific platforms, such as Superpower, or in Australia Everlab, which are doing the blood-work, early-detection type stuff. Then, if there is something to address, it gets handed off to a doctor.

      • nephihaha 6 hours ago
        The human element has already been lost in medicine in many cases, unless you are willing to pay a lot for it. Many people need that when they are sick. They want genuine support and something resembling sympathy.

        My friend died last weekend from cancer. Human support/contact was very important to her. AI can't do that.

        • JohnFen 5 hours ago
          True. But a whole lot of that loss of the human element isn't about AI one way or another. It's about doctors being ridiculously overworked.
          • Zigurd 5 hours ago
            I don't know if this is a common experience, but the first time I really needed a doctor, they spent the whole time typing on a laptop. Since that didn't result in any follow up questions or anything beyond a referral to a specialist, I suspect it was all about getting paid by the insurance company. There's a blindingly obvious fix to that part of overworked doctors.
            • JohnFen 5 hours ago
              I know a couple of doctors and they both told me the same thing (this likely depends on exactly where you are): the amount of time they can spend with any given patient is less than 15 minutes. In practice, they have to "rob Peter to pay Paul" and try to minimize the time they spend with patients who have lesser medical needs so they can spend more time with patients who have more complicated situations.
            • ceejayoz 5 hours ago
              Some of it is justifying things to insurance. But some of it is so there’s a record to refer to later when you come back.

              (Doctors will, for example, still tend to type plenty during an appointment in, say, the English NHS.)

            • onemoresoop 5 hours ago
              OK, AI could be used transparently to fill out forms and transcribe what the doctor says into a microphone, assisting health staff with some tasks in the form of interchangeable tools. What we don't need is another layer of black-box magic making everything even more murky.
    • nhinck2 6 hours ago
      Yes, because the act of "moving them up the stack" could have the opportunity cost of preventing real change that would actually improve health outcomes.

      AI could allow the whole system to kick the can down the road.

      • Zigurd 5 hours ago
        It's almost never a good idea to go for a technology fix when there are other obvious defects that could be addressed.
    • toomuchtodo 5 hours ago
      https://en.wikipedia.org/wiki/Deaths_linked_to_chatbots

      We should fix the shortage of healthcare practitioners, not hand folks a fancy search engine and say "problem solved." Would you put forth "Google your symptoms" as a solution to this same problem? The token output is fancy, the confidence in accuracy is similar.

  • ufo 5 hours ago
    In my work as a professor, AI has had a noticeably disruptive impact, for the worse.

    It has become difficult to grade students using anything other than in-person pen and paper assessments, because AI cheating is rampant and hard to detect. It is particularly bad for introductory-level courses; the simpler the material the hardest it is to make it AI-proof.

    • soperj 5 hours ago
      It should make the university's current system untenable, which will be great for anyone who actually wants to learn at university. Cheating was rampant prior to this; hopefully they actually do something about it now.
    • JohnMakin 4 hours ago
      > It has become difficult to grade students using anything other than in-person pen and paper assessments,

      This shouldn't be a big deal. This was the norm for decades. I finished my CS undergrad only ~10 years ago, and every test was proctored, pen and paper. Very, very rarely would there be a remote submission. It did not seem possible to easily cheat in that environment unless the test allowed notes you yourself did not write, or you procured a copy of the test beforehand and studied from it, but the material was sufficiently rigorous that you sort of had to know it well to pass the class, which seems to me the whole aim of a college course.

      • ufo 3 hours ago
        I was referring to other kinds of assessment such as exercise lists, take-home coding projects, technical writing, etc.
      • KPGv2 4 hours ago
        > This shouldn't be a big deal. This was the norm for decades.

        We need to hire more professors, then, as the ratio of FTE professors to FTE students is significantly lower than it was even a decade ago.

        Edit: But I agree. I've mentioned to my professor wife that there needs to be a movement back to oral exams. Oral exams are graded, nothing else is. It works for law school. One of the only things that works for law school. One exam at the end of the semester. Nothing else matters, because the only thing a class needs to measure is mastery of the material, not whether you are diligent at completing basic work with the help of textbooks and friends and the Internet.

    • sodapopcan 5 hours ago
      Conversely, if it drives us back to pen and paper for many things, I see that as a win.
      • ufo 3 hours ago
        An indirect effect though is that if we no longer dedicate a portion of the grade to homework, fewer students do the homework and then they crash in the written exam. (Students have always been very grade-motivated. If it's not worth points they'll deprioritize it.)
    • thot_experiment 5 hours ago
      I think a big reason for the shitshow we're seeing in America is the continued systematic destruction of our education system, and it definitely seems like AI is adding a considerable amount of fuel to the fire, destroying our ability to think critically.
    • dapperdrake 5 hours ago
      As someone who has also been there (close enough):

      What is your guess as to how this will be different from pocket calculators?

      • Quothling 5 hours ago
        I'm not GP, but I've been an external examiner for Danish CS students for longer than a decade. Looking back, my previous gradings have matched the expected national distribution. There was a deviation during covid, but not relative to the national distribution, which was lower across the board. For the past year things have been very different. The trend now is that you see the same number of good students, but almost no middle students. You have students who hand in great projects and a well-written thesis, who can't tell you very simple things about the work they've turned in. There is no real way to prove that these students cheat, but the study programme regulations are pretty clear about what happens when students can't answer questions about what they've written.

        When I look at this January's results, it's all near top marks, near bottom, or failed. Almost nothing in between, and my grades match what has been reported by other examiners so far.

        • dapperdrake 4 hours ago
          Separate political pressures have already forced some departments to turn a blind eye to stuff like this. Even before LLMs.
      • bcrosby95 5 hours ago
        It already is different in the way teachers tend to care about. Kids learn the math that pocket calculators help you with before they have the capability and self-determination to find and use a pocket calculator. Pocket calculators aren't short circuiting any 7 year old's ability to learn basic addition.
      • ufo 4 hours ago
        The most frustrating part is that giving feedback on their essays or source code is a lot of work, which goes to waste if the student cheated.

        Unlike with calculators, making an assessment slop-proof often demands more resources to grade it, be it because the assignment needs to be more complicated, because it needs more teaching assistants, or because more time is allotted for oral presentations. I also shudder at the suggestions to just come up with assignments that assume the students will use AI assistance anyway. That's how you end up with Programming 102 students who can't code their way around a for loop.

      • sodapopcan 5 hours ago
        You can't be serious...
        • dapperdrake 5 hours ago
          Am serious about asking for the opinion of someone affected by a problem or phenomenon.
          • sodapopcan 4 hours ago
            Ya, sorry, I'm in a mood.

            But logically, calculators only do math, and they have primitive inputs that aren't going to match exactly what is on the sheet for anything other than THE simplest of equations. You can't talk to a calculator in natural language; you have to learn how to use one (kind of like, ahem, a programming language). I never found calculators helped me "cheat" at math; it was still hard.

            • dapperdrake 4 hours ago
              Most of the contemporary mathematical proofs from LLMs read like first-semester failings. Doesn’t really help to cheat there, either.
              • sodapopcan 4 hours ago
                Interesting! Though I will hold out for the "you're prompting it wrong" comments.
      • citizenpaul 5 hours ago
        I almost think at this point anyone attempting to make this absurd connection is a paid shill.

        No, AI is not like calculators, looms, engines, or any other advancement.

        If AI continues to improve, we will need a complete reset of how human society works. That will not happen without mountains of bodies. There are two main ways civilization re-balances when the work/worker ratio becomes untenable: war or famine. Hope you and your loved ones are on the lucky side.

        • dapperdrake 4 hours ago
          The idea was to get insights into differences I may not have thought about.
    • jklein11 5 hours ago
      You are a professor and unclear on the difference between harder and hardest?
      • thot_experiment 5 hours ago
        There are professors in non-English-speaking countries, it turns out.
      • aguacaterojo 5 hours ago
        Good chance he is a professor in a technical field and English is not his first language.
        • sodapopcan 4 hours ago
          Absolutely possible. They also could have just made a typo while typing a comment on a casual internet forum, which is also perfectly acceptable.
  • JohnMakin 5 hours ago
    > The measured take, that LLMs are a significant productivity tool comparable to previous technological shifts but not a rupture in the basic economic fabric, doesn’t generate much engagement. It’s boring.

    This isn't a new take. The problem is, "boring" doesn't warrant the massive bet the market has made on it, so this argument is essentially "AI is worthless" to someone significantly invested.

    It's not so much that people aren't making this argument; it's that it's being tossed immediately into the "stochastic parrot" bucket.

    • mandevil 5 hours ago
      I will say that I'm not entirely clear on how the boring case (which I agree seems most likely) manages to pay for all this massive investment in datacenters, power plants, new frontier models, etc. OpenAI has already raised 60 gigabucks from people who are expecting a return on that money, and since it is not actually profitable yet, it will need to raise more money to get to that point. I'm not clear on how they make enough profit to pay off all that investment if AI is actually a "5-25% improvement in productivity for some classes of white-collar workers" sort of proposition.
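
      A rough back-of-envelope, where every number below is an illustrative assumption rather than a sourced figure, shows how sensitive the answer is:

        # All inputs are hypothetical, for illustration only.
        raised = 60e9               # capital raised so far ("60 gigabucks")
        target_multiple = 10        # return investors might hope for
        needed_value = raised * target_multiple           # ~$600B of value
        revenue_multiple = 10       # a common software valuation multiple
        needed_revenue = needed_value / revenue_multiple  # ~$60B/year

        # Could a 5-25% productivity gain, partially captured, get there?
        white_collar_wages = 15e12  # guess at the global white-collar wage bill
        gain = 0.15                 # midpoint of the 5-25% range
        capture = 0.03              # share of the gain vendors might capture
        plausible_revenue = white_collar_wages * gain * capture  # ~$67B/year
        print(f"{needed_revenue=:,.0f} {plausible_revenue=:,.0f}")

      Whether that pencils out turns almost entirely on the capture rate and the multiples, which is exactly the uncertainty.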
  • holistio 5 hours ago
    > software got written, and roughly similar numbers of people were employed writing it

    That's just simply not true.

  • ctoth 5 hours ago
    The "what would change my mind" criteria are all about AI succeeding within existing institutional forms. He's not considering whether those forms get outcompeted by smaller entities that don't carry the overhead, which to me seems obviously how it will go.

    > If agentic systems start successfully navigating the illegible organisational context that currently requires human judgement, things like understanding unstated preferences, political sensitivities, and implicit standards, that would be significant.

    How much of this is actually required for the actual work though and how much is legacy office politics "monkeys love to monkey around" nonsense?

  • robotresearcher 4 hours ago
    > The historical track record on “this technology will end work” predictions is remarkably consistent: they’re always wrong.

    This is nicely expressed and could serve as a TL;DR for the article, though it's buried in the middle.

    We have the most automation we've ever had, AND historically low unemployment. We have umpteen labor-saving devices, AND people labor long, long hours.

    > Labour Market Reallocation Actually Works

    It really does, given a little time.

  • chr15m 4 hours ago
    > the binding constraint was never syntax but the underlying skill of thinking precisely about systems, edge cases, state management, and failure modes.

    Yes, this is still programming.

    (Though I think syntax was most definitely a binding constraint.)

  • citizenpaul 5 hours ago
    On "oddlots" a financial podcast they frequently mention that there is no good exit for AI at this point. Either it massively disrupts industries and the average person is driven closer to poverty, or it does nothing and the historical amount of money wasted will lead to a depression or recession that drives the average person closer to poverty.
    • dapperdrake 4 hours ago
      This is what "chasing the next Google and missing" looks like.
  • heliumtera 5 hours ago
    AI is pretty good at finding relationships between concepts/artifacts. Like, I know there must be a way to get a handle to a file and stream the contents, and I couldn't care less how some fucker decided to arbitrarily define it in this language I would never choose to use but am stuck with. Just give me whatever bullshit is equivalent to what I want; fantastic technology for that.
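
    For instance, the kind of idiom you want the model to surface, sketched here in Python (the file name and byte counting are illustrative, not anyone's actual API):

      # Minimal sketch: grab a handle and stream a file in fixed-size
      # chunks instead of loading the whole thing into memory.
      def stream_file(path, chunk_size=64 * 1024):
          with open(path, "rb") as handle:
              while chunk := handle.read(chunk_size):
                  yield chunk

      total = 0
      for chunk in stream_file("big.log"):  # hypothetical input file
          total += len(chunk)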

    Precisely because of this, some people that couldn't code for whatever reason have crossed the border and can now somewhat produce something substantially more complex than what they can comprehend. This causes problems. You probably should not shit out code you don't understand, or rely on code you don't understand.

  • citizenpaul 4 hours ago
    I think lots of SWEs massively overestimate how much the people in power care whether the software really works most of the time. Yes, even in financial matters. Here is something that is not a secret: Accounting will stop any strange money loss that would cause any problems; it's their job. If it happens often enough, they will complain and have an SWE fix it. There is a reason so many companies take forever to pay out.

    How much of the world currently runs on horribly broken, outdated, and inaccurate spreadsheets? Disposable AI slop apps are just a continuation of this paradigm. I do think that AI slop apps will become competitive with broken spreadsheets and 10,000 scattered Python notebook files, causing a massive drop in the need for various SWE, Analyst, and Data Scientist type jobs. I've seen report systems with individual reports numbering in the millions that were only run once (and many not at all); a huge percent of digital work is one-off disposable items.

    SWE is first and foremost a power structure for the people in charge. The first thing you do when you are in power is always to minimize the number of people with power you depend on. I think AI is already reducing the number of people needed to maintain and develop what are basically state-of-the-company applications for the C-suite. Sure, tech-first companies will be hit less hard, but the Fortune 500, for example, will be making continuous cuts for the next decade.
