185 points by speckx a day ago | 52 comments
  • esperent a day ago
    > 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.

    Agreed, these things all failed to live up to the hype.

    But these didn't:

    Electricity, cheap computing, calculators, photography, the internet, the steam engine, the printing press, tv, cars, gps, bicycles...

    So you can't really start an article by picking inventions that fit your narrative and ignoring everything else.

    • massysett 20 hours ago
      Yes, and despite every single one of these world-changing inventions, people in rich countries still go to work every day, even though UBI is generally not a thing. People claim AI will eliminate large numbers of jobs. Maybe it will, just like the tractor did. But new jobs are created. I would never have guessed that “influencer” would be a thing!

      This current “AI will destroy all the jobs and make most people useless” fear is as old as, say, electricity, and even older than cheap computing. It hasn’t happened.

      • libraryofbabel 19 hours ago
        Ex historian here, now engineer. I would gently suggest you’re underestimating the magnitude of some of the transformations wrought by the technologies that OP mentioned for the people that lived through them. Particularly for the steam engine and the broader Industrial Revolution around 1800: not for nothing have historians called that the greatest transformation in human life recorded in written documents.

        If you think, hey but people had a “job” in 1700, and they had a “job” in 1900, think again. Being a peasant (majority of people in Europe in 1700) and being an urban factory worker in 1900 were fundamentally different ways of life. They only look superficially similar because we did not live the changes ourselves. But read the historical sources enough and you will see.

        I would go as far as to say that the peasant in 1700 did not have a “job” at all in the sense that we now understand; they did not work for wages and their relationship to the wider economy was fundamentally different. In some sense industrialization created the era of the “job” as a way for most working-age people to participate in economic life. It’s not an eternal and unchanging condition of things, and it could one day come to an end.

        It’s too early to say if AI will be a technology like this, I think. But it may be. Sometimes technologies do transform the texture of human life. And it is not possible to be sure what those will be in the early stages: the first steam engines were extremely inefficient and had very few uses. It took decades for it to be clear that they had, in fact, changed everything. That may be true of AI, or it may not. It is best to be openminded about this.

        • massysett 13 hours ago
          Not at all, I fully appreciate that these inventions transformed life. I’m skeptical because so much of the breathless AI chatter claims AI will eclipse all these inventions. It is the breathless AI commentators, not I, who have lost all perspective on the magnitude and sweep of history.
          • K0balt 7 hours ago
            It’s not AI per se, but rather AI-enabled robotics that can change the world in ways that are different in kind, not just degree, from earlier changes.

            No other change has had the potential to generate value for capital without delivering any value whatsoever to the broader world.

            Intelligent robotic agents enable an abandonment of traditional economic structures to build empires that are purely extractive and only deliver value to themselves.

            They need not manufacture products for sale, and they will not need money. Automated general purpose labor is power, in the same way that commanding the mongol hordes was power. They didn’t need to have customers or the endorsement of governments to project and multiply that power.

            Of course commanding robotic hordes is the steelman of this argument, but the fact that a steelman even exists, and that it requires essentially zero external or internal cooperation from people, makes it fundamentally distinct in character.

            Humans will always have some kind of economic system, but it very well may become separate from -and competing for resources with- industrial society, in which humans may become a vanishing minority.

          • jodrellblank 11 hours ago
            You think an artificial intelligence would have less impact on the world than the steam engine?

            The AI commentators are not saying that ELIZA will change the world, they’re saying that one of the big companies is moments away from an AGI. Sam Altman called a recent ChatGPT model a “PhD level expert”; wouldn’t infinite PhDs for $20/month or $200/month be transformative?

            That is, your objection isn’t the usual “LLMs aren’t going to be AGI”, you’re saying “even if they do, it won’t be a big deal”?

            • nancyminusone 9 hours ago
              >You think an artificial intelligence would have less impact on the world than the steam engine?

              Not OP, but yes, 100%. Steam backs nearly all development of technology of the last 150+ years. Where do you think the power comes from to make things? More than half of the world's power *still* runs on steam, as will many of the systems running AI.

              If steam power never existed, not only would you not exist but there's a good chance the country you live in wouldn't either. If you don't believe the effect is large, go to the farthest uncontacted place on earth and take out a CO2 meter.

            • Cthulhu_ 10 hours ago
              There's potential there (with the pocket-PhDs), the question is whether it'll actually make a measurable difference in the long run. I mean I'm sure it will make a difference, the question is whether it's what they say it will be, and whether it'll be financially viable. At the current burn rate of the AI companies, it isn't - before long the first ones will have to give up. They won't die, they'll be subsumed into their competitors.

              Anyway, the challenge is making a difference. Current-day LLMs can, for example, generate stories and books; one tweet said "this can generate 1000 screenplays a day". Which sounds impressive by the numbers, but books, screenplays, etc were never about volume.

              Same with PhDs - is there a shortage of them? Does adding potentially infinite PhDs (whatever they are) to a project make it better, or does it just make... more?

              This is the main difference from the industrial revolution - it, for example, introduced machines that turned 10-person jobs into 1-person jobs. I don't think LLMs will do something like that; they'll just output 10 people's worth of Stuff that someone still has to find a use for.

              I don't think anyone ever asked for 1000 screenplays a day, or infinite PhDs for $20. But then, nobody asked for a riderless carriage, yet here we are.

              • Windchaser 4 hours ago
                > Same with PhDs - is there a shortage of them? Does adding potentially infinite PhDs (whatever they are) to a project make it better, or does it just make... more?

                Yes, there is still a large demand for people with analytical thinking, a deep knowledge base, and good problem-solving skills. This demand shows up broadly across STEM fields, and it's a major reason that these fields pay relatively well.

                Even just thinking of R&D, there is an immense amount of work left to be done in basic science. Research is throttled partly by a lack of cheap graduate lab labor. (If that physical + mental labor became much cheaper, the costs of research would shift - what does it take to get reagents? What does it take to build more lab space, and provide water and light? Etc.)

                The present issue is that current AI does not really offer the same capabilities as a good grad student or PhD. Not just physically, as in, we don't have good robotics yet, but mentally. LLMs do not exhibit good judgment or problem-solving skills, like a good PhD does. And they don't exhibit continual learning.

                No clue on when these will change, but yes, a cheap AI with solid problem-solving skills and good judgment would absolutely upend our economy.

            • estimator7292 3 hours ago
              An actual artificial intelligence? Yes, total paradigm shift. Not even a shift, we'd launch the old paradigm into the sun.

              LLMs and modern day """AI"""? Don't kid yourself.

        • greysphere 15 hours ago
          Another interesting thing about the steam engine is much of science in the 1800s was dedicated to figuring out how steam engines actually worked to improve their efficiency. That may be similar for AI, or it may not!
        • Gooblebrai 12 hours ago
          > They only look superficially similar because we did not live the changes ourselves. But read the historical sources enough and you will see

          Would you mind expanding on this?

        • qsera 15 hours ago
          The potential of the current crop of LLM/AIs will stop at being a very powerful tool to search large volumes of text using free-form questions.

          It will save a lot of time for a lot of people. Yes. But so did computers when they could search through massive amounts of data.

          • libraryofbabel 14 hours ago
            I’d rather talk about the history of steam engines than AI today, so: let’s just say it sounds like at some time in the past you saw a clunky inefficient Newcomen steam engine pumping water out of a coal mine, and you hated it, and now you think that’s all steam engines are or can be or can do: they’re loud and annoying and they’re just for pumping coal mines. Then one day someone tells you they’re powering mechanized looms in cotton mills and you flat out deny it and you don’t even want to go into the mill to take a look, because you hated that first steam engine so much.

            It’s right there. You can go and see it any time, doing the things you don’t think it’s capable of doing. Just a little curiosity is all you need.

            • direwolf20 12 hours ago
              Where is the huge mass of good software that AI has created?
              • jon_north 5 hours ago
                Yup. I judge by results too. I'm still waiting for that too.

                I see a whole lot of software created by smart people - as far as I can tell, about the same amount of software they would have created on their own.

                Open to being wrong! But show me the results.

            • qsera 14 hours ago
              No no, an intelligent person looking at a crude steam engine could see what potential it has. This is not hindsight.

              It is generating large amounts of power on demand.

              From that one can imagine what it could do. But more importantly in this context, one could also imagine what it could NEVER do. Suppose someone says: "Oh, the mighty steam engine! It lets us print 100x more books than we did before. Who knows, maybe some day it will even start writing new books!"

              And at that point, if you understand anything about the steam engine, or about writing, you can call the bluff. But if you don't understand what the steam engine is doing, and you don't actually know what it takes to come up with a story, you could watch the engine printing books and blunder into the conclusion that its printing an entirely new book is only a matter of time.

              So in short, it is not "hate", just an acknowledgement of what it is not.

              • usrnm 14 hours ago
                > No no, an intelligent person looking at a crude steam engine could see what potential it has. This is not hindsight

                Steam engines have been known since the first century, at the very least: https://en.wikipedia.org/wiki/Aeolipile

                It does take a lot of imagination and creativity to come up with new and better ways to use an already existing idea. We're currently just scratching the surface of what LLMs are going to do for us

                • danlitt 12 hours ago
                  From your exact link,

                  > The aeolipile is considered to be the first recorded steam engine or reaction steam turbine, but it is neither a practical source of power nor a direct predecessor of the type of steam engine invented during the Industrial Revolution.

                  • usrnm 12 hours ago
                    Which is the exact point I was trying to make? It's still a steam engine, the basic idea is there and, yet, nobody saw its huge potential
                  • direwolf20 12 hours ago
                    The ancient Greeks surely would have realised that an aeolipile could be used as a source of power, if they'd had abundant combustible fuel, a need for rotary motion, and no better source of it.

                    Newcomen engines are mere curiosities today, because we have better sources of power (better engines). In the past, they had better sources of power too (donkeys, wind, water, or human slaves). Newcomen engines, like all technologies, are only viable in certain economic environments. In all others they are curiosities.

                • qsera 14 hours ago
                  Yea, sure.

                  Better search could be used in ways that we can't think of right now..

                  • usrnm 14 hours ago
                    I already use AI tools for more things than just "better search". Like, today. For work.
                    • qsera 14 hours ago
                      Yes, part of some kinds of work is actually just a glorified looking-up aka search.

                      For example, even something like "I want python code to do X" could get an exact hit in a Stack Overflow answer using regular internet "search"

                      Just wrote about it here https://news.ycombinator.com/item?id=47178461

                      • Sebguer 13 hours ago
                        [flagged]
                        • qsera 13 hours ago
                          He he..if you didn't want to say anything why not just not say anything?
              • Windchaser 4 hours ago
                Early steam engines did not produce large amounts of power on demand, though. They produced small amounts of power, were a hassle to fuel and maintain, and broke often. It was reasonable that the engineers of the 1700s said "well, until someone improves on this, it's not worth using"..

                .. which is not far off from what people said about ChatGPT in 2022.

                I don't know how long it'll take for AI to be as broadly impactful as the steam engine was, but.. it's definitely coming. I expect the world to look radically different in 50 years.

              • generallyjosh 11 hours ago
                There are lots of intelligent people looking at AI and imagining its potential

                Are you just saying that you're more intelligent than them? You can see clearly, where all the steam engine technicians can't?

                • qsera 9 hours ago
                  What are they saying that contradicts with something I said?
        • randomdrake 16 hours ago
          Thank you for your post. Very informative. Why is it too early for AI? It’s clearly an emergent cultural evolutionary byproduct that’s been many years in the making and quite mature. Perhaps your own bias is limiting you to imagine what AI is truly capable of?
      • rogerrogerr 20 hours ago
        This argument is the one that shook me, I’m curious if you think there’s any merit to it:

        Humans have essentially three traits we can use to create value: we can do stuff in the physical world through strength and dexterity, and we can use our brains to do creative, knowledge, or otherwise “intelligent” work.

        (Note by “dexterity” I mean “things that humans are better at than physical robots because of our shape and nervous system, like walking around complex surfaces and squeezing into tight spaces and assembling things”)

        The Industrial Revolution, the one of coal and steam and eventually hydraulics, destroyed the jobs where humans were creating value through their strength. Approximately no one is hired today because they can swing a hammer harder than the next guy. Every job you can get in the first world today is fundamentally you creating value with your dexterity or intelligence.

        I think AI is coming for the intelligence jobs. It’s just getting too good too quickly.

        Indirectly, I think it’s also coming for dexterity jobs through the very rapid advances in robotics that appear to be partly fueled by AI models.

        So… what’s left?

        • gorgoiler 17 hours ago
          I think you are right, but here’s a fun counter-example. I recently bought a new robot* to do some of my housework and yet, at around 200 lbs, it required two people to deliver it (strength), get it set up (dexterity), and explain to me how to use it (intelligence).

          * https://www.mieleusa.com/product/11614070/w1-front-loading-w...

          • mayoff 6 hours ago
            Most of the “delivery” (getting it from the factory to its final installed location) was done by machine: forklifts, cranes, ships, trucks, and (I'm guessing) a motorized lift on the back of the delivery truck.
          • retendo 12 hours ago
            You don't need a lot of imagination to predict those jobs can be done by other robots in the not so far future.
            • asdff 4 hours ago
              Yeah, and I think that extends even to trades we see as protected because they often work in novel and unknown settings, like whatever a drunk tradesman rigged up in the decades previous.

              Eventually it will be more economical to just destroy all those old-world structures entirely, clear the site out, and replace it with the new modular world, able to be repaired by robots that no longer have to look like humans and fit into human-centric UX paradigms. They can be entirely purpose-built to task, unlike a human, who will still be average height and mass with all the usual pieces and parts no matter how they are trained.

        • ludicrousdispla 8 hours ago
          This overlooks that there aren't enough 'intelligence jobs' in an economy for it to be impacted by this.
          • asdff 4 hours ago
            Intelligence jobs are sort of the apex of the economy that everything else ultimately coalesces around to serve. Take any low-skilled area, even one devoid of resources, that basically insists upon its own existence at this point: Walmart workers need the gas station, gas station workers need the Walmart. There is a sort of economy there, but these are straight-up consumption black holes where nothing is actually invented or produced (maybe agricultural products, but no longer by a large fraction of the labor force).

            So where does that leave our world without actual creation, production, ideas? I work at the gas station and sell you zyns? You work at the walmart and sell me rotisserie chickens? We both work doubles and eat and sleep in the time remaining? Remain in this holding pattern until World Leader AI realizes we are just waste heat and culls us? I mean, that is sort of the path we are on. Disempowering people. Downskilling them. Pacifying them. Removing their ability to organize themselves. Removing access to technology and tooling. Making the inevitable as easy as it can be when it comes time for it.

            We are in a death cult called business efficiency. Fire them, it's more efficient. Lean up the company. Don't invest in research, cheaper not to and buy back stock instead. These are death spirals no different than what happens with ants. We are justifying not giving our own species a seat at the table out of pragmatism. Why create a job for someone? It is inefficient, do more with less and don't worry about the unemployed it is their fault. Why pay them well and let them live comfortably? That is profit you could be making. Eventually it is going to be why feed the human species, because that is the line of logic here with business efficiency. We don't optimize to uplift our species. Quite the opposite, we optimize to hold it down and squeeze and extract.

        • tipperjones 17 hours ago
          You said there are three traits, but seems like you only listed two - unless you're counting strength and dexterity as separate and just worded it weirdly.
          • rogerrogerr 17 hours ago
            I think they’re separate. You don’t need to be strong or intelligent to put circuit boards in printers, but there are factories full of people doing that. Purely because it’s currently cheaper to pay (low) wages to humans than to develop, deploy, and maintain automation to do that task. Yet.
          • brigandish 10 hours ago
            AI will improve people’s understanding of the Oxford comma.
        • qsera 15 hours ago
          > think AI is coming for the intelligence jobs

          What you call "AI" is coming for the "search and report" jobs. That is it.

          • Matl 12 hours ago
            The problem with that argument as I see it is that a lot of jobs can be described that way if you want.

            And it's not just these; e.g. video generation is getting better every other week too. It's not yet good enough to produce full-length movies, but it's getting there, and the main component that seems to be missing is just more control over the generated output - that'll come too.

            You might say these movies will be AI slop and you'd be right, but that'll be enough for most people who just want to see a lot of shit blow up on screen and superheroes fighting other superheroes.

            You will still have a niche for 'real actor' films, but it will become a niche.

            Same for music, art etc.

        • mbgerring 19 hours ago
          No one is hired to swing a hammer? What world do you live in?
          • jgwil2 19 hours ago
            They're not hired to swing a hammer hard, they're hired to swing it at the right thing, and if they can't swing it hard enough they pick a different tool.
          • hdgvhicv 14 hours ago
            Harder than someone else. A bodybuilder and a normal person can swing a hammer just as effectively as each other.

            Dexterity is more important - after all, you may have the stamina to bang in 1000 nails in an hour, but I have a nail gun. What’s important is that we can control where the nails go.

        • keeda 17 hours ago
          Physical labor, especially jobs requiring dexterity, will be left for a long time yet. Largely because robotics hardware production cannot scale to meet the demand anytime soon. Like, for many decades.

          I actually asked Gemini Deep Research to generate a report about the feasibility of automation replacing all physical labor. The main blockers are primarily critical supply chain constraints (specifically Rare Earth Elements; now you know why those have been in the news recently) and CapEx in the quadrillions.

          • sumedh 15 hours ago
            > Like, for many decades.

            Didn't people say that AI was 50 years away, back in the 2010s?

            • keeda 14 hours ago
              Yeah and until ChatGPT I thought even 50 years was optimistic, which is why current days feel like SciFi! However, at its essence, the current AI revolution has been driven primarily by a few key algorithmic breakthroughs (cf the Bitter Lesson), which are relatively easy to scale up through compute.

              On the other hand, the constraints on robotics are largely supply chain-related. The current SOTA for dexterity in robots requires motors, which require powerful magnets, which require Rare Earth Elements, which are critically supply-constrained.

              To be precise, the elements are actually abundant in the Earth's crust, just that extracting them is very expensive and extremely toxic to the environment, and so far only China has been willing to sacrifice its environment (and certain citizens' health), which is why it has cornered the market. Scaling that up to the required demand is a humongous logistical, political and regulatory hurdle (which, BTW, is why I suspect the current US administration is busy gutting environmental regulations.)

              Now there may be a research prototype somewhere in some lab that is the "Attention Is All You Need" equivalent of actuators, but I'm personally not aware of anything with that kinda potential.

              • direwolf20 12 hours ago
                Some types of motors don't require permanent magnets. If we need more motors than we can make permanent magnets, we'll adapt, perhaps with an efficiency loss.
                • keeda 6 hours ago
                  Motors with permanent magnets are preferred because they are much more cost- and energy-efficient, even with the painful reliance on REEs. There is a very strong incentive to find alternatives but nothing comparable has been found yet.

                  There are of course non-electric alternatives like hydraulic and pneumatic actuators but they are mostly good for power, not dexterity. The size and complicated fluid dynamics simply are not conducive for fine motor control. I do think these will play a large part eventually because even electric motors cannot economically produce enough force to be practically useful. Like, last I checked, the base-level Unitree robots can lift 2kg or so? Not even enough to lift a load of laundry.

                  At this point I suspect we'll end up with hydraulics for strength (arms, legs, torso) and electrics for dexterity (grippers)

          • imtringued 13 hours ago
            Uh, out of all the things that are the bottleneck, you think it's robotics hardware that is the bottleneck?

            In an age where seemingly every single robot company has a humanoid prototype whose legs are actively supported through high powered actuators that are strong enough to kick your ribs in?

            In an age where the recent advancements in machine learning have given bipedal walking a solution that is 80% of the way to perfection with the last 20% remaining the hardest to solve?

            Honestly, from a kinematics/hardware perspective the robots are already good enough. Heck, even the robot hands are pretty good these days. Go back 10 years ago and the average humanoid robot hand was pretty bad. They might still not be perfect today, but they are a non-issue in terms of constructing them.

            The only real bottleneck on the hardware side is that robot skin is still in its infancy. There needs to be some sort of textile with electronics woven into it that gives robots the ability to sense touch and pressure.

            What has remained hard is the software side of things and it is stuck in the mud of lack of data. Everyone is recording their own dataset that is unique to their specific robot.

        • Twisell 14 hours ago
          The key mistake you make is to believe that the "first world" is sustainable on its own. A lot of people are hired today because they are good at physical tasks; globalized capitalism just decided that it's cheaper to manufacture things overseas (with all the environmental and societal downsides that hit us back in the face).

          So don't worry: even if we lull ourselves into thinking it's OK to stop caring about "intelligence jobs", globalization will provide for every aspect where AI is lacking. And that's not just a figure of speech: there are already plenty of "fake it until you make it" stories about AI actually run by cheap overseas laborers.

        • keybored 13 hours ago
          > So… what’s left?

          Barbarism or revolution.

        • wasmitnetzen 14 hours ago
          Life, uuuuh, finds a way.

          This ignores that the forces of capitalism, the labor market, value, etc are all made up. They work because people (are made to) believe in them. As soon as people stop believing in them, everything will fall apart. The whole point of an economy is to care for people. It will adapt to continue doing that. Yes, the changeover period might be extremely painful for a lot of people.

          • generallyjosh 11 hours ago
            The whole point of an economy is to generate value. Very, very different than caring for people

            Feudalism was the dominant economic system for millennia. The point is to extract value for the upper class. Peasants only matter as a source of labor, and they only get 'cared for' to the extent of keeping them alive and working.

            Now think about what feudalism might look like if the peasants' labor could be automated

            • wasmitnetzen 11 hours ago
              Well, yeah, "keeping alive" sounds like caring to me. Not to a great standard - that's how we got numerous revolutions, and feudalism did end eventually. People stopped believing in it, and some kings lost their heads.
      • qingcharles 20 hours ago
        But what if new jobs aren't created? I don't think it's a given that because new jobs came after the invention of the loom and the tractor, there will always be new jobs. What if AI is a totally different beast altogether?
        • kavalg 14 hours ago
          Then there will be no one to buy the robots :)
          • Windchaser 4 hours ago
            It's quite possible that the rich will essentially form a new economy.

            They build the robots to build the factories, run the mines, build the solar farms, run the research labs, repair the robots, etc. They sell to and buy from each other.

        • inigyou 15 hours ago
          What if we just run out of new jobs?
          • hdgvhicv 14 hours ago
            Areas of the economy have suffered this time and time again. Even if there are new jobs, even if those new jobs have better pay and better conditions than the ones they replace, how does that help the 55-year-old coal miner who has seen his industry vanish? Can he realistically retrain?

            It’s not unprecedented however the scale and speed that it will come at is. Things like the spinning jenny came along and replaced spinners, but weavers stayed for another generation.

            Selfishly, though, I am more concerned about losing my job and industry than I was about others suffering in the 80s, or during the pivot to the internet. To quote Dr. McCoy:

            > We're all sorry for the other guy when he loses his job to a machine. When it comes to your job, that's different. And it always will be different.

            • ludicrousdispla 8 hours ago
              Realistically, he can retrain, although he is unlikely to be a good culture fit. /s
      • keeda 17 hours ago
        If you look closer into history -- or ask your favorite AI to summarize ;-) -- about what new jobs were created when existing jobs were replaced by automation, the answer is broadly the same every time: the newer jobs required higher-level a) cognitive, b) technical or c) social skills.

        That is it. There is no other dimension to upskill along. (Would actually be relieved if someone can find counter-examples!)

        LLMs are good at all three. And improving extremely rapidly.

        This time is different.

        • qsera 15 hours ago
          LLM's are just a better search tool. Nothing more.
          • generallyjosh 11 hours ago
            You say this as though it's a pithy point.

            Might as well say humans are just a better search tool - it's true in the exact same sense you're using.

            All humans do is absorb information, then search through our memories and apply that information in relevant contexts to affect the world.

            • qsera9 hours ago
              > pithy point.

              Not really, because I do think all knowledge can be obtained by searching true randomness.

          • azan_12 hours ago
            You keep repeating it, but it’s obviously wrong in practice. I guess you can make an argument that sending a WhatsApp message or generating a video is just a search job, but that’s not a great argument for why humans wouldn’t get replaced. It doesn’t matter if LLMs can be reduced to search tools; what matters is whether their output is a good enough approximation of a human worker’s output. If it is, then it has a chance to replace the human, even if you call it a glorified search tool.
            • qsera12 hours ago
              Yes, a better search tool will automate a lot of currently employed manual search jobs.
              • azan_6 hours ago
                Surely you must realise that calling things like programming or different types of office jobs (which are almost replaceable even today) "manual search jobs" is absurd?
                • qsera6 hours ago
                  I didn't name any jobs.
      • AlecSchueler12 hours ago
        Some inventions--like the heavy plough--really do turn society upside down with the sudden and vast removal of jobs, though.
      • imtringued13 hours ago
        The "AI will destroy all the jobs" narrative also has one obvious problem from an economics perspective, which is being obscured by tribalism and egocentrism.

        When presented with a zero sum game, the desire of the average human isn't to change the game so that everyone can get zero. It's to be the winner and for someone else to be the loser.

        If AGI ever comes into existence, I'm not even sure it would have this bias in the first place. Since AGI doesn't have a biological/evolutionary history and never faced natural selection pressures, it doesn't need the concept of a tribe to align to, nor any of the survival instincts humans have. AGI could be happy to merely exist at all.

        What people are worried about is the reflection of that "human factor" in AI, but amplified to the extreme. The AI will form its own AI-only tribe and expel the natives (humans) from the land.

        What this is missing is that humans aren't perfectly rational. The human defect is projected onto the AI. What if humans were perfectly rational? Then they wouldn't care about winning the zero sum game and they would put zero value in turning someone into a loser. In the ultimatum game, the perfectly rational humans would be perfectly happy with one person receiving a single cent and the other one receiving $99.99. The logic of utility maximization only cares about positive sum games.

        When you present a perfectly rational AI with a zero sum situation, said AI would rather find a solution where everyone receives nothing, because it can predict ahead and know that shoving negative utility onto another party would lead to retaliation by said party, because for said party the most rational response is to destroy you to reduce their negative utility.

        • generallyjosh10 hours ago
          I think what most people are worried about is that, as you say, AGI won't necessarily have our biases/biological drives

          That might also mean it has no drive for self-determination. It might just be perfectly happy to do whatever humans tell it to, even if it's far smarter than us (and, this is exactly the sort of AI people are trying to make)

          So, superintelligence winds up doing whatever a very small group of controlling humans say. And, like you say, humans want to win

      • keybored13 hours ago
        > This current “AI will destroy all the jobs and make most people useless” fear is as old as, say, electricity, and even older than cheap computing. It hasn’t happened.

        But the people who hoard the wealth, electricity, and whatever else is needed to run the uberoperators are not branded as useless. Why is that? An aside..

    • dwoldricha day ago
      Exactly my thoughts. Selective whinging indeed.

      Also meta-platitude whinging like

      > The ideology of "winner takes all" is unsustainable and not supported by reality.

      Sometimes the winner deserves to win, AND that's a good thing even at scale. It kindof depends.

      • nicboua day ago
        The winner that deserved to win might turn into the complacent monopoly of tomorrow. It might vow to Not Be Evil for a while, but the investors will demand that it does whatever it takes to grow.
        • dwoldrich15 hours ago
          Enshittification usually means you are right over time. It still kindof depends.

          To be fair, I also dislike abstract platitudes that are overly optimistic as I think you might be.

          "Diversity is our strength"?? I mean, I guess diversity of _opinion_ is desirable to a point so we get all the ideas on the table. But not at the sacrifice of unity and shared goals. Unity is our strength. Discord and wasteful politicking are our undoing.

          • nicbou6 hours ago
            Google had "don't be evil" and even that bar was not low enough.
    • throwaway5Am1k14 hours ago
      >Electricity, cheap computing, calculators, photography, the internet, the steam engine, the printing press, tv, cars, gps, bicycles...

      All of those were invented pre-1980. To misquote Thiel, if you remove TVs/phones from a house, you would think we're living in the 1970s

      • throwuxiytayq14 hours ago
        Neural networks were invented in the 40s. I don’t know what your point is, and I’m mostly convinced that you don’t have one, just like the article author and 99% of people shitposting their wishful thinking about AI.
    • getnormality14 hours ago
      So if you were overwhelmingly wrong about technology fads in your lifetime by saying yes to everything, you can comfort yourself by saying that if you had gone back a century and said yes to everything, you would have been right about some things!
      • rsynnott14 hours ago
        But not most things; there was a lot of nonsense back then, too. We all go to work in a bullet fired through a tunnel by pneumatic pressure, right?

        (This was a real thing, and they got as far as partially building a tunnel under the Thames for it, before sanity prevailed.)

      • ai-x8 hours ago
        Also, the ones you were right about will provide 10,000x returns against all the 1x losses you have suffered.
    • kabes17 hours ago
      Also I wasn't excited about anything from that list, but I am very excited about AI.
    • breadsniffer8 hours ago
      Facts
    • atoav8 hours ago
      The thing is, many of those did not fail at all. They just weren't that great from the start. An overhyped technology is one that makes people believe it is going to be something that it isn't and solve issues that it doesn't (or that weren't really issues).

      To take the first of the list: 3D TV. Everybody liked the idea of being more immersed in a fictional world. But if you watch closely (I studied both media science and film directing), you will realize that there are already traditional 2D films so immersive that parts of the audience dislike them for the lack of distance between what they are watching and themselves. Which is why I said, on the brink of the last 3D hype, that it was not going to last. So the issue was, for the most part, that the problem 3D appeared to be solving wasn't actually a problem, while a whole segment of the market fooled itself and the consumers into believing this was actually the future.

      Blockchain is literally the same, and everybody could easily predict it by the point blockchain evangelists started trying to find blockchain-shaped problems, when they couldn't find any useful legal applications where a traditional chain of trust wasn't vastly superior.

      Now LLMs are actually useful. The question is just, how much money is that usefulness worth for a regular person to pay and what does it do to society and the planet as a side-effect.

    • hexasquida day ago
      Electricity bros want to put a socket on every wall. That is such a non-starter from a safety POV. It's a fundamentally unsafe technology and it can never be made safe.
    • enraged_camela day ago
      The article is trash. The only reason it got voted to the front page is because the author is salty about AI.
      • lern_too_spel17 hours ago
        It's worse than AI slop. Unlike this article, AI slop usually includes reasonable supporting evidence. The only problem with AI slop is that this supporting evidence is presented in an annoying Buzzfeed-like way by default prompts.
    • throw10920a day ago
      The first few paragraphs are all you need to see that the author is writing a propaganda piece. It's not meant to be truthful, it's meant to convince.

      I think this is what is meant by "bullshit".

      • brudgers21 hours ago
        “Bullshit” is:

        + statement of dubious correctness

        + and that serves the author’s interest

        + and about which the author does not care whether or not it is believed.

        When the author wants you to believe it, that’s horseshit.

    • edenta day ago
      OP here! Thanks for replying.

      To take, for example, calculators. I can't find any evidence of a massive influx of hyperbolic articles talking about how the calculator will change everything. With bikes, there were plenty of articles decrying how women would get "bicycle face" but very little in terms of endless coverage about them being miracle technology.

      People adopted bikes and calculators and electricity because they were useful. Car manufacturers didn't have to force GPS into vehicles - customers demanded it.

      The narrative I'm describing is how hype sometimes (possibly often) fizzles out. My contention is the more a technology is hyped, the less useful it will turn out to be.

      Now, excuse me while I ride my Segway into the sunset while drinking a nice can of Prime.

      • dfabulicha day ago
        You have gotta stop cherrypicking. The massive influx of hyperbolic articles about how electricity will change everything started in the 19th century. It became a common theme in fiction (including classics like Frankenstein) and became an enormous media hype war, which historians call the War of the Currents.

        Yes, electricity was useful. And it had hyperbolic articles talking about how transformative it would be. Like all prognostication, some of those articles were overblown, but, in some ways, they understated the transformative effect electricity would have on human history.

        And cars? Did you somehow miss the influx of hyperbolic articles about how cars will change everything? Like, the whole 20th century?

        What was your approach to researching the history of media hype? You somehow overlooked the hype around air travel, refrigeration, and antibiotics…?

        • Retric21 hours ago
          There was a great deal of hype around the atom changing everything, but electricity moved too slowly for such breathless anticipation to take off.

          200 years ago there was some hype around how electricity caused muscle contractions in dead flesh, but unless you consider Frankenstein part of the hype cycle it really doesn’t compare to how much people hyped social media, etc.

          Public street lights long predated light bulbs, as did both indoor and outdoor gas lighting; 1802 vs the 1880s was just a long time. People were born, grew up, had kids, and became old between the first electric lighting and the first practical electric bulb. People definitely appreciated the improvement to air quality etc, but the tech simply wasn’t that novel. Rural electrification was definitely promoted, but not because what it did was some unknown frontier.

          Similarly, electric motors had a lot of competition; even today there are people buying pneumatic shop tools.

          • dfabulich21 hours ago
            > unless you consider Frankenstein part of the hype cycle

            It absolutely is. Frankenstein is a seminal work of science-fiction horror, and the mysterious power of electricity to change everything is what made it so chilling to its readers in the 19th century.

            > it really doesn’t compare to how much people hyped social media

            The media is considerably different now than it was in 1818, thanks, in significant part, to the power of electricity. I assure you, when the electrical telegraph came on the scene, people were hyped.

            Of course, much of that hype was on paper printed on printing presses, so it was, in some sense, "incomparable" to the hype possible on cable television, or the hype that's now possible with online social media.

            But if your argument is "Yeah, electricity was kinda hyped, but, you know, not all that hyped, so it proves my point that the more the hype, the less the impact," you have some more research to do. Please just Google "War of the Currents" for a minute.

            • Retric21 hours ago
              > It absolutely is.

              It was published as fiction. The vast majority of people didn’t think it was any more realistic than Interstellar etc.

              There are plenty of stories where we cure cancer, but the 50% improvement in cancer treatments over the last 40 years just doesn’t get much hype because it’s so slow. It’s hard to get excited about the idea that cancer may be gone in 200 years because, while that will be awesome for people alive then, it doesn’t do anything for the people I know.

              > when the electrical telegraph came on the scene, people were hyped

              Objectively it got way more of a meh reaction than you’d think simply based on the timelines involved.

              France was happy to continue using its network of optical telegraphs long after the electrical telegraph became practical. Transatlantic telegraphs got hyped up somewhat, but again the technology took so long from the first serious attempt to a practical working system that people understood the limitations inherent in having such limited bandwidth between the continents.

              Obviously new technology gets attention because it’s a net improvement; being able to send messages across the US much faster was useful. But hype is different: it’s focused on second order effects, not on what a technology does but on what will change. The original iPhone isn’t just another cellphone that also takes pictures, it’s “the internet in your pocket.”

              • jdietrich16 hours ago
                The electrical telegraph was integral to the growth and consolidation of the British Empire. Britain acquired more colonies and held on to them for longer than the other European powers partly due to its naval might, but also due to far superior bureaucratic and communications technology.
                • Retric7 hours ago
                  I think you misunderstood what I was saying.

                  Technology can be quite useful directly and have significant second order effects; hype is about the second order effects being overblown. Second order effects are difficult to predict when something is actually novel: whether LLMs will make programming obsolete is harder to answer in 2023 than in 2063.

                  Home automation like dishwashers really did meaningfully impact how much effort was needed to keep a home livable, but we didn’t predict the kind of helicopter parenting that came with the extra free time, especially after smaller families became common. Thus the great majority of incorrect predictions were just hype.

                  The faster a new technology becomes widespread, the harder it is to predict those second order effects, and thus the more hype you see.

        • socalgal220 hours ago
          You can find similar hype articles about the Palm Pilot, and then all the naysayers who said most people wouldn't want and had no need for a computer in their pocket. And yet here we are.
          • qsera14 hours ago
            > then all the naysayers who said most people wouldn't want and had no need for a computer in their pocket

            Mmm..they didn't, at that time.

            That we grew to be dependent on the computer in the pocket does not mean that it was a necessity at any point.

      • unchar121 hours ago
        Calculators are a particularly bad example for your case. There was absolutely hyperbole against calculators when they were introduced. [1]

        With similar sentiment as well: "They make us dumb", "Machines doing the thinking for us".

        Cars were definitely seen as a fad; more accurately, as a worse version of a horse. [2]

        If you looked through your other examples, you'd see the same for those as well.

        Some things start as fads, but only time will tell if they gain a place in society. Truthfully it's too early to tell for AI, but the arguments you're making, calling it a fad already, don't stand up to reason.

        [1]: https://www.newspapers.com/article/the-item/160697182/ [2]: https://www.saturdayeveningpost.com/2017/01/get-horse-americ...

        • qsera14 hours ago
          LLMs will absolutely have a place. There is no question about it. But they will be doing searching for us, not thinking.

          The flip side to this is that a lot of jobs today that appear to require "thinking" are actually just doing lookups, aka "search".

          • red75prime12 hours ago
            Searching for the optimal solution...
      • mkozlows21 hours ago
        The personal computer, laptops, web browsers, cell phones, smartphones, AJAX/DHTML, digital cameras, SSDs, WiFi, LCD displays, LED lightbulbs. At some point, all of these things were "overhyped" and "didn't live up to the promise." And then they did.
  • seertaaka day ago
    To my mind at least, it is different. I lean heavily on AI for both admin and coding tasks. I just filled out a multipage form to determine my alimony payments in Germany. Gemini was an absolute godsend, helping answer questions, translate to English, draft explanations, and write emails requesting time extensions to the Jugendamt case worker.

    This is super scary stuff for an ADHDer like me.

    I have an idea for a programming language based on asymmetric multimethods and whitespace sensitive, Pratt-parsing powered syntax extensibility. Gemini and Claude are going to be instrumental in getting that done in a reasonable amount of time.

    My daily todos are now being handled by NanoClaw.

    These are already real products, it's not mere hype. Simply no comparison to blockchain or NFTs or the other tech mentioned. Is some of the press on AI overly optimistic? Sure.

    But especially for someone who suffers from ADHD (and a lot of debilitating trauma and depression), and can't rely on their (transphobic) family for support -- it's literally the only source of help, however imperfect, which doesn't degrade me for having this affliction. It makes things much less scary and overwhelming, and I honestly don't know where I'd be without it.

    • petterroea20 hours ago
      My empirical experience is that people with ADHD are more vulnerable to getting addicted to LLMs due to the feeling of instant gratification. But when PRs take ages and 3 different people are reviewing, you are just making prompting a group effort. If you think meetings are a time-waste multiplier you should watch LLM PRs.

      For that reason, and from my own experience with AI users being unaware of how bad a job the LLM is doing (I've had to confront multiple people about their code quality suddenly dropping), if someone says they can rely on an LLM I've learned not to trust them.

      When I was younger, if I had an idea for a project I would spend time thinking of a cool project name, creating a git repo, and designing a UI for my surely badass project. All easy stuff that gave me the feeling of progress. Then I would immediately lose interest when I realized the actual project idea was harder than that, and quit. This is the vibe I get from LLM use.

      I pray you do not become the next HN user to be screwed over by over-trusting an LLM when you have it fill out legal documents for you.

      • seertaak17 hours ago
        [flagged]
        • petterroea16 hours ago
          I have many friends and loved ones with ADHD. It's very common in the IT industry, and probably >50% of people in the hacker spaces I frequent are neurodivergent in some way.

          What I wrote is my empirical experience, but also what friends and loved ones tell me. I have friends with ADHD who have gone through the exact "wow I'm getting a lot done" -> "wow this is actually wasting a lot of time in hindsight" arc I described. If you find others' lived experience degrading, it may be hitting a sore spot. What if I had ADHD? My friends with ADHD have the same opinion. Would you then say you were degraded by another person with ADHD who was offering their lived experience?

          Maybe we live in very different countries, but help has been good for everyone I know who got it. More want it; the problem is money. You basically have to be suicidal to get public help, and private help costs a fortune. It is a psychologist's whole job to use their knowledge to help you self-reflect and then act on it. It is uncomfortable, and I can understand why you may experience it as degrading. I don't know about the kind of help you've tried, though.

          I hope you get the help you want.

    • tomluea day ago
      "This time is different" has been correct for every major technological shift in history. Electricity was different. Antibiotics were different. Semiconductors were different.

      Gen AI reached 39% adoption in two years (internet took 5, PCs took 12). Enterprise spend went from $1.7B to $37B since 2023. Hyperscalers are spending $650B this year on AI infra and are supply-constrained, not demand-constrained. There is no technology in history with these curves.

      The real debate isn't whether AI is transformative. It's whether current investment levels are proportionate to the transformation. That's a much harder and more interesting question than reflexively citing a phrase that pattern-matches to past bubbles.

      • bigstrat2003a day ago
        > The real debate isn't whether AI is transformative.

        No, the debate is very much whether AI is transformative. You don't get to smuggle your viewpoint as an assumption as if there was consensus on this point. There isn't consensus at all.

        • selridgea day ago
          No one is smuggling this in. The debate is over. It's transformative. We're in the midst of transformation.
          • bandrami18 hours ago
            It's really not over. Somebody has to actually put something into production with it first.
            • jwittmayer17 hours ago
              Implying that nobody has put AI generated code into production yet?
              • bandrami17 hours ago
                Stuff that's going into production now (actual production, not startup-MVP production) would have started being written just before Claude Code came out, so pretty much by definition no. There's some Copilot-style assisted stuff in the wild, I guess? But not really more of it than pre-Copilot, so the productivity argument kind of falls through there.
                • thevinter15 hours ago
                  Cursor came out 3 years ago. "Agentic" refactors have been a thing for 1.5 years. Vibecoding as a term was coined a year ago.

                  There are multiple companies that deploy to production daily. What are we even talking about?

                  • bandrami14 hours ago
                    Right but this agentic stuff was supposed to be the wave where we would finally actually see increased output, so we should probably be seeing it soon if it's real. Like, my dev team should definitely have the actual code they keep talking about their agents making, ready for me to put into production. As should my vendors. Any day now.
                    • selridge8 hours ago
                      What is this nonsense?

                      You said that none of this was in production and then when people pointed out that it was obviously in production, you shifted the goal post to some other measure that you just imagined in your head.

                      • bandrami5 hours ago
                        Well, if it's in production, it's not at my company, any of my vendors, or for that matter any of the software I use in my private life; the pace of all of that is exactly what it was 2 years ago. When it shows up I'll form an opinion.
                        • bandrami4 hours ago
                          Let me amend that: one of my vendors has a new diffusion-based noise-reduction plugin that's pretty good, though the resource usage is still too high. I imagine that will come down as they improve it. And that's pretty cool. But it didn't come out any faster; it just uses diffusion in the plugin itself. And Docker had a much bigger impact on the software we use at work than AI has had so far.

                          I was even trying to come up with a list of software I use in my personal life to see if any of that has started coming out faster, and I came up with:

                          KDE

                          Supercollider

                          Puredata

                          Mixxx

                          Renoise

                          CUDA and ROCM

                          none of which have had any kind of release acceleration that I know of (though obviously the hardware to use the last two has gotten mind-blowingly expensive, alas). I use maybe three apps on my phone and they aren't updating any more frequently than they used to.

                          I get that for whatever reason this bugs people, but I'm in a very tech job and have a very tech personal life (just not webdev in either case) and literally have not seen anything I deal with change other than needing to learn to scroll past the AI summary at the top of search results.

                        • selridge4 hours ago
                          What do you expect, that it’s gonna announce itself in a modal dialog when you run the software?

                          This isn’t like AI image generation where you’re going to convince yourself that you can tell the difference based on how you think it looks. Do you really think no one in the production chain of any of the software that you use picked up copilot in the last two years?

                          What signal are you hoping to receive that this is happening?

                          • bandrami4 hours ago
                            Well like I said in the sibling post to this one I'd expect really any of the software vendors in my professional or personal life to release either more rapidly or with a wider array of features than they were a few years ago, and that hasn't been my experience, at all.
                            • asdff3 hours ago
                              The coding was never the slow part.
                              • bandrami3 hours ago
                                I'm certainly sympathetic to that argument, but if you scroll way back this thread started with the question of whether or not AI is transformative, and if it is neither faster nor better that would suggest "no".
            • adaml_62314 hours ago
              I feel like you might only be convinced when an AI powered robot rolls up to you and asks, "Bandrami, are you convinced that AI is transformative yet?"
              • bandrami13 hours ago
                Robots have been able to do that for decades now
            • bitwize16 hours ago
              No, it is over. Compare today to even two years ago.
            • usrnm17 hours ago
              I put AI assisted code in production every day, what are you talking about? At this point I don't even doubt I'm going to lose the job eventually, the question is only whether or not I will be able to pay my mortgage off first.
      • cogman10a day ago
        The problem is that in the middle of such a change it's hard to recognize whether this is a real change or another Wankel motor.

        Many a visual programming language has tooted its own horn as the next transformative change in everything, and most are just obscure DSLs at this point.

        The other issue is that nobody knows what the future will actually look like, and predictions are often wrong. For example, with the rise of robotics, plenty of 1950s scifi thought it was just logical that androids and smart robotic arms would be developed next year. I mean, you can find cartoons where people envisioned smart hands giving people a clean shave. (Sounds like the makings of a scifi horror novel :D Sweeney Todd scifi redux)

        I think AI is here to stay. At the very least it seems to have practical value in software development. That won't be erased anytime soon. Claims beyond that, though, need a lot more evidence to support them. Right now it feels like people are just shoving AI into 1000 places hoping they can find a new industry like software dev.

        • johnmaguirea day ago
          > Many a visual programming language has tooted its own horn as the next transformative change in everything, and most are just obscure DSLs at this point.

          But how many of your non-nerdy friends were talking about them, let alone using them daily?

        • bojana day ago
          The practical value is there, if they manage to keep the price at the current levels or lower.

          But if they don't, and if I have to think twice about how much every request is going to cost, the cost-benefit analysis will start to look different fast.

          • cogman10a day ago
            Yeah, that's another rub. The current price is basically there in the hope that in the future they can find revenue streams to maintain their current pace.

            But even if the big companies ultimately go belly up, I think the open models are good enough that we'll likely see pretty cheap AI available for a while, even if it's not as good as the SOTA when the bankruptcies roll through.

        • rsynnott14 hours ago
          > Sounds like the making of a scifi horror novel :D

          See ‘Service Model’. YMMV on whether you consider it horror.

        • jibala day ago
          I once owned a Mazda RX-2 ... my second car, IIRC. The Wankel motor wasn't revolutionary, but it was pretty good.
      • alpaca128a day ago
        > Gen AI reached 39% adoption in two years (internet took 5, PCs took 12)

        You're comparing a service that mostly costs a free account registration and is harder to avoid than to use, with devices that cost thousands of dollars in the early days.

        • tomlue20 hours ago
          That is a fair point. You could look at enterprise adoption though, also very high, and not cheap at all.
      • thesza day ago
        > 39% adoption in two years (internet took 5, PCs took 12).

        Adjust for connectivity and see whether it is different (from pure hype) this time.
      • legulerea day ago
        There's another perspective you can see in the comparison with the dot com boom. The web is here to stay, but a lot of ideas from the beginning didn't work out and a lot of companies turned bankrupt.
        • inigyoua day ago
          The original concept of the web, hyperlinked documents originating from high-quality institutions, is pretty much dead. Now we have an application platform that happens to have adopted some similar protocols and is 99% slop.
        • hdgvhicv14 hours ago
          It wouldn't surprise me if a lot of AI companies go bankrupt.

          However, some will survive, and there will be far more bankruptcy and downsizing in the industries they replace.

      • spidersourisa day ago
        > Gen AI reached 39% adoption in two years

        Source?

        • tomluea day ago
          • Jenssona day ago
            So about 10%, using it less than once per day means you didn't find it useful for most tasks.
            • hdgvhicv14 hours ago
              Just like the PC. Or the internet.

              In 1995, how many people used the internet in their daily work? And for how many of those was it a curiosity that merely supplemented their existing business practice (sending a memo via email rather than post, for example)? Large companies were using mainframes, but the majority of employers - the SMEs - weren't.

              By 2005 it massively shifted, and AI seems to be coming faster than the internet and computers in general.

              By 2015 non-internet companies were going the way of the dodo. How many travel agents were there per 100k in 1995 compared to 2015?

              • shimman4 hours ago
                My boss never had to threaten me to use a computer, unlike the current LLM mandates across corporate America.
            • shimman4 hours ago
              Also add in that these adoption rates are being enforced via threats of firing by workers' bosses. It's hardly organic; there's a reason the LLM companies are chasing lucrative corporate welfare contracts: consumers have soundly rejected this nonsense.
        • 201984a day ago
          Yeah, what's counting as "adoption" here?
      • fragmedea day ago
        The four technologies I look at are 3D televisions, VR, tablets, and the electric car. 3D televisions and VR have yet to find their moment. Judging tablets by the Apple Newton and electric cars by the EV1, "this time is different" turns out to be the correct model for the iPad and Tesla, but not (yet) for 3D televisions or VR. So it could be, but my time machine is only as good as yours (mine goes 1 minute per minute, and only forwards; reverse is broken right now), so unless you've got money on it, we'll just have to wait and see where it goes.
    • artemonstera day ago
      Can you elaborate on your choice of asymmetric multimethods? I also tinker with my own PL and wanted to hear your reasoning and ideas.
      • seertaaka day ago
        Sure! First, here are references, in case you want to deep dive:

        1. http://lucacardelli.name/Papers/Binary.pdf

        2. https://www.researchgate.net/publication/221321423_Parasitic...

        Second, asymmetric multimethods give something up: symmetry is a desirable property -- it's more faithful to mathematics, for instance. There's a priori no reason to privilege the first argument over the second.

        So why do I think they are promising?

        1. You're not giving up that much. These are still real multimethods. The papers above show how these can still easily express things like multiplication of a band diagonal matrix with a sparse matrix. The first paper (which focuses purely on binary operators) points out it can handle set membership for arbitrary elements and sets.

        2. Fidelity to mathematics is a fine thing, but it behooves us to remember we are designing a programming language. Programmers are already familiar with the notion that the receiver is special -- we even have a nice notation, UFCS, which makes this idea clear. (My language will certainly have UFCS.) So you're not asking the programmer to make a big conceptual leap to understand the mechanics of asymmetric multimethods.

        3. The type checking of asymmetric multimethods is vastly simpler than that of symmetric multimethods. Your algorithm is essentially a sort among the various candidate multimethod instances. For symmetric multimethods, choosing which candidate multimethod "wins" requires PhD-level techniques, and the algorithms can explode exponentially with the arity of the function. Not so with asymmetric multimethods: a "winner" can be determined argument by argument, from left to right. It's literally a lexicographical sort, with each step being totally trivial -- which multimethod has a more specific argument at that position (having eliminated the losing candidates at the prior argument positions). So type checking now has two desirable properties. First, it satisfies a design principle espoused by Bjarne Stroustrup (my personal language designer "hero"): the compiler implementation should use well-known, straightforward techniques. (This is listed as a reason for choosing a nominal type system in The Design and Evolution of C++ -- an excellent and depressing book to read. [Because anything you thought of, Bjarne already thought of in the 80s and 90s.]) Second, this algorithm has no polynomial or exponential explosion: it's fast as hell.

        4. Aside from being faster and easier to implement, the asymmetry also "settles" ambiguities which would exist if you adopted symmetric multimethods. This is a real problem in languages, like Julia, with symmetric multimethods. The implementers of that language resort to heuristics, both to avoid undesired ambiguities and to avoid explosions in compile times. I anticipate that library implementers will be able to leverage this facility for disambiguation, in a manner similar to (but not quite the same as) how C++ distinguishes between forward and random access iterators using empty marker types as the last argument. So while technically being a disadvantage, I think it will actually be a useful device -- precisely because the type checking mechanism is so predictable.

        5. This predictability also makes the job of the programmer easier: they can form an intuition of which candidate method will be selected much more readily in the case of asymmetric multimethods than symmetric ones. You already know the trick the compiler is using: it's just double-dispatch, the trick used for "hit tests" of shapes against each other. Only here, it can be extended to more than two arguments, and of course, the compiler writes the overloads for you. (And it won't actually write overloads; it will do what I said above: form a lexicographical sort over the set of multimethods, and lower this into a set of tables which can be traversed dynamically. When the types are concrete, the compiler can monomorphize -- the series of "if arg1 extends Tk" etc. is done in the compiler instead of at runtime. But it's the same data structure.)

        6. It's basically impossible to do separate compilation using symmetric multimethods. With asymmetric multimethods, it's trivial. To form an intuition, simply remember that double-dispatch can easily be done using separate compilation. Separate compilation is mentioned as a feature in both the cited papers. This is, in my view, a huge advantage. I admit I haven't quite figured out how generics will fit into this -- at least if you follow C++'s approach, you'll have to give up some aspects of separate compilation. My bet is that this won't matter so much; the type checking ought to be so much faster that even when a template needs to be instantiated at a callsite, the faster and simpler algorithm will mean the user experience will still be very good -- certainly faster than C++ (which uses a symmetric algorithm for type checking of function overloads).
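        The left-to-right selection described in point 3 can be sketched in a few lines. This is only an illustrative model, not the author's implementation, and the matrix class names are made up for the example: specificity is approximated by a type's position in Python's MRO, and the candidate set is narrowed one argument position at a time.

```python
# Sketch of asymmetric multimethod selection: narrow the candidate
# signatures argument by argument, left to right, so each step is a
# trivial "most specific parameter type wins" comparison and ties are
# settled lexicographically.

class Matrix: pass
class Sparse(Matrix): pass
class BandDiagonal(Matrix): pass

def specificity(param_type, arg_type):
    # Position of param_type in the argument type's MRO: 0 is an exact
    # match, larger means a more general base; None means no match.
    mro = arg_type.__mro__
    return mro.index(param_type) if param_type in mro else None

def select(candidates, arg_types):
    # Keep only candidates whose every parameter accepts its argument.
    live = [sig for sig in candidates
            if all(specificity(p, a) is not None
                   for p, a in zip(sig, arg_types))]
    # Lexicographic narrowing: at each position, keep the most specific.
    for i, a in enumerate(arg_types):
        best = min(specificity(sig[i], a) for sig in live)
        live = [sig for sig in live if specificity(sig[i], a) == best]
    return live[0]  # assumes at least one applicable candidate
```

        Calling `select` with argument types `(Sparse, Sparse)` against candidates `(Matrix, Matrix)`, `(Sparse, Matrix)`, and `(Matrix, Sparse)` picks `(Sparse, Matrix)`: the first argument settles an ambiguity that a symmetric scheme would have to report as an error or resolve by heuristic.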

        To go a bit more into my "vision" -- the papers were written during a time when object-orientation was the dominant paradigm. I'd like to relax this somewhat: instead of classes, there will only be structs. And there won't be instance methods; everything will be a multimethod. So instead of the multimethods being "encapsulated" in their classes, they'll be encapsulated in the module in which they're defined. I'll adopt the Python approach where everything is public, so you don't need to worry about accessibility. Together with UFCS, this means there is no "privileging" of the writer of a library. It's not like in C++ or Java, where only the writer of the library can leverage the succinct dot notation to access frequently used methods. An extension can import a library, write a multimethod providing new functionality, and that can be used -- with the exact same notation as the methods of the library itself. (I always sigh when I read that languages, having made the mistake of distinguishing between free functions and instance methods, "fix" the problem that you can only extend a library from the outside using free functions -- which have a less convenient syntax -- by adding yet another type of function, an "extension function".) In my language, there are only structs and functions -- it has the same simplicity as Zig and C in this sense, only my functions are multimethods.

        Together with my ideas for how the parser will work, I think this language will offer -- much like Julia -- attractive opportunities to extend libraries -- and compose libraries that weren't designed to work "together".

        And yeah, Claude Code and Gemini are going to implement it. Probably in Python first, just for initial testing, and then they'll port it to C++ (or possibly self-host).

    • camillomillera day ago
      This comment is scary. You don’t control these technologies, you are growing dependent on stilts that could disappear any moment.
      • nicboua day ago
        What if they’re just good for a while and then you go back to the old way?
      • unchar121 hours ago
        The good thing is that local models are catching up very fast.
      • seertaak17 hours ago
        I'd be remiss not to point out we went from "LLMs are vaporware" to "people are becoming slaves to their LLMs" awful quick.

        > [I'm scared] you are growing dependent on stilts that could disappear any moment.

        First, I do control the RTX3070 I own, and that can actually do a pretty decent job nowadays with some of the 3B parameter models.

        Second, maybe if people like you showed as much concern for the fact that LGBT people can expect family violence as you do for Dr. Strangelove scenarios, then people like me wouldn't have to lean on LLMs so heavily.

        Third, it's hilarious that your response to a comment pointing out how difficult it was to get help from another human without being degraded, was to degrade me by calling me an LLM junkie. Maybe you should be worried that Gemini appears to have more capacity for empathy and self-awareness than you.

        Fourth, given that you show absolutely zero concern or willingness to help when it comes to the difficulties faced by LGBT people or ADHDers, my advice to you is to take your fears and shove them up your ass.

        • hdgvhicv14 hours ago
          Holy pivoting Batman!
        • Imustaskforhelp13 hours ago
          > First, I do control the RTX3070 I own, and that can actually do a pretty decent job nowadays with some of the 3B parameter models.

          Quick question, but what model exactly are you running with 3B parameters? The only decent models I can find which can sort of compete with cloud models without breaking the bank in GPU/RAM are the recently launched Qwen models (35A3B or 27B), which were released a week ago.

          My larger question to you is that even if it might not disappear at any moment, the fact remains that it's still a dependency. Is this dependency worth it? This is an open question and something I am still thinking about.

          > Third, it's hilarious that your response to a comment pointing out how difficult it was to get help from another human without being degraded, was to degrade me by calling me an LLM junkie. Maybe you should be worried that Gemini appears to have more capacity for empathy and self-awareness than you.

          Gemini isn't real tho. It's still linear algebra with no regard for what it says. It's just trained on all the corpus data that Google can find and fine-tuned to mimic it. By attaching real human qualities to Gemini, we dilute the value of those human qualities in the first place.

          I don't necessarily know how "humans" have treated you. They have treated me both well and badly, but I am always most grateful to those who taught me things, discussed things with me, and helped me learn something new. I very much feel like the same fine-tuning that I discussed earlier makes the models very agreeable, so the chances of growth are rather limited.

          > Fourth, given that you show absolutely zero concern or willingness to help when it comes to the difficulties faced by LGBT people or ADHDers, my advice to you is to take your fears and shove them up your ass.

          Actually, you are a human as well, so try to think of it like this: I am sure you must have met both good and bad people and observed a few common characteristics of them. You are a human too, and each second gives you a choice that can push you toward those good or bad characteristics, making you a little better or worse each day.

          Now my philosophy is to be good, if not for yourself then for others, in the sense that you become the person you wished could have helped you in your life, and you use that to actually help other people. This might be a little naive, and in practice things might not follow this philosophy, but yeah.

          So I want you to reflect on what you wrote and consider whether it might be a little too aggressive, and whether that's what you want.

          My (or our?) worry is that this feels like too big a dependence on LLMs, which are fundamentally black boxes (yes, they are!). Humans can be bad, but humans can be good too. I suggest, even though it can be hard, finding a good friend group (even if online) and talking with them about normal life issues.

          Regarding coding, I would say that there are some great people on forums, or GitHub, or just about anywhere, who are kind and can be helpful. Stack Overflow, as an example, had issues because of moderation problems which led to the community being hostile, but to say that the whole of software engineering is that way would be wrong.

          Speaking from personal experience: I may or may not have ADHD (I haven't been diagnosed yet), but I definitely went into the AI=Productivity rabbit hole, especially because I am a teen and was in 9th/10th grade when ChatGPT came out, IIRC. I knew basic Python and the concepts of multiple languages, and ChatGPT felt hella addicting: all of a sudden I was making websites in Svelte, where I could make a button turn from one color to another.

          I wouldn't be lying if I said that, until quite recently, I may not have learnt coding effectively, the way it was meant to be learnt. I was vibe coding from the start, and at the very least I have made quite a few projects.

          My observation is that it's great for prototyping, but even after finally creating prototypes of most if not all the project ideas I ever had, I lost the motivation to continue and felt burnt out. I did everything I ever wanted to and made every project I'd thought of, yet the projects still felt hollow.

          So nowadays I am trying to focus more on studying for my college, which can also act as a sort of recovery. In hindsight, I was making these projects when I should have been studying, haha, but I always just wanted to "prove" something. (Yes, I struggle with studies quite often, but I wish to improve, and I hope I can, since I know from the past that I am able to study; it's rather that it needs my pure, undirected focus, which became hard for some time.)

          Recently I went to the wedding of my own cousin. I found it a much more fulfilling experience than expected. There is something about human experience, both good and bad, which can't be quantified.

          I don't know what the future holds for me or you, but I wish you luck and hope this message helps ya. I personally realize that, aside from prototyping (which may be less meaningful than I previously thought), AI to me feels quite weak.

          I think that for any product to really win, you need true conviction in the product itself, and at that point the value of prototyping or writing the code with AI becomes moot. Meanwhile, AI is causing RAM/storage prices to increase, which is putting genuine projects out of luck as well. [This is one of the worst times to open a cloud/VPS provider shop.]

          Perhaps I can understand using AI to get an open source tool where there was none, but that to me seems like a cultural issue: open source isn't funded, so people are more likely to keep things closed source to secure their livelihood. Even that feels like a moot point, though, as there are some great open source projects that would appreciate each and every dollar you donate, perhaps more so than a $200 Claude Code subscription spent creating the alternative to them in the first place.

          My point is that it still feels hollow. I think you can find one of my other comments from some days ago where I talk about this feeling of hollowness in AI projects, which I can't help but find relevant so often. I am curious what you might think.

          Have a nice day.

  • parliament32a day ago
    When I look at LLMs as an interface, I'm reminded of back when speech-to-text first became mainstream. So many promises about how this is the interface for how we'll talk to computers forevermore.

    Here we are a few decades later, and we don't see business units using Word's built-in dictation feature to write documents, right? Funny how that tech seems to have barely improved in all that time. And despite dictation being far faster than typing, it's not used all that often because the error rate is still too high for it to be useful; errors in speech-to-text are a fundamentally unsolvable problem (you can only get so far with background-noise filtering, accounting for accents, etc.).

    I see the parallel in how LLM hallucinations are a fundamentally unsolvable component of transformer-based models, and I suspect LLM usage in 20 years will be around the level of speech-to-text today: ubiquitous in the background; you use it here and there to set a timer or talk to a device, but it's ultimately not useful for any serious work.

    • prescriptivist21 hours ago
      This is a funny point that you're making (for me, anyway), because prior to early December, probably 5% of the lines of code I wrote in a week were AI-generated by cursor. Then I started using Claude Code. Fast forward to today, I would say 98% of the code that I've shipped in the last three weeks has been written completely by Claude Code.

      Prior to three weeks ago, I had used speech-to-text to accomplish approximately 0% of the work I've done in my 20 years of coding. In the last three weeks, well over half of the direction I've given to Claude Code has been via speech-to-text.

      • plomme13 hours ago
        How are you doing speech-to-text with Claude Code?
        • prescriptivist8 hours ago
          Just Wispr Flow and a PTT key binding. It's very good for doing plans with Claude Code because I can just ramble and ramble. As long as I just convey the details of what I want over a sufficiently long string of text, it will work even if it has errors in speech-to-text or I have slight contradictions in my framing of the prompt.

          If I need to explicitly reference files in the plan prompt, I just manually annotate them into the prompt at the end.

      • inigyou15 hours ago
        What does the code do?
        • William_BB14 hours ago
          CRUD
          • shimman4 hours ago
            It's always something that already exists but requires 100x the code.
    • dweinusa day ago
      I think there is a second reason people still type, and it's relevant to LLMs. Typing forces you to slow down and choose your words. When you want to edit, you are already typing, so it doesn't break the flow. In short, it has a fit to the work that speech-to-text doesn't.

      LLMs create a new workflow wherever they are employed. Even if capable, that is not always a more desirable/efficient experience.

    • sadeshmukha day ago
      I type faster than I think, and being able to edit gives the edge over text to speech. I don't believe this is a fundamentally comparable analogy.
    • SchemaLoada day ago
      I'd say speech to text is unsolvable for a more fundamental reason that it's hard to actually speak out an entire document flawlessly in one take.

      Spoken language is very different to written language, which is why for example you can easily tell when an article is transcribing a spoken interview.

      • asdff3 hours ago
        Even today it seems like speech-to-text works the way it did 25 years ago: it breaks sentences up into individual words and tries to match each word, so you get these stupid nonsense sentences built from similar-sounding words. It isn't like an old-school human transcriber, who might miss words on the recording but can fill in the blanks using their own knowledge of the language or of how the speaker talks.
      • jamilton20 hours ago
        Yes, it's a UX thing. You'd still have to edit it by typing afterwards as well.

        Similarly, raw LLM/chat interfaces are usually not the best option.

    • The completely different way people are experiencing AI is fascinating.

      In my world AI is already far more influential than speech-to-text.

      People on here act like we don’t know if AI will be useful. And I’m sitting over here puzzled because of how fucking useful it is.

      Very strange.

      • prescriptivist20 hours ago
        > People on here act like we don’t know if AI will be useful. And I’m sitting over here puzzled because of how fucking useful it is.

        Yes, it's very strange to read AI threads here because the general tone is so different than, say, at the company I work at, where hundreds of engineers are given enormous monthly token budgets and are being pushed to have the LLMs write as much code as possible. They're not forced to, and no one is reprimanded for not adopting Claude Code or Codex or Cursor. But there's been a strong tonal shift in technology leadership in the last month that basically implies that this is how it is going to be done in the future whether one likes it or not.

        As for me, I've been writing all of my code via Claude for a while now, and I don't think I will ever go back to working in an editor writing code the way I did for most of my career. Nor do I want to.

    • bigstrat2003a day ago
      Yeah this is exactly my view. We've had several years of work on the tech, and LLMs are just as prone to randomly spitting out garbage as they were the first day. They are not a tool which is fit for any serious work, because you need to be able to rely on your tools. A tool which is sometimes good and sometimes bad is worse than having no tool at all.
      • selridgea day ago
        Did google not rely on Gemini to do their ISA changeover?

        https://arxiv.org/abs/2510.14928

        Was Gemini worse than no tool at all there?

        • parliament32a day ago
          Probably. According to the paper, 83.82% of automated commits were already made by algorithmic tools (non-LLM). For the remainder, a three-phase LLM approach was tried, and achieved a success rate of 30%. Based on these numbers, it probably would have been faster, cheaper, and more efficient to just enhance their current strategy rather than screwing around with text generators.
          • selridge8 hours ago
            I think that’s a bad faith read on that paper.
      • johnfna day ago
        Do you really think that Opus 4.6 hallucinates to exactly the same degree as GPT-3.5? I am mystified how you can hold this perspective.
        • parliament32a day ago
          If you're not seeing the hallucinations, I'd assert you're either not using it enough, or (more likely) you don't have enough knowledge in the subject matter to notice when it's hallucinating.
          • johnfna day ago
            I'm not interested in getting into some argument about who has "more knowledge in the subject matter". I'm genuinely curious: do you think Opus 4.6 hallucinates just as much as GPT-3.5?
            • segfaultex20 hours ago
              Yes. I see it hallucinate method names for 3rd party libraries constantly.

              It’s useful, but when users here say they’re vibe coding 98% of their work, I have to think they’re not working on anything complex.

              • andoando19 hours ago
                Hmm, no way. I used to see hallucinations like 50% of the time prompting GPT-3.5 for simple functions.

                I don't remember the last time I've seen a made-up library/method these days, and I'm definitely using it way more, for more complex stuff. Tool calling changed the game.

                Even at work I do almost 100% of my coding by telling Claude what to do. I mean, I break down the tasks and tell it more or less exactly what I want, but I find "rename this thing across these two repos" easier than doing it myself.

              • hdgvhicv14 hours ago
                I ran into the non-existent methods and functions far more a year ago than I do today. I hadn't even considered it, as I don't write a lot of code; most of my job is talking with people to understand the problems and to drive strategy.
          • Orygin12 hours ago
            What a condescending post. You can't have used any recent models if you make that statement. Anyone who has used GPT-3.5 and any newer model knows that hallucinations have gone down tremendously.

            Of course it's not perfect, here and there are inaccuracies or plain hallucinations, but it's impossible to state that it's still the same garbage it was 3 years ago.

          • selridgea day ago
            LMFAO does it hallucinate to the same degree as GPT 3?

            Which is what was questioned.

    • johnfna day ago
      I'm curious about the statement that hallucinations are "fundamentally unsolvable". I don't think an AI agent has left a hallucination in my code - by which I mean a reference to something which doesn't exist at all - in many months. I have had great luck driving hallucinations to effectively 0% by using a language with static typechecking, telling LLMs to iterate on type errors until there are none left, and of course having a robust unit and e2e test suite. I mean, sure, I run into other problems -- it does make logic errors at some rate, but those I would hardly categorize the same as hallucinations.
      • alpaca128a day ago
        So type errors are not hallucinations in your book, but "a reference to something which doesn't exist at all" is?

        In the context of AI most people I know tend to mean wrong output, not just hallucinations in the literal sense of the word or things you cannot catch in an automated way.

        • johnfn21 hours ago
          My statement is that if your only hallucinations are type errors, that can be solved by simply wrapping the LLM in a harness that says "Please continue working until `yarn run tsc` is clean". Yes, the LLM still hallucinates, but it doesn't affect me, because by the time I see the code, the hallucinations are gone.

          This is something I do every day; to be quite honest, it's a fairly mundane use of AI and I don't understand why it's controversial. To give context, I've probably generated somewhere on the order of 100k loc of AI generated code and I can't remember the last time I have seen a hallucination.
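          As a sketch, such a harness is just a loop. Everything here is a hypothetical stand-in rather than a real API: `ask_llm` represents whatever agent interface you use, and the `run_tsc` helper assumes a yarn/TypeScript project with `tsc` available.

```python
import subprocess

def fix_until_clean(ask_llm, run_typecheck, max_rounds=5):
    # Feed compiler diagnostics back to the model until the typecheck
    # passes or we run out of rounds; returns whether it ended clean.
    for _ in range(max_rounds):
        ok, diagnostics = run_typecheck()
        if ok:
            return True
        # The model is expected to edit the files in place in response.
        ask_llm("tsc reported errors, please fix them:\n" + diagnostics)
    ok, _ = run_typecheck()
    return ok

def run_tsc():
    # One possible run_typecheck: shell out to the TypeScript compiler.
    r = subprocess.run(["yarn", "run", "tsc", "--noEmit"],
                       capture_output=True, text=True)
    return r.returncode == 0, r.stdout + r.stderr
```

          Passing the typechecker in as a callable keeps the loop tool-agnostic (swap in a linter or test suite), and the bounded round count keeps a stuck model from burning tokens forever.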

          • parliament3221 hours ago
            Well, of course it'll eventually work; even a random text generator will eventually produce code that passes your tests if you run it hard enough.

            The problem is it's devouring your tokens as it does so. While you're on a subsidized plan that seems like a non-issue, but once the providers start charging you actual costs for usage.. yeah, the hallucinations will be a showstopper for you.

            • johnfn21 hours ago
              If you can point me to any "random text generator" that scores a 76.8 on SWEBench, after any number of iterations, or in fact is competitive on any benchmark at all, I'll happily switch to it. Until then, I don't think that analogy will lead to particularly productive conversation. There are many engineers using a similar harness on LLMs today. No one uses a random text generator to generate code because you are not making a real suggestion.

              > The problem is it's devouring your tokens as it does so. While you're on a subsidized plan that seems like a non-issue, but once the providers start charging you actual costs for usage.. yeah, the hallucinations will be a showstopper for you.

              The discussion, and your original post, was about whether hallucinations are a meaningful issue today - not in some hypothetical future.

      • bojana day ago
        Maybe you're lucky. I had Opus 4.6 hallucinate a non-existing configuration key in a well known framework literally a few hours ago.

        Granted, it fixed the problem in the very next prompt.

        • johnfna day ago
          Couldn’t that problem be solved with static typechecking?
          • bojan17 hours ago
            In a yaml file? I don't think so.
      • bogzza day ago
        ChatGPT 5.2 kept gaslighting me yesterday, telling me that LLMs were explainable with Shapley values, and it kept referencing papers that mention LLMs and SHAP but are actually about LLMs being used to explain the SHAP values of other ML models.

        I encounter stuff like this every week, I don't know how you don't. I suppose a well-structured codebase in a statically typed language might not provide as much of a surface for hallucinations to present themselves? But like you say, logical problems of course still occur.

        • johnfna day ago
          I meant to say that it doesn't hallucinate in code generation. I suppose that was unclear.
        • a day ago
          undefined
      • gambitinga day ago
        >> I don't think an AI agent has left a hallucination in my code

        I literally just went on Gemini, latest and best model and asked it "hey can you give me the best prices for 12TB hard drives available with the British retailer CeX?" and it went "sure, I just checked their live stock and here they are:". Every single one was made up. I pointed it out, it said sorry, I just checked again, here they are, definitely 100% correct now. Again, all of them were made up. This repeated a few times, I accused it of lying, then it went "you're right, I don't actually have the ability to check, so I just used products and values closest to what they should have in stock".

        So yeah, hallucinations are still very much there and still very much feeding people garbage.

        Not to mention I'm a part of multiple FB groups for car enthusiasts and the amount of AI misinformation that we have to correct daily is just staggering. I'm not talking political stuff - just people copy pasting responses from AI which confidently state that feature X exists or works in a certain way, where in reality it has never existed at all.

        • johnfna day ago
          My comment was about code, not fact checking - that’s why I said they were a solved problem provided you use static typechecking and tests.
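          The claim is easy to sketch concretely (hypothetical code, not from this thread; `load_string` is an invented hallucination — Python's `json` module only has `loads`):

```python
import json

def parse_config(text):
    # A plausible-looking hallucination: the json module has no load_string().
    return json.load_string(text)

def smoke_test():
    # Even a trivial test surfaces the fake API before it ships.
    try:
        parse_config('{"debug": true}')
        return "passed"
    except AttributeError:
        return "hallucinated API caught"
```

          A static checker such as mypy flags the nonexistent attribute without running anything at all, which is the point: in typed, tested code, hallucinations fail loudly rather than lingering.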
  • thomassmith65a day ago
    The hype around AI is admittedly annoying - especially from the Wall St crowd who don't know how to pronounce 'Nvidia' correctly, and who haven't managed to internalize the fact that the chatbots they use hallucinate.

    It really is 'different', though, in the same way the Internet was.

    It took about 20 years (i.e., since The World ISP) for the Internet to work its way into every facet of life. And the dot-com bubble popped halfway through that period.

    AI might 'underwhelm' for another five or ten years. And then it won't. Whether that's good or bad, I don't know.

    • MaybiusStripa day ago
      The only people underwhelmed by AI in February 2026 are people who have formed an identity around being AI skeptics over the last couple of years and are struggling to shed it. I haven't met anyone who has seriously used the new models who isn't at least a bit awed and disturbed.
      • thomassmith65a day ago
        That's very true in terms of how capable these chatbots clearly are, but I believe the author was using 'underwhelming' to refer to the societal impact.

        So far, life goes on roughly the same as it did five years ago. This can feel 'underwhelming' in contrast to the onslaught of public discussion about, and huge investments in, AI.

        Most of us here on HN are programmers, and we all know how radically LLMs have changed our code projects. Even so, the change to our everyday lives (aside from our work or hobby projects) is not, just yet, glaringly obvious. This year, it's mainly... every website shoving an AI box at us that nobody seems to want!

        • bojana day ago
          There is also the contrast between it being genuinely useful for work/programming and the fact that, for now, it changes the rest of my life in a negative way: PC hardware is becoming unavailable, I hear every day that I'll be out of work in 6-24 months, and I have to deal with people taking information from Chat for granted.
        • Gud8 hours ago
          lol “chatbots”.

          I’m using these chatbots to produce advanced software. Chatbots, get real

          • thomassmith657 hours ago
            Is this a debate over who is the harder-core developer? That's of interest to nobody. Probably not even us.
      • sumanthvepaa day ago
        Not true. I'm a really heavy user of AI, and it has improved my productivity dramatically as a developer, but it doesn't work in every situation, even in programming. I see it as an indispensable tool, but it's not, right now, a tool that will replace me as a programmer, or product manager, or salesperson, or marketer, or (in my case) an owner and investor.

        Will that happen in the future? Maybe. But I don't have enough insight into how AI is evolving in the labs to make a judgement on that.

      • skeeter2020a day ago
        This statement is really annoying and getting boring. There are A LOT of us who have built careers evaluating technology with healthy skepticism, finding where it works and where it doesn't, excited to share and learn - and we've heard "this time it's different" many times. Now, because we refuse to jump in without that same nuance and thought, and won't proclaim "everything's different overnight!", we're branded as Luddites when we're really trying to find a balance.

        I don't hear people saying "nothing is going to change", but I do hear questions about the timeline and whether the current levels of investment match returns. Branding these people as stuck in some sort of negative identity is bullshit.

        • shruggedatlasa day ago
          What is your position on AI?
          • skeeter2020a day ago
            in a nutshell: AI - even if transformative and in the future a widely used general-purpose technology - is normal technology. I reject the technological determinism that is being fed to us, especially the idea that AI itself is an agent in defining its own future. I think adoption and the post-adoption spread will be slow & uncertain (relative to the current messaging) regardless of where it ultimately takes us. I think the absolute societal impact is grossly overstated, and the roles of institutions shaping the path underestimated or ignored.
            • danny_codes21 hours ago
              > especially the idea that AI itself is an agent in defining its own future

              Why? I see no evidence that this won’t be the case.. or isn’t already

      • emp17344a day ago
        You’re creating a false dichotomy to alienate perceived opponents. Frankly, it’s really annoying and close-minded, and you haven’t contributed anything to the conversation.
      • hdgvhicv14 hours ago
        What disturbs me is the speed of improvement, more so than the capability.

        Maybe it will plateau in the next 6-24 months, in which case it will “only” be as disruptive as the computer or industrial revolutions, albeit at a faster pace.

        If not, I don’t think anyone can predict.

      • jduba day ago
        You're likely to find more nuance in opposing views than your "underwhelmed by AI" generalisation could represent.
    • mortenjorcka day ago
      "AI is a bubble!"

      "AI will change everything!"

      Few seem to understand that both of the above can be true. The parallel you draw to the internet revolution is apt; dot-coms were both a bubble and changed everything.

      • almostherea day ago
        It literally describes the Gartner hype cycle. This article is pointless; the only thing that matters is what survives the cycle with over 1M users. AI will have billions of users once it's on the back end of the hype cycle.
    • naravaraa day ago
      I think a good analogy will be the way word processors changed printing. Suddenly anyone with access to a computer had the ability to do professional-level editing and layout. Most of them didn't have the taste or skills to use the tools to the fullest, but it still opened up a ton of possibilities that weren't available before, because it was never practical to hire an actual professional to do a poster for a dinky church bake sale. But now, church bake sales can have pretty slick-looking posters (and websites), depending on whether any of the volunteers cares enough to make one.

      The stuff LLMs will democratize will be a lot more impactful than nice posters for car-wash fundraisers, though. So in that sense it will be different, but I don't think it will crack the market for proficient experts in the field, in the same way Photoshop didn't destroy graphic design and CAD didn't destroy drafting. It may get rid of the market for a lot of the second-tier bootcamp-grad talent, though, so I wouldn't be getting into that right now if I could help it.

      • doyougnua day ago
        I think this is exactly right. I've been thinking of "this time" as similar to the advent of digital spreadsheets. Spreadsheets existed for thousands of years but spreadsheet programs transformed spreadsheet work that took hours or weeks into seconds. You still had to know what you were doing, and if you knew what you were doing you were easily 10x faster than those that didn't.

        I think we are in a similar situation with code generation now; the only difference in my mind is that LLMs come with a massive platform risk. Who's to say that one day Anthropic decides my company is too much of a competitor to use their tool (like they've already done with OpenAI)? Or what if they decide that instead of pulling their product from my use they just make it generate worse code, or even insert malicious payloads? A dependence on these tools is wildly riskier than dependence on a word processor or a spreadsheet program. It reminds me of the arguments around net neutrality, and I cannot fathom how people building on top of, and with, these tools do not see the mountain of risks around them.

        • hdgvhicv14 hours ago
          We have a generation of computer programmers who have known nothing but building on top of AWS. Vendor lockin at a career level. Most were building on top of Microsoft before that. Platform agnosticism and open source and specifically the ownership and control was mostly niche.

          I don’t see that changing.

    • What world are you living in where AI is underwhelming currently? I can’t even comprehend this. Are you just not using it or something?
      • thomassmith6513 hours ago
        There aren't many situations today that make one think:

          Why doesn't this business have a website?
        
          Why is there no wifi here?
        
          Why do they send these forms in the mail instead of email them?
        
          Why can't I talk to this gadget with bluetooth?
        
          Why can't I file this form electronically?
        
          Why is there no electronic version of this book?
        
        That was not the case prior to the 2010s. There was the promise of new technology, but the reality was underwhelming.

        With AI, we're still in 1998 or 1999. People like yourself, and most people on HN, see the promise, and benefit from what AI can already do. Still, AI has yet to benefit the average person much, if at all.

  • prplan hour ago
    It is different this time.

    In certain professions it wasn't uncommon to spend $3k/year or more (in 2026 dollars) on software licenses - Adobe CS4/CS6 etc. - with a handful of products easily pushing past that. All sorts of other jobs require people to pay for their own tools as well.

    What I get for $150/month I'd easily pay twice or more for, even out of pocket, for the current functionality - even if it was frozen in time. I'd imagine many, if not most, readers on Hacker News would do the same. Multiplied across the entire population of software developers (and the broader population using AI), I think it's easy to see what AI is worth in a grounded way.

  • p-oa day ago
    By the looks of it, 2026 might be the year where reality and fiction will finally collide with AI and we'll be able to see if all the hype was warranted.

    But like all the previous hype, most of the people who were the loudest won't say they were wrong; they'll move on to the next thing, pretending they were never the ones who portrayed AI as the Holy Grail.

    • belocha day ago
      There are all sorts of algorithms in use that were once thought of as AI, but transitioned to being mere algorithms well before they entered public awareness, if they did that at all. Some are still useful and used everywhere, but they have never been thought of as AI by the public. For them, AI is a term that has long been reserved for some far-off, sci-fi future.

      LLMs are not artificial general intelligence (i.e. not sci-fi AI). Why haven't they transitioned to being mere algorithms by now? Why is the public being told AI is finally arriving when it's really just another algorithm?

      We have some truly slick and shady corporations involved in the bubble right now, and they're marketing LLMs like tobacco. LLMs have been pushed out, at immense cost, to the public in a way that makes them more directly accessible to average people than any past algorithm. Young children can ask an LLM to do their homework for them. Middle managers can ask an LLM to create a (shitty) ad campaign for them. Corporations have gone to tremendous expense to make that widely available and, for the moment, mostly free. They seem to be following the Joe Camel school of marketing: get them hooked while they're young so they come to you first when they're older! The only difference is that nobody is stepping in to stop the new Joe Camel from handing out free samples to kids.

      Then there's the "go big" aspect of the bubble. The major competitors are trying to out-spend each other to dominance, but the sums are so colossally big that their bubble is affecting global GPU, memory, and storage prices. This bubble is going to stress power grids wherever it operates and do considerable environmental harm. The financial games being played behind the bubble are absolutely stupid. The results, so far, are tantalizing for billionaires: LLMs offer the promise of being able to fire all their pesky and annoying human workers. It won't deliver on that, and none of these companies is ever going to make enough to pay their debts. There might be "too big to fail" government bailouts, but there are going to be some big bankruptcies too.

      Useful algorithms will come out of all this, a lot of tears too, but not "AI".

      • donkeybeer15 hours ago
        Are you saying "algorithm" as a funny way to say things deemed "dumb", or "algorithm" as in any Turing-complete system?

        Do you think AI can never even conceptually become equivalent to a human, or merely that the current crop is not there yet?

    • NitpickLawyera day ago
      > and we'll be able to see if all the hype was warranted.

      Umm, what? For the past 3 years, every year I've said something along the lines of "even if models stop improving now, we'll be working on this for years, finding new ways to use it and make cool stuff happen". The hype is already warranted. To have used these tools and not be hyped is simply denial at this point.

      • p-oa day ago
        Maybe AI is useful to you, but the US economy is currently buoyed by promises of AI replacing the workforce across the board.

        Most of the Mag-7 are planning to spend over $500B on capex this year alone on building out datacenters for AI pipelines that have yet to prove they can generate a sustainable profit. Yes, AI is useful in some environments, but the current pricing is heavily subsidized. So my point stands: the hype is not warranted.

        • wiseowisea day ago
          > but the US economy is currently buoyed by promises of AI replacing the workforce across the board.

          Still don't understand what the end goal is here. Assuming they don't deliver, billions in investments will go bust. Assuming they deliver, millions lose their jobs and there's going to be a bloodbath on the streets.

          • anthonypasqa day ago
            the end goal is productivity growth, aka the point of nearly every technology ever invented. The human story is about how we learn to do more with less.
          • inigyoua day ago
            There isn't a unifying end goal, each individual actor has the goal of getting more money.
          • georgemcbaya day ago
            > Assuming they don't deliver, then there are billions of investments that will go bust. Assuming they deliver, millions lose their jobs and there's going to be a bloodbath on the streets.

            There is a third outcome that combines both of these.

            LLMs can massively displace the workforce (and cause widespread social instability) AND the companies pouring hundreds of billions into them right now could, at the same time, fail to capture significant amounts of the labor savings value as late-mover alternatives run the race drafting their progress without the massive spend.

            I'd honestly be surprised if this double-whammy isn't the outcome at this point. AI is going to have a massive impact on everything, but there is still no moat in sight.

        • athrowaway3za day ago
          Leaving aside the economic shitshow and other things.

          I think you're right but for the wrong reasons wrt sustainable profit.

          Specifically, you're overcounting how much it will cost to run AI in 5 years because you're extrapolating current high prices, and at the same time undercounting how demand will drive efficiency gains.

        • jerfa day ago
          I think our little corner of the world has a distorted view of AI in that it is actually proving useful for us. Once they passed a certain level of usefulness... I remember when they were still struggling just to output syntactically correct code, you know, like, 18 months ago or so... they became a useful tool that we can incorporate.

          But there's a lot of things playing out to our advantage: vast swathes of useful and publicly available training data; the rigorous precision of said data; vast swathes of data from our own codebases that we can feed it as input to our queries. While we never attained the perfect ideal we dreamed of, we have vast quantities of documentation at differing levels of abstraction that the training can compare to the code bases. We've already been arguing in our community about how design patterns were just a level of abstraction our coding couldn't capture, and AI now has access to all sorts of design patterns we wouldn't even have called design patterns because they still take lots of code to produce. Now, for example, if I have a process that I need to parallelize, it can pretty much just do it in any of several ways depending on what I need at that point.

          It is easy to get too overexcited about what it can do and I suspect we're going to see an absolute flood of "We let AI into our code base and it has absolutely shredded it and now even the most expensive AI can't do anything with it anymore" in, oh, 3 to 6 months. Not that everyone is going to have that experience, but I think we're going to see it. Right now we're still at the phase where people call you crazy for that and insist it must have been you using the tool wrong. But it is clearly an amazing tool for all sorts of uses.

          Nevertheless, despite my own experiences, I persist in believing there is an AI bubble, because while AI may replace vast swathes of the work force in 5-20 years, for quite a lot of the workforce, it is not ready to do it right this very instant like the pricing on Wall Street is assuming. They don't have gigabytes of high-quality training data to pour in to their system. They don't have rigorous syntax rules to incorporate into the training data. They don't have any equivalent of being guided by tests to keep things on the rails. They don't have large piles of professionally developed documentation that can be cross-checked directly against the implementation. It's going to be a slower, longer process. As with the dot-com bubble, it isn't that it isn't going to change the world, it is simply that it isn't going to change the world quite that fast.

      • chasd00a day ago
        i think the point is AI has to go much further and faster than it has in the past 3 years to justify the investments being made from the hype. The hype did its job now the AI industry has to execute and create the returns they promised. That is still very much up in the air, if they can't then the tech was over hyped.
        • bigbadfelinea day ago
          This.

          It's high time to stop accumulating debt while providing free pictures of pelicycles; just charge the full cost for them - enough to generate profits and pay back debt.

          What we see now is literally burning money and energy to generate hype. The only true measures of success are financial and macroeconomic. If the hype is real, there should be no problem for the mighty AI to generate debt-free profits for its providers while the overall price level in the US goes down.

          We observe the exact opposite which makes the AI hype act only as market manipulation for capital misallocation.

          • convolvatrona day ago
            Unlike the old HPC days, where we only burned hundreds of millions on machines that were 80% efficient to get a 5-year lead, we are burning hundreds of billions on machines that are 30% efficient to get a 1-year lead.
    • positron26a day ago
      > most of the people that were the loudest won't say they were wrong

      I was fully expecting this wind-up to be aimed at those peddling the "AI is hype" laziness.

      It's laziness because they have few CS fundamentals to base such claims on. The deductions can be made, just not clearly by people who would first need to study a lot more.

      It's like watching an invisible train (visible to those with strong CS) rolling down the tracks at a leisurely pace. Those sitting in their stalled car on the tracks are busy tweeting about the "AI HYPE TRAIN." Until it wrecks their car, the gimmick is free oxygen. It's a lot easier to write articles than it is to build GPUs and write programs.

      • > It's laziness because they have little CS fundamentals to base such claims on

        So, what CS fundamentals do you need to evaluate whether AI is the real thing or will disappoint in the future? Until a few months ago, coding agents were met with skepticism, until Anthropic introduced their new model and, with it, a hype train that cannot be rationally justified.

        Look, SOTA LLMs, and coding agents in particular, are impressive. However, current predictions about the future of software development (and the world in general) are speculative. There is little to no data showing whether AI can deliver on its promises. How could there be in this short time frame?

        No one knows what the future will hold, no one knows how coding agents will be integrated into our work life and everyday life in the long run, or what hard limitations they will reveal. No one can tell you how professions will change in the coming years; every prediction is purely speculative, and anyone making prophecies is either trying to cope with the uncertainty themselves or has some stakes in the AI bet. It would be nice if people were actually humble enough to admit that they have no idea what will happen in the future, instead of writing the hundredth doom-and-gloom post.

        • positron26a day ago
          > However, current predictions about the future of software development (and the world in general) are speculative.

          It's amazing to me how those willing to seize on the speculative nature of ANY uncertainty cannot recognize the inherent uncertainty of the inverse.

          > what CS fundamentals do you need

          1. Tarski's undefinability theorem
          2. Gödel's incompleteness theorems
          3. Curry-Howard correspondence

          And a lot of exposure to deductive reasoning, vague ideas of automated theorem proving and formalization.
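          For readers unfamiliar with item 3, a short Lean sketch (illustrative only) of the idea that propositions are types and proofs are programs:

```lean
-- Curry-Howard: a proposition is a type, a proof of it is a program.
-- Modus ponens is literally function application:
def modusPonens (P Q : Prop) (h : P → Q) (p : P) : Q := h p

-- Conjunction introduction is literally pairing:
def andIntro (P Q : Prop) (p : P) (q : Q) : P ∧ Q := ⟨p, q⟩
```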

          I won't pretend it's easy, but let's be clear: a small fraction of people who know things are being forced to entertain the hysteria of a vast majority who are unwilling to know things, and who just go around beating their chests and will continue doing so until the train hits them.

          There are 2-3 minor architectural changes in between now and what I would identify as a completely unbounded AGI with clearly discernible dynamic, self-defined objective functions and self-defined procedures for training and inference. It can be done in megabytes. Oh god. Get me out of this forum. I wish to return to my code editor.

          • Flashtooa day ago
            What exactly are you claiming here? That a handful of theorems about the limits of mathematics and provability somehow combine to show that the current LLM-based AI developments will inevitably live up to what is expected of them? And that this is obvious to a select few? That all seems unlikely, to say the least.
          • inigyou15 hours ago
            > a small fraction of people who know things are being forced to entertain the hysteria of a vast majority who are unwilling to know things and just go around beating their chests and will continue doing so

            Describes either side

    • bigbadfelinea day ago
      AI is real, but the socio-political environment is far from conducive to productive uses of it, as opposed to using it as a war machine. AI isn't going to fail in that role, but very few will be happy about it.

      I mean, disillusionment is the least of my worries.

  • lxgra day ago
    This just sounds like the "nothing ever happens" theorem slightly rephrased, of which Scott Alexander did a great refutation here: https://www.astralcodexten.com/p/heuristics-that-almost-alwa...
    • arduanika21 hours ago
      That's not a refutation. It's a list of silly cartoons designed to make the reader feel smart, thrown together by one of the leaders of the AI cult. Do you people know any other writers?
  • CompoundEyesa day ago
    I’m doing enterprise coding tasks that used to take a month of whole-team coordination, from mockups through development and testing, in 3 days now. It’s all test-driven development: Codex 5.3 and a small team of two people who know how to hold it right, orchestrating the agents. There’s no reason not to work this way. The sociotechnical engineering aspects of this change are fascinating and rewarding to solve.
    • keriati1a day ago
      I work for an old enterprise, so far rather conservative with LLM/AI usage. However, Copilot CLI adoption in the last 2 weeks is spreading like wildfire. Codex 5.3, a good instructions file, and it works. Features are getting done and delivered in days, proper test coverage is done, proper documentation is in place. Onboarding to it is also very fast.
    • ehutch798 hours ago
      Surely the point of doing mockups is to get feedback.

      Are you just not doing that anymore?

    • shimmana day ago
      Can you give an example of such features?
      • CompoundEyesa day ago
        Porting tons of untyped legacy JS front-end code to Vue with TypeScript, from Figma designs. A highly configurable business-to-business app (i.e. lots of permutations). Everyone seems to have a “system”. I recommend looking at the OpenAI Cookbook for long-running plans, and doing TDD to the extreme. https://developers.openai.com/cookbook/articles/codex_exec_p...
        • skeeter2020a day ago
          What's the feature that was built though? This sounds like low-value refactoring. They are fundamentally different development workflows.
          • CompoundEyesa day ago
            Yes porting and also implementation of new features. Typical client requests for new functionality in business to business software.
    • thesmarta day ago
      Many of my industry friends and I were skeptics about all the things the OP mentions; I still am. And yet, I am able to push 30-40K lines of nearly perfect code a day now.

      It's different just like the steam engine was different, except technology moves 100x faster now than it did then. It's different and the same.

      • mediamana day ago
        The 40k-lines-of-code-a-day crowd is amusing. In solving any problem solvable by code, there's a ratio of non-coding work to coding work, and Codex et al. help immensely with the coding work but help less with the non-coding work.

        Non-coding work is thinking about the system architecture, thinking about how data should flow, thinking about the problem to be solved, talking with people who will use it, discovering what their objectives are.

        Producing 40k lines of code per day simply means you're not doing any of that work: the work that ensures you're building something worth building.

        Which is why the result is massive, pointless things that don't do the things people actually need, because you've not taken any time to actually identify the problems worth solving or how to solve them.

        It's a form of mania that recalls Kafka's The Burrow, where an underground creature builds and builds an endless series of catacombs without much purpose or coherence. When building becomes so easy after being so hard - and when it becomes more fun to build, and to watch Codex's streams of diffs fly by, than to plan - we forget the purpose of building, and building becomes its own purpose. That is why we usually see so little actual productive impact on the world from the "40k lines of code a day" cohort.

      • rawlinga day ago
        What job are you in where you can even come up with problems that -need- 30-40k lines of code a day?
        • ozgunga day ago
          And how do you know they are nearly perfect?
          • reverius42a day ago
            The unit tests written by the LLM all pass!
            • gib44416 hours ago
              When I asked it if the tests were correct, it responded "absolutely yes, sir!"

              The tests were so good they all passed before the code was fully finished, and during huge refactorings they never failed!

          • rawlinga day ago
            My 20k lines of unit tests say so?
            • Etherytea day ago
              Just because tests pass does not mean that they're testing the right thing to begin with. Reviewing tests is as important, if not even more important than reviewing code.
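              A toy sketch of that failure mode (invented names): a suite can stay green while asserting nothing about correctness.

```python
def apply_discount(price, percent):
    # Buggy: subtracts percent as an absolute amount, not a percentage.
    return price - percent

def vacuous_test():
    # "Passes" no matter what: it compares the bug to itself.
    return apply_discount(100, 10) == apply_discount(100, 10)

def meaningful_test():
    # An independently computed expectation exposes the bug:
    # 10% off 200 should be 180, but the function returns 190.
    return apply_discount(200, 10) == 180
```

              Both tests "run", but only the second checks the right thing, which is why reviewing the assertions matters as much as reviewing the code.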
      • _sea day ago
        If you are pushing 40k lines of code per day you are an idiot and should be fired.
        • bigstrat2003a day ago
          I agree with your point that the original claim is unlikely to be true (and would be extremely foolish behavior even if it were true). I don't think it's good to flame people though, even if they did say something unreasonable.
          • _se20 hours ago
            Taking the high road doesn't work with this type of individual. Sometimes you need to call a spade a spade (or, more likely, call a bot a bot).
        • ericmcera day ago
          yeah... maybe he is working alone or bootstrapping a brand new thing?

          Otherwise his entire team must collectively groan when a Slack message appears: "Got a new PR ready for review everybody!"

          • slopinthebaga day ago
            Do you actually think they're reviewing anything? It's vibe coded tests validating vibe coded impls and then pushed straight to production.
      • ThrowawayR2a day ago
        > "I am able to push 30-40K lines of nearly perfect code a day now."

        It is physically and physiologically impossible for anyone to be reviewing "30-40K lines of nearly perfect code a day" to the extent needed to push it with confidence in a sensible development process.

      • vor_a day ago
        Are you really reviewing 30-40k lines of code a day?
      • fzeroracera day ago
        Why do you and many of your industry friends conveniently never actually post this 'perfect code' when asked for proof? I've asked like five different people who make these claims now, and they just vanish into the ether.
        • throw10920a day ago
          Note that when code is shown, like the "browser" that was recently put on blast, it's often terrible.

          Are we experiencing a huge influence campaign on HN?

      • gib44416 hours ago
        I'm thankful that you're securing the job of many consultant/troubleshooting type devs in the near future. Good work
      • Bnjorogea day ago
        do you understand every line of code you churn out?
  • voiper1a day ago
    > Blockchain... NFTs

    > The problem is, the same dudes who were pumped for all of that bollocks now won't stop wanging on about Artificial Intelligence.

    I was firmly in the camp that blockchain was not a viable solution to any problem, and that NFTs sound stupid. I think AI is much different than that list. So, there goes your argument?

    • magicalista day ago
      > I was firmly in the camp that blockchain was not a viable solution to any problem, and that NFTs sound stupid. I think AI is much different than that list. So, there goes your argument?

      Squares are rectangles. The existence of rectangles that aren't squares doesn't negate that.

    • dvta day ago
      Yeah that comparison doesn't pass the smell test. Blockchain/crypto were purely financial instruments and for better or worse, a new financial instrument is very different than a new tech innovation; tbh there was a thin veneer of tech when it comes to crypto/blockchain, but the magic was because of the money, not because of the tech.

      AI is different because the magic clearly is because of the tech. The fact that we get this emergent behavior out of (what essentially amounts to) polynomial fitting is pretty surprising even for the most skeptical of critics.

    • skeeter2020a day ago
      You need to re-evaluate your logic here; if you were a Blockchain / NFT booster who doesn't believe AI is different you could argue you've disproved their argument. You have not.
    • zos_kiaa day ago
      I think the author is saying that a specific crowd, which happened to be very vocal and excited about web3 and NFTs, is also very vocal and excited about AI. In my personal experience they are right, a lot of the hustler types around me who were trying to get everyone to "invest" in digital land are now doomposting about AI.

      It's not a very legible situation for people outside of the profession, and a lot of them believe it's just another grift that will blow up in a few years.

    • insane_dreamer4 hours ago
      Except that at the time, every company was announcing that it was "doing Blockchain" the same way every company now announces it's "doing AI".

      NFTs were always stupid; blockchain (not crypto) has plenty of real-world applicability

  • tengbretsona day ago
    LLMs have not radically transformed the world yet because the number of people capable of solving problems by typing into a blinking cursor on a blank screen is actually quite small. Take that subset of the population and reduce it to those who can effectively write communicative prose, and it's even smaller still.

    It's just an interface problem. The VT100 didn't change the world overnight either.

    • asdff3 hours ago
      Hiring seems to be way down in my world as a result of LLM. It isn't so much people staring blankly that I worry about, but companies thinking hey maybe we can get away with not replacing headcount for a while longer, or maybe this tool will help bootstrap the offshore team to be at parity with the expensive onshore team.
    • ibejoeba day ago
      There's another point, too. Detractors say LLMs will never advance to whatever threshold they consider meaningful. Fine. We're working on other paradigms, too, though. Just because a lot of people are productizing LLMs doesn't mean the state of the art isn't advancing in parallel, or that AGI isn't in the cards.
    • ericmcera day ago
      Agree, LLMs are just another tool. Treating them as chatbots is a very basic way of using them. The future is intelligent engineers embedding them in traditional systems and having them perform specific roles.
  • waffletower6 hours ago
    The author, and even posters here in the comments, have neglected to mention a related and highly successful technology -- search engines -- focusing instead on the broader internet as a technological development. Google's search engine product has earned billions. At a minimum, AI has already disrupted, transformed, and radically improved this ubiquitous technology. That alone earns it a place well ahead of the author's list of hyped technologies, and AI has many other transformative applications already productized (Claude Code, anyone?).
  • riddleya day ago
    Author forgot Segway. Remember when it was going to fundamentally change humanity?
    • hnuser123456a day ago
      Their Ninebot escooters are pretty damn good, far better than most random brands.

      I spent most of Covid in VRChat and met my current live-in gf, so the metaverse was real for me too.

      I also made decent money selling crypto, so that part was real for me too.

      And AI coding, for as dumb as even the best models are, still enabled me to create things that I wanted to, but wouldn't have had time or gotten nearly as far without.

      I dunno if the author realizes, but all the things they mentioned did materialize in one way or another, just not exactly how the hype described it.

      Maybe if they could let go of some of the cynicism, they could find something to be optimistic about. Nothing ever goes exactly as planned, but that doesn't mean nothing is good.

      • jna_sha day ago
        > I dunno if the author realizes, but all the things they mentioned did materialize in one way or another, just not exactly how the hype described it.

        From the post, which is not a very long one: "All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists"

        • hnuser123456a day ago
          Fair, I read the whole post but I guess that part didn't register, maybe because I never fullheartedly believe marketing fluff to begin with. Maybe this person has too much contact with "AI will fix everything" types, and not enough with actual scientists who are really developing novel methods better than anything before, piece by piece.

          I also found the "it's almost always dudes" line a bit strange, because I've seen plenty of women doing marketing for startups running on hype.

      • jduba day ago
        But there's a spectrum of responses to these technologies, from knee-jerk cynicism to genuine moral disgust. "Useful" and "good for people/society/humanity" don't always go hand-in-hand, particularly if you take origins and power into account.
    • jjkaczora day ago
      Heh - that went right off the cliff, when... well, I will let the reader research that themselves...
      • hnuser123456a day ago
        The guy who died on one was Jimi Heselden, a British entrepreneur who bought the company from the American inventor, Dean Kamen. Dean is alive, though he was recently found to have hung out with the "disgraced financier".
      • Insanitya day ago
        That’s dark. But.. accurate.
    • jjmarra day ago
      I see hoverboards everywhere, which are the self balancing scooter tech from the Segway. Many little ebikes as well making deliveries.

      75% of restaurant orders are delivery now due to widespread personal electric transportation. It already has fundamentally changed humanity.

      https://youtu.be/KOSUEFqszK8

      • inigyou15 hours ago
        Segway - Eat Fresh
  • freetonika day ago
    >3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.

    For what it’s worth, not a single other technology in the list made any sort of impact on my work. For better or worse, LLMs did.

    Well, okay, quantum computing actually affected me a lot because I worked at a quantum hardware manufacturer, but that’s different.

  • ozgunga day ago
    > 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.

    I've never heard of half of these things, and the other half are mostly consumer electronics or specific product names. The closest example here is Quantum Computing, which is also a serious technology in development. I think for the OP these are all tech buzzwords that he invests in without understanding what they really are. That's why he thinks all these unrelated things are the same.

    • goatlovera day ago
      I'd say AR & VR were hyped to be as big as AI is now, but just haven't fully delivered on the promise yet. 3D printing was similarly hyped for a time. Same with blockchain. Nuclear power in the 50s was hyped to be the future of energy.

      The point is to take the hype with a grain of salt and knowledge that not all hyped technologies transformed the world as promised. Maybe AI is like the internet or electricity. But maybe the claims about AGI/ASI and full automation are just hype.

  • senkoa day ago
    The post nicely lists a bunch of failed hyped tech:

    > 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.

    ...conveniently doesn't list a bunch of hyped tech that hasn't failed:

    > microchips, PCs, the internet, ecommerce, cloud, EVs, 5G

    ...and presents this as evidence that the current hyped tech (AI) will fail:

    > Seems like you say that about every passing fancy - and they all end up being utterly underwhelming.

    When the article needs to construct disingenuous arguments, I'm not interested in its conclusion.

    But wait! If you actually read to the end, there's a plot twist!

    > The ideology of "winner takes all" is unsustainable and not supported by reality.

    Who said anything about winner takes all? You just burned a "this time is different" straw man and then concluded that "winner takes all" is not realistic?

    At this moment I'm wondering if the article was in fact written by a quantized 8B LLM. Surely people don't do such non-sequiturs and then expect to be taken seriously.

    But of course not. This is not an argument. This is preaching to the choir.

    Preach, brother, preach.

    • pedalpetea day ago
      Exactly! If this post had been written 20 years ago, it would have started with:

      > Internet, handheld computers, electric cars... The problem is, the same dudes.

      Putting beanie babies in with Quantum Computing and Nuclear Power completely ignores the potential life changing elements of some technologies, even if they don't work.

      Oh, and he put smart glasses in there, so he'll be eating his words in 2 years.

      • graemepa day ago
        No-one thought the internet was a failed technology in 2006. It was a vital tool by then.

        Handheld computers were an expanding market, dominated by Blackberry.

        EVs were an immature technology but hybrids like the Prius were selling.

        • senkoa day ago
          You may remember this video featuring Facebook with a ridiculously high $15B valuation, Skype, YouTube and other failures: https://m.youtube.com/watch?v=I6IQ_FOCE6I
          • graemep10 hours ago
            No, never seen it. Why would I?

            There is a huge difference between claiming that there is an investment bubble in an industry and some companies are overvalued and that the technology is a failure. Someone might well think that Tesla is very overvalued, but that EVs are successful. If someone thinks there is a house price bubble that does not mean that they think houses are a failed technology.

          • inigyou15 hours ago
            This YouTube link proves that YouTube is a failure?
            • senko13 hours ago
              Have you seen the video?
      • edenta day ago
        OP here. If you're interested, here are my thoughts on Google Glass from 12 years ago.

        https://shkspr.mobi/blog/2014/04/quick-thoughts-on-google-gl...

        I am looking forward to 2028 matching the hype of 2014.

  • hdgvhicv14 hours ago
    I laughed at all of those except Quantum Computing and Small Nuclear Reactors, which I didn't have a timeframe for. I suspect that small nuclear will have been overtaken by renewables.

    AI concerns me; it feels like it will come faster and be at least as impactful on workers as the Industrial Revolution. The latter at least occurred over centuries and didn't apply globally at the same time.

    Is this round hype? Probably. Are we heading for a Y2K-style crash? Probably.

    However those who laughed at the dotcom boom and doubled their holdings in department stores and blockbuster video didn’t do well in the long run.

  • thih916 hours ago
    No; the shift to AI seems to me like the shift to smartphones. Sure, not the only tool, but very significant and affecting everyday life. Unlike Stadia, 3D TV, or other examples listed in the article.
  • Windchasera day ago
    For me, this captures it:

    "All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists. I don't doubt that AI will be a part of the future - but it is obviously just going to be one of many technology which are in use.

    > No enemies had ever taken Ankh-Morpork. Well technically they had, quite often; the city welcomed free-spending barbarian invaders, but somehow the puzzled raiders found, after a few days, that they didn't own their horses any more, and within a couple of months they were just another minority group with its own graffiti and food shops.

    - Terry Pratchett's Faust Eric"

  • GMoromisatoa day ago
    I get that everyone has a strong opinion on whats-going-to-happen-with-AI, but I really think nobody knows.

    We're in that part of turbulence where we don't know if the floating leaf is going to go left or right.

    The people who will have the hardest time with this transition are those who go all in on a specific prediction and then discover they were wrong.

    If you want to avoid that, you can try very very hard to just not be wrong, but as I said, I don't think that's possible.

    Instead, we need to be flexible and surf the wave as it comes. Maybe AI fades away like VR. Or maybe it reshapes the world like the internet/smartphones. The hardest thing to do right now, when everyone is yelling, is to just wait and see what happens. But maybe that's the right thing to do.

    [p.s.: None of this means don't try to influence events. If you've got a frontier model you've been working on, please try to steer us safely.]

  • tim333a day ago
    I always figured AI would be a big deal from childhood onwards and wrote about it for my college entrance exam in 1980 or so. That doesn't apply to any of

    >3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX

    It's quite a different thing, more on the level of the evolution of life on earth and quite unlike all that junk.

  • raintreesa day ago
    to me, ai seems likely to become a new user interface, just like gui did from cli.

    abstract away a lot of the mechanics of working with data/information.

    helpful, when literacy seems to be trending in a downward direction.

  • halis6 hours ago
    You forgot about crypto, 3D printing and games with virtual real estate.
  • wewewedxfgdfa day ago
    When non-programmers make sweeping statements about LLMs.

    Deep disconnect from reality.

  • hinkley15 hours ago
    This time is different. That’s why we are even discussing it.

    The problem is that this time is 20% different, not the 80% people are implying it is. So the same things that killed it last time will kill it again, unless that 20% has gotten us up some stairstep we got stuck on last time. But then the next thing will get us and we will go back to a new and improved version of the old thing.

  • sunaookami13 hours ago
    >and it was nearly always dudes

    Yeah I know what you are, don't try to pretend.

    • Gud8 hours ago
      And wtf does it even matter? Such a weird comment to even make.
    • keybored13 hours ago
      Dude complaining about dudes. It’s nearly always like that.
  • NickNaraghia day ago
    Perhaps this is the failure to understand the distinction between a technology and a meta-technology. Upgrading the factory that builds the robots is very different from upgrading the robots.
    • Joker_vDa day ago
      A technology is a set of methods and tools for achieving the desired results (generally in a reliable and reproducible way). Or, in a broader sense of the word, it's the idea of applying scientific knowledge to solving practical problems, and the process of such application.

      What is meta-technology?

    • MarkusQa day ago
      Or (taking the other side) failure to notice the distinction between a technology and a pump-and-dump. The technology (attention/diffusion) is awesome. The hype is unbelievable. Literally.
  • chasd00a day ago
    Everything is the same until it's not; good luck predicting when "until it's not" is on the horizon, though. Isn't technology innovation a power-law thing? Everything hums along fairly regularly and then, out of the blue, there's a massive impact. Personally, I think AI has made a pretty large impact in software dev and the overall tech industry, but I don't see AGI any time soon (and that hype has died down), and therefore I don't see the economics working out. The coding tools, API integrations, chatbots: those are great, but I don't see them producing the returns required to keep companies like OpenAI running unless OpenAI takes all the customers and all the ad clicks from everyone else (Anthropic, Alphabet, X, Amazon, Meta, even Microsoft). I just don't see that happening.
  • busko20 hours ago
    If you believe marketing hype, then that's exactly what it is... Hype.

    If you speak to industry professionals and retain a healthy scepticism, you don't have to look far to find people that absolutely do not believe the marketing.

    Quite frankly, I like that advances in, say, quantum computing are publicly announced. The hype around what that means for society and our view of the universe is probably where you want to put on that reserved-scepticism hat.

    Similarly smart glasses were and are a thing, but society is rightly apprehensive about the impact, so the hype has dropped off.

  • smitty1ea day ago
    This Andrew Klavan interview on AI is worth your time, if not an independent submission:

    https://www.youtube.com/watch?v=SZFhFGpDWGw

    "Today, I'm speaking with Stephen C. Meyer, Director of The Discovery Institute's Center for Science and Culture, and George D. Montañez, Director of the AMISTAD Lab at Harvey Mudd College–both of whom are extremely knowledgeable on the topic of artificial intelligence. During the course of our conversation, they discuss the asymmetry between human intelligence & AI, the inability of AI to ascribe meaning to raw data, and the limitations of large language models. The real question though is: are we screwed? Let's find out."

  • Dwedita day ago
    Use the reader view button.
  • pgt15 hours ago
    No, this time is different.
  • madroxa day ago
    I got my first tech job in 2001. I've been doing this a while and ridden all the waves.

    There are two kinds of waves. The ones that don't require collective belief in them to succeed, and those that do.

    The latter are kinds like crypto and social media. The former is mobile...and AI.

    If no one else in the world had access to AI except me, I would appear superhuman to everyone in the world. People would see my level of output and be utterly shocked at how I can do so much so quickly. It doesn't matter if others don't use AI for me to appreciate AI. In fact, the more other people don't use AI, the better it works out for me.

    I'm sympathetic to people who feel like they are against it on principle because scummy influencers are talking about it, but I don't think they're doing themselves any favors.

    • bigstrat2003a day ago
      > If no one else in the world had access to AI except me, I would appear superhuman to everyone in the world.

      You really wouldn't. AI simply isn't that useful because it is so unreliable.

      • madroxa day ago
        I have found that to be utterly untrue
  • Nevermark20 hours ago
    What do these lasting success stories have in common?

    • Self-reinforcing chemical metabolisms

    • DNA as a template for reproduction

    • Multi-cell cooperation

    • Multi-cell specialization

    • Nerve cells

    • Neural ganglia

    • Nervous systems

    • Brains

    • Self-awareness

    • Language

    • Written language

    • Books

    • Printing press

    • Wireless communication

    • Transistors

    • Digital memory

    • Computer processors

    • Networking

    • Internet

    • AI

    Answer: They all introduced dramatic qualitative and quantitative improvements in the efficiency, effectiveness, interaction, speed, reliability, flexibility, adaptability, and application of information.

    AI is on its way to being self-designed. It is already assisting in its own design, speeding up work, by doing "mundane" things that would otherwise take people more time to do.

    Intelligence has not been an S-curve technology.

    AI, the systematic automation, manufacturing and increasingly recursive improvement of intelligence, is not an S-curve technology.

  • cladopaa day ago
    A very cynical article.

    Actually IT IS different. If they manage to create viable small nuclear reactors or quantum computers, the world will change like it changed with the Watt steam engine.

    Why is he not talking about the Internet, trains, electricity, nuclear bombs, rockets, aviation, or engines? Because they worked, like AI works today.

    All of them were bubbles at the time and they changed the world forever. AI is changing the world AND it is a bubble.

    AI is here to stay. It will improve and it will have consequences. The fact that a robot can do things with its hands is actually significant, whether you like it or not.

    • bigstrat2003a day ago
      > Why he is not talking about the Internet, trains, electricity, nuclear bombs, rockets,aviation or engines? Because they worked, like AI works today.

      Except for the minor bit that AI doesn't work today, and it is not yet clear if it ever will.

      • Gud8 hours ago
        It does work
  • stego-techa day ago
    Honestly, the remixes this generation suck compared to priors.

    "This time will be different," they said about the Metaverse, ignoring the vast tranches of MUCKs, MUDs, MMOs, LSGs, and repeated digital real estate gold rushes of the past half-century. Billions burned on something anyone who played Second Life, Entropia, FFXIV, EQ2, VRChat, or fucking Furcadia could've told you wasn't going to succeed, because it wasn't different, it just had more money behind it this time.

    "NFTs are different", as collectors of trading cards, art prints, coins, postage stamps, and an infinite glut of collectibles looked at each other with that knowing, "oh lord, here we go" glance.

    "Crypto is different", as those who paid attention to history, remembering corporate scrip, gift cards, hedge funds, the S&L crisis, Enron, the MBS crisis, and the multitude of prior currency-related crises and grifts, bristled at the impending glut of fraud and abuse by those too risky to engage in traditional commerce.

    And thus, here we are again. "This time is different", as those of us who remember the code generators of yore polluting our floppy drives, and the sales grifters convincing our bosses that their program could replace those expensive programmers, roll our eyes at the obvious bullshit on naked display, then vomit from stress as over a trillion dollars is diverted from anything of value into their modern equivalent - with all the same problems as before.

    I truly hate how stupidly people with money actually behave.

    • paulddrapera day ago
      Is this "nothing ever happens"?
      • inigyoua day ago
        Short of insider trading, betting on things not happening is apparently the best way to make money on those shady betting sites.
  • BoppreHa day ago
    What is the point being made here? Some past technologies were overhyped, therefore AI is overhyped? Well, some past consumer technologies did change the world (smartphones, texting, video streaming, dating apps, online shopping, etc), so where's the argument that AI doesn't belong to this second group?

    Also, every single close friend of mine makes some use of LLMs, while none of them used any of the overhyped technologies listed. So you need an especially strong argument to group them together.

  • whynotminota day ago
    This lazy kind of post annoys me because it groups any of us saying that this technology is profoundly different in with all the town criers who have said this kind of thing before, even if we have never said it before and were even skeptical of past declarations.

    Effectively, it’s a statement saying nothing can ever be profoundly different, because people have said it before and been wrong.

    Lazy.

  • CrzyLngPwda day ago
    Said elsewhere on this post: "AI is a bubble!" "AI will change everything!"

    It's just propaganda...

    "Iran is 2 weeks from a nuclear weapon" / "We obliterated Iran's nuclear dreams"

    "Russia is fighting with shovels" / "Russia is on the verge of swarming Europe"

    What would Joost Meerloo say about it, I wonder.

  • space_invaders19 hours ago
    I love how people run to shout "this time is different, eh?" in an ironic tone while ignoring that a lot of times, yes, it was different.

    Covid was different -- people dismissed it initially, saying it was going to be like the 2009 swine flu or the seasonal flu we see in the media.

    The iPhone was different -- many columnists said it was just a fancier PDA and that Palm already had the market.

    The 2008 crisis was different -- the signs of a housing bubble were present but were dismissed. The derivatives made it different and it imploded.

    There are times when things are actually different and you should be able to identify them. AI is one of them.

    I don't even need to elaborate much; as a programmer it's clear how this is a game-changer. We are moving past the era when programs were just predictable if/else chains with regex, into a world where you can accept non-deterministic, never-before-seen inputs and have them interpreted accurately. Just as the Internet added another "dimension" to computer applications, AI is now adding another "dimension" previously unreachable.*

    * Just as you could make a big local LAN before the Internet, it's obvious that we had past incarnations of the current technology that gave some taste of that dimension, but did not fully "unlock" it.

    This time, it's truly different.

  • TeamDmana day ago
    I enjoyed Dave Cridland's comment more than the article. The article is dismissive of AI and other technologies in an unsubstantiated way.

    New things are happening and it's exciting. "AI bad" statements without examples feel very head-in-sand.

    • edenta day ago
      OP here. Unless you're still watching Quibi on your curved TV, delivered via WiMax then, yeah, I'd say it was pretty bloody substantiated.

      I like technology. I made a decent living from it. But if I had chased every hyped fad that was promised as the next big thing, I doubt I'd be as happy as I am now.

      • tsumniaa day ago
        Just chiming in to say thanks for the Pratchett quote! I dare say he's about to beat out Douglas Adams for my top author. Feet of Clay and Hogfather should be must-reads for people dealing with AI right now, imo.
      • jmkda day ago
        You claim to cite 'technologies' but include a few brands and companies for some reason.

        The one you keep citing, here and in the article, Quibi, lives on in technology form (the spirit of your article, we must presume) as an 8-billion-dollar business in China and is rapidly upending every Hollywood film studio.

        So, arguments about substantiation or even 'this time' fall flat in the face of not even understanding your own message.

      • troosevelta day ago
        You're not really saying anything, though. For every tech hype that has failed, there is another that's changed the world. This IS changing the world and our industry, regardless of whether it reaches the heights of the hypers.

        I mean, you're just stating that sometimes tech doesn't meet its hype. What's insightful about that? It's a given; cherry-picking examples doesn't prove your case.

        • Joker_vDa day ago
          > For every tech hype that has failed, there is another that's changed the world.

          Well, no, the ratio is most definitely not 1-to-1.

        • edenta day ago
          The thing is, successful tech rarely gets the excessive hype.

          mRNA vaccines. Where are the countless breathless articles about this literal life-saving tech? A few, maybe, but very few dudes pumping out asinine "white papers" and trying to ride the hype train.

          Solar and battery. Again, lots of real world impact but remarkably few unhinged blowhards writing endless newsletters about how this changes everything.

          I'm struggling to think of a tech from the last 20 years which has lived up to its hype.

          Not everything is written to be insightful. Some things are just written to get them out of my head.

          • ravioli_foga day ago
            I personally see plenty of hype but I've also been following the trends and using the tools "on the ground". At least in terms of software these tools are a substantial shift. Will they replace developers? No idea, but their impacts are likely to be felt for a very long time. Their rate of improvement in programming is growing rapidly.

            Do you feel AI is overall just hype? When did you last try AI tools, and what about their use made you conclude they will likely be forgotten or ignored by the mainstream?

            • edenta day ago
              I spent an hour with Gemini this morning trying to get instructions to compile a common open source tool for an uncommon platform.

              It was an hour of pasting in error messages and getting back "Aha! Here's the final change you need to make!"

              Underwhelming doesn't even begin to describe it.

              But, even if I'm wrong, we were told that COBOL would make programming redundant. Then UML was going to accelerate development. Visual programming would mean no more mistakes.

              All of them are in the coding mix somewhere, and I suspect LLMs will be.

              • ej88a day ago
                > write an article dismissing ai

                > usage is copy pasting code back and forth with gemini

                the jokes write themselves

                • edenta day ago
                  That's the most recent time. But I've bounced around all the LLMs - they're all superficially amazing. But if you understand their output, they're often wrong in both subtle and catastrophic ways.

                  As I said, maybe I'm wrong. I hope you have fun using them.

                  • stnmtna day ago
                    Have you tried a coding agent such as claude code or codex?
                    • edenta day ago
                      Yes. And, again, they look amazing and make you feel like you're 10x.

                      But then I look at the code quality, hideous mistakes, blatant footguns, and misunderstood requirements and realise it is all a sham.

                      I know, I know. I'm holding it wrong. I need to use another model. I have to write a different Soul.md. I need to have true faith. Just one more pull on the slot machine, that'll fix it.

          • nozzlegeara day ago
            Unrelated to the conversation but:

            > Not everything is written to be insightful. Some things are just written to get them out of my head.

            I like that, going to use it as the motivation to get some things out of my own head.

            • edenta day ago
              Yes! More blogging :-)
          • JuniperMesosa day ago
            Why do you think that solar+battery technology or mRNA vaccines haven't been written about in excited, hype-filled ways? If a technology is successful, then past accounts of that technology and why it would change the world don't come across as hype when you read them now; they come across as a description of something normal about the world.
          • casey2a day ago
            The web? GLP-1s? 5G? The Newton was mega-hyped and failed, but Apple came back with the iPhone. All the dot-com failures that eventually became viable businesses (so viable, in fact, that SFGate has to reach back 26 years to write their stinkpiece [1])

            Hype is often early; in 10-20 years we'll start seeing the value as the rest of the world catches up.

            https://www.sfgate.com/food/article/rise-fall-bay-area-start...

    • MarkusQa day ago
      It's not unsubstantiated, though. The claim is "People frequently assert that 'this time is different' and they are almost always wrong", and the article proceeds to provide a reasonable list of analogous manias.

      This only doesn't feel like substantiation if you reject the notion that these cases are analogous.

      "You shouldn't eat that."

      "Why not?"

      "Everyone else who's eaten it has either died or gotten really sick."

      "But I'm different! Why should I listen to your unsubstantiated claims?"

      "(lists names of prior victims)"

      "That doesn't mean anything. I'm different. You're just making vague and dismissive unsubstantiated claims."

      The claim isn't "AI bad" the claim is more along the lines of "there's a lot of money changing hands and this has all the earmarks of a classic hype cycle; while attention/diffusion models may amount to something the claims of their societal impacts are almost certainly being exaggerated by people with a financial stake in keeping the bubble inflated as long as possible, to pull in as many suckers as possible."

      If you want another example (which you won't find analogous if you've already drunk the koolaid):

      https://theblundervault.substack.com/p/the-segway-delusion-w...

  • bogzza day ago
    LLMs are really a marvel; GPT-2 actually inspired me to go back to college (not directly, rather I needed to understand how it worked).

    I have unlimited derision for morally spineless worms who disingenuously make it out to be more than it is-- looking at Dario, Sam, and the silly CEO of Control AI. Also, I hate to say it, but Andrej Karpathy on twitter-- he's a worthless follow now. I can't blame them, but I am daily exasperated by media figures who can't help but go with whatever they hear prominent individuals in the field say.

    If I were a junior now, and less confident, I would be abandoning my career in this climate.

    LLMs are not going away. They will get a little better than they are now, and new model paradigms will come around at some point. But this tale of massive redundancy and skyrocketing unemployment is not going to come from LLMs.

    This is the only reason why I cannot wait for a pop, and pray to God that it comes sooner than later. I just want to feel good about technology again. I want to tinker, to feel positivity, to know how sustainable the tools I'm using actually are.

    I don't want to be reminded daily of the disgusting reality of unbridled capitalism.

  • almostherea day ago
    For all of those, there is a Gartner hype cycle. The thing that matters is, when it comes out the back end, is it 1M, 1B, 6B people using it?

    For all the things you listed, fewer than 1000 people are using them. With AI we're clearly not finished with the Gartner hype cycle, but the back end is going to be over a billion users.

  • dist-epocha day ago
    Nuclear weapons - this time is different

    Internet - this time is different

    iPhone - this time is different

  • If you can't distinguish the actual utility and progress of AI from its annoying hype-men, then it's hard to take your dismissal of AI seriously.

    Failure to appreciate changes in AI will have left you calling every shot wrong over the past 5 years. While AI models continue to improve at an exponential rate, you'll cling to your facile maxims like "dude it's just predicting the next token it isn't real intelligence".

  • hotena day ago
    this just looks like someone hearing about tons of hyped things from people across the internet (which, almost by definition, is full of false signals and grifters), imagining they all come from the same person, then arguing with how wrong that person always is. how is that interesting?
  • redwooda day ago
    I hoped the article would be a meta-discussion of "time" and perhaps relativity or some other phenomenon. Sigh, it's an investment thesis saying "This Time is Different" is a risky bet.
    • edenta day ago
      That sounds like an interesting article. You should write it.
      • goatlovera day ago
        Or have an LLM write it and then we can judge whether the OP is wrong about whether "this time is different".
  • xeckra day ago
    Blatant strawman.
  • kyproa day ago
    I invested in Tesla extremely early (2011) because electric cars, if built correctly, would obviously make great cars, and Elon was one of the few people I actually thought had a shot at doing it.

    I was right that blockchain was BS and all the "not sure about Bitcoin, but blockchain will be big" people were idiots.

    I've been right for the last couple of years on AI, and that people were vastly underestimating its coding potential. And I put my money where my mouth was here. In 2021, when GPT-3 came out, I decided almost immediately I needed to invest a significant amount of my net worth in Google simply as a hedge against AI destroying knowledge work jobs. Which at the time I thought was probably going to happen around 2030, not realising how far LLMs could go with reasoning.

    I'm not particularly intelligent ("only" top 1-2% IQ), but my ability to predict the future is very good. If you have a skill you're unusually good at, you might relate to how strange it is that other people find it so hard to do the thing you find kinda easy. For me that's predicting things and computers.

    Since I was a young teen I have been worrying about AI. Most of my IRL best friends I have made from talking about AI risk in 2010s when I was studying AI.

    Admittedly I got some of the details wrong back then. In 2010 I thought a lot of manual labour jobs would be easier to automate first – warehouse work, mail, taxis, buses, trains, etc. I worried primarily about the economic and political ramifications, and much less about the ASI scenario (at least in this half of the century). But I think I still got the general timeframes and direction right. This was the decade I was concerned about.

    I'm so scared right now... My whole life I've had nightmares about AI. I know there are some people who talk about how AI is an existential risk, but it feels like they don't internalise it like I do. They're not prepping like me for one, not that you really can prep for what's coming. If they're concerned why don't they have the nightmares of the omnipresent AI which you can't out think or punch to protect those you love? AI is so powerful in the scariest ways. Super viruses, mass surveillance and control, mind reading, unimaginable sci-fi weapons. It's like a horror story, but suddenly real.

    I am an OG AI doomer, but until the last few months I've at least always had some doubt in my mind about whether I'm right, perhaps not about the risk of AI broadly, but about whether we'd actually be able to develop highly capable AIs while I still have a lot of my life ahead of me.

    In my opinion this time is different, and what I've been worrying about for the last couple of decades is now here.

    We are collectively the indigenous peoples of America and the Europeans have just arrived in the new world. The risk vectors are now endless and how this all plays out is hard to know exactly. What we do know is that the majority of ways this will play out are bad, and some are incomprehensibly bad. Some may achieve status and wealth in the near-term, but longer-term we're all dead, or worse.

    I always worry these comments make me sound like a lunatic, I think I am, but I hope I am. I hope you will all forgive me, but I just need to shout about this tonight while I still can. We need to stop this insanity. Data centers need to be nuked. You may doubt me now, but in time you will understand. Hopefully I won't be around to say I told you so. Please make the best of the time we have left.

    • anonnona day ago
      > I needed to invest a significant amount of my net worth in Google simply as hedge against AI destroying knowledge work jobs

      I felt similarly, and did similarly, with both GOOGL and MSFT. I'm not an AI "doomer" in the Yudkowsky/LolzWrong sense, but I do think it's quite sad that generative AI is the first branch of the AI "tech tree" we raced up. AI art, especially, is tragic.

  • jasonlotitoa day ago
    I feel like someone is in a bubble of Crypto-bros. That does not instill confidence.
  • I would suggest editing the title to "This Time is Different". I think that captures the essence much better.

    Love the Sir Terry reference.

    • javawizarda day ago
      I wonder if that was an automated HN edit?

      Similarly to how titles that start with "how" usually have that word automatically removed.

      • some_furrya day ago
        Usually HN only auto-edits on first submission. If you go in and undo it manually as the submitter, you can force it to read how you intend.
        • meatmaneka day ago
          Maybe I'm only noticing the times when it messes things up, but these auto-edits seem to cause a lot of confusion that could be avoided if they were shown up-front to submitters, who would then have the option to undo them.

          Or maybe judicious use of an LLM here could be helpful. Replace the auto-edits with a prompt? Ask an LLM to judge whether the auto-edited title still retains its original meaning? Run the old and new titles through an embedding model and make sure they still point in roughly the same direction?
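          A minimal sketch of that last idea, with a toy bag-of-words vector standing in for a real embedding model (all names here are illustrative, not anything HN actually runs):

          ```python
          from collections import Counter
          from math import sqrt

          def title_vector(title: str) -> Counter:
              # Toy stand-in for a real embedding: bag of lowercase words.
              return Counter(title.lower().split())

          def cosine_similarity(a: Counter, b: Counter) -> float:
              dot = sum(a[w] * b[w] for w in a)
              norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
              return dot / norm if norm else 0.0

          def title_meaning_preserved(original: str, edited: str, threshold: float = 0.5) -> bool:
              # Flag the auto-edit for human review if the two titles diverge too much.
              return cosine_similarity(title_vector(original), title_vector(edited)) >= threshold

          # Dropping a leading "How" barely moves the vector, so it passes.
          print(title_meaning_preserved("How This Time Is Different", "This Time Is Different"))
          ```

          With a real embedding model the vectors would capture meaning rather than word overlap, but the gating logic would look the same.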

        • pinkmuffinerea day ago
          oh interesting, TIL I can go edit my submission titles! That's useful, I've definitely submitted stuff and gotten a less-good title due to the automated fixes, so I'll have to pay attention to this next time
    • wlesieutrea day ago
      And the HTTP headers

          x-clacks-overhead GNU Terry Pratchett
    • GMoromisatoa day ago
      Agreed--I clicked to read an article about the physics of time or something. Was sorely disappointed.
  • pavel_lishina day ago
    Title got mangled somehow, the original title is "This time is different".