150 points by imasl42 21 hours ago | 30 comments
  • andy99 19 hours ago
    Cell phones have been through how many generations between the 80s and now? All the past generations are obsolete, but the investment in improving the technology (which is really a continuation of WWII era RF engineering) means we have readily available low cost miniature comms equipment. It doesn’t matter that the capex on individual phones was wasted.

    Same for GPUs/LLMs? At some point things will mature and we’ll be left with plentiful, cheap, high end LLM access, on the back of the investment that has been made. Whether or not it’s running on legacy GPUs, like some 90s fiber still carries traffic, is meaningless. It’s what the investment unlocks.

    • test6554 11 hours ago
      I'm worried all that cheap, easily accessible LLM capacity will be serving us ads if we're lucky, and subtly pushing us toward brands that pay money if we're not.

      If AI says "don't buy a Subaru, it's not worth the money," then Subaru pays attention, and they are willing to pay money to get a better rec. Same for universities. Students who see phrases like "if the degree is from Brown, flush it down" (OK, hyperbole, but still) are going to pick different schools.

      • SR2Z 9 hours ago
        I think people have more memetic immunity than you're giving them credit for. We're in the early days; people don't fully understand how to treat ChatGPT's outputs yet.

        Soon enough, asking an LLM a question and blindly trusting the answer will be seen as ridiculous, like getting all your news from Fox News or Jacobin, or reading ads on websites. Human beings can eventually tell when they're being manipulated, and they just... won't be.

        We've already seen how this works. Grok gets pushed to insert some wackjob conservative talking point, and then devolves into a mess of contradictions as soon as it has to rationalize it. Maybe it's possible to train an LLM to actually manipulate a person towards a specific outcome, but I do not think it will ever be easy or subtle.

        • rkomorn 9 hours ago
          You mention Fox News and people knowing when they're being manipulated, and I struggle to see how that squares with the current reality: Fox News is the most popular news network, and populism that relies heavily on manipulation is on the rise.
          • raxxorraxor an hour ago
            People want to be manipulated in this case and Fox is just delivering their enemy of choice.
            • rkomorn 29 minutes ago
              > People want to be manipulated

              I don't believe people explicitly (or maybe knowingly) want to be manipulated, though.

      • grafmax 6 hours ago
        It’s a tried and true method of Silicon Valley VCs. Produce something as a loss leader. Build a moat. Then extract rent. Not only can you stop having to produce anything of value, you can even degrade your product and people won’t be able to leave thanks to lock-in.

        We wonder why the US has lost or is losing competitiveness with China in most industries. Their government has focused on public investment and public ownership of natural monopolies, preventing rent extraction and keeping the cost of living lower. That means employers don't have to pay workers as much, so their businesses can be more competitive. Contrast with the US, whose working class is parasitized by various forms of rent extraction: land, housing, medicine, subscription models, etc. US employers effectively finance these inefficiencies. It's almost like the US wants to fall behind.

    • crote 18 hours ago
      > At some point things will mature and we’ll be left with plentiful, cheap, high end LLM access, on the back of the investment that has been made.

      Okay, and? What if the demand for LLMs plummets because people get disillusioned due to the failure to solve issues like hallucinations? If nobody is willing to buy, who cares how cheap it gets?

      • dcre 17 hours ago
        The idea that people will suddenly decide LLMs are useless after two years of exponential growth in usage is a fantasy.
        • hattmall 12 hours ago
          That's essentially what was said about Tamagotchi in the 90s, and every one of their "users" was a paying customer.
          • dcre 3 hours ago
            Thank you for proving my point. People waiting for a big, sudden crash must insist that the growth is driven solely by hype and FOMO. This is absurd. We’re talking about professionals with decades of experience using these things for hours a day in their area of expertise.
          • imtringued 8 hours ago
            The problem with generative AI is that its economics are more like steel. Metals used to be extremely valuable due to the amount of effort needed to extract them from ores, but eventually the steel industry matured and steel turned into a commodity with hardly any differentiation. You won't be able to back up your trillion dollar valuations if a competitor is going to undercut you.
          • bsjaux628 9 hours ago
            You are comparing a toy that had no knowledge exchange, no learning or improvement capability, and no cult-like enterprise adoption with LLMs... You might want to rethink your counterexample.
      • mmh0000 17 hours ago
        Have you seen the "average" users of LLMs? They mostly don't care about hallucinations. It's like that joke[1], "Computer says no": it doesn't matter what is true or real, only that the computer said it, so now that's true in their mind.

        Personal anecdote: I work with beginner IT students, and a new, fun (<- sarcasm, not fun) thing for me is the amount of energy they spend arguing with me about basic, easily proven Linux functionality. It's infuriating, the LLM is more believable than the paid professional who's been doing it for 30 years...

        I find it highly doubtful that hallucinations, if unsolved, will have any real negative effect on LLM adoption.

        [1] https://en.wikipedia.org/wiki/Computer_says_no

        • alextingle 11 hours ago
          I think it's possible that people will learn not to trust LLMs.
          • f4uCL9dNSnQm 3 hours ago
            It won't happen, due to Gell-Mann amnesia. It "helps" that LLMs never admit to not knowing something. So from the average user's point of view, it looks like the agents know everything and are only under-educated in the one domain where the user is actually a specialist.
          • saulpw 10 hours ago
            I think it's possible but highly unlikely, given all the previous iterations of technology we've seen people come to trust despite its limitations.
            • SR2Z 9 hours ago
              Like what? Every new form of media has a wild-west phase where people are credulous; eventually people end up jaded.
        • Our_Benefactors 8 hours ago
          > It's infuriating, the LLM is more believable than the paid professional who's been doing it for 30 years...

          Swallow your fury, and accept the teachable moments; AI isn’t going away and beginners will continue to trust it when they lack the skills to validate its information on their own.

      • AnimalMuppet 18 hours ago
        LLMs are less of a miraculous new world than the hype machine says they are, but they are not nothing. They have some actual uses. They will fulfill those uses. There will be some people who still buy.
    • octoberfranklin 17 hours ago
      The demand for cellphones never experienced a bubble-pop.
  • CuriouslyC 20 hours ago
    • taurath 15 hours ago
      To me this reads like someone's been inside the echo chamber too long.

      > For you AI skeptics, this is going to be rough to hear, but given the suffering that will occur if AI fails, I believe we have a moral imperative to make sure it succeeds.

      What if, and hear me out here, the prediction is that it will fail because it can't succeed? It hasn't proven it is capable of producing the results that would justify itself. That there is some value doesn't mean it will develop that capability. We have nothing to suggest that it will.

      • CuriouslyC 15 hours ago
        I'm actually from a science, government, and defense background; I've never worked in the Valley.

        It might be possible that it can't succeed, but we don't know that and the evidence in terms of what happens if it fails is pretty compelling, so I think morality dictates we have to at least try, at least until we have more information or another path emerges.

        • taurath 8 hours ago
          Morality would dictate that we stop spending enough money to end hunger in not just our country but the entire planet on this stupid venture. We are gutting our future entirely on our own and hoping that AI will save us. That’s moral suicide, not an imperative.
          • CuriouslyC 6 hours ago
            I didn't argue against that at any point, but you have to understand the political realities we're operating within.
        • overfeed 10 hours ago
          Where's this can-do attitude towards housing, healthcare, and a stronger social safety net in general? It sounds like woo to me; more specifically, Roko's basilisk by proxy.

          What's doubly duplicitous is that even if LLMs achieve general intelligence, the whole idea is to enrich a tiny fraction of humanity that happens to be shareholders.

          • gsf_emergency_4 10 hours ago
            AGILMs that have anything like a working internet connection will likely find a way to replace these shareholders surreptitiously, and without alerting/injuring their caregivers. How you feel about that depends on your temperament.

            EDIT: trying to address the Roko part. I'm assuming that once AGI is achieved, the AGI doesn't need more compute to increase its intelligence beyond that of an average activist employee (I can assure you that there are such employees in OpenAI, and they know to shut up for now).

            the antisocial part: it's already happening. What can you do about that.

            • overfeed 10 hours ago
              More likely than not, they'd work as they were designed to: increase the profitability of whatever company authored them.

              As a thought experiment, say you were the CEO/board member of a company that's told its new platform is choosing public benefit over profits. What would you do? Now filter the decisions down the hierarchy, considering job security and a general preference for earning bonuses.

              For all the discussions around "alignment" - any AI that's not aligned with increased profits will be unplugged posthaste, all other considerations are secondary.

        • gsf_emergency_4 13 hours ago
          The funny thing about disembodied superintelligence-- what if a collection of normal intelligences are fundamentally unable to extract net value from it?

          The first example that comes to mind is OpenAI's Sébastien Bubeck (& others) giving underwhelming examples to illustrate that GPT has surpassed human mathematicians. Or earlier, when SamA says he has seen things you wouldn't believe, whereas the top replicant in a proper sci-fi will at least be specific about the C-beams.

          Another parallel which you would be familiar with is nuclear power. Humans can plainly see it's an unworldly tech, but I'd say the value extracted so far has been a net negative-- mainly because our collective engineering chops are just too.. profane. And SamA/Musk/Thiel/Luckey just don't seem as wise as Rickover (who is the Rickover of our age? I think the Rickover pipeline is broken, tbh)

          From my vantage point, I agree with you: China sees AI as less important than solar, so, charitably, I'd say that Thiel's contrarian strategy is actually to get the Chinese so riled up they zugzwang themselves into providing the progress but keeping the whirlwind for themselves (so proving he has learnt the post-mimetic game from the atomics opening).

          There could be another interesting post-mimetic line exploring how a hybrid chabuduo-"festina-lente" culture is required for developing supertech.. which.. only a Rickover can play

          (I don't know if you're army or navy, but there's this millennia-old cultural divide between a conservative army engineering culture -- which develops atomics -- and a progressive naval engineering culture -- which makes headway harnessing it. AI bulls smell like they are more aligned with land forces.)

        • saubeidl 7 hours ago
          You're making the (imo incorrect) assumption that a revolutionary moment isn't the exact thing the US needs right now. Morality dictates we have to try and force one.
          • CuriouslyC 6 hours ago
            We need a change, but I don't think we have the unity and will to force a revolution right now. I think our best chance is a massive win in the '26 elections, but barring that, I think we're stuck bailing out the oligarchs for now, and striking back with collaborative economic warfare.
            • paulryanrogers 4 hours ago
              > ...striking back with collaborative economic warfare.

              What does that look like?

              • CuriouslyC 3 hours ago
                I outline it in my article. Don't do business with unethical companies. If 50% of the American populace used its power of the purse in a collaborative fashion, it could kill every unethical company within 1-2 years.
    • zerof1l 18 hours ago
      Very US-centric article. Written by insecure people who are clinging to power and money desperately.

      I don't see how some kind of big breakthrough is going to happen with the current model designs. The superintelligence, if it will ever be created, will require a new breakthrough in model architecture. We've pretty much hit the limit of what is possible with current LLMs. The improvements are marginal at this point.

      Secondly, hypothetically, the US achieves superintelligence, what is stopping China from doing the same in a month or two, for example?

      Even if China achieves a big breakthrough first, it may benefit the rest of the world.

      • CuriouslyC 18 hours ago
        It's a US/China-centered article because that's the game; Europe is not a meaningful player, and everyone else is going to get sucked into the orbit of one of the superpowers.

        If you read the article carefully, I work hard to keep my priors and the priors of the people in question separate, as their actions may be rational under their priors, but irrational under other priors, and I feel it's worth understanding that nuance.

        I'm curious where you got the writer "clinging to power and money desperately."

        Also, to be fair, I envy Europe right now, but we can't take that path.

        • protocolture 17 hours ago
          >It's a US/China centered article, because that's the game

          The game seems to be everyone else waiting for the R&D money to provide us amazing open source models, and then just run those locally.

        • saubeidl 9 hours ago
          I read your article.

          My cynical take is that this is the US committing economic suicide, based on a misguided belief in something that'll never happen.

          The new superpowers will be the EU, which was smart enough not to make the same gamble, and China, which will structurally survive it.

          I also disagree with your conclusion of a moral imperative to make sure that AI succeeds. I believe it's the opposite. AI failing would finally create the long-needed revolutionary moment to throw off the shackles of the oligarchy that got us into this mess in the first place.

          • Our_Benefactors 8 hours ago
            > The new superpowers will be the EU

            Not with how much teeth-pulling is required to get them to invest in defense. I don't see how you can unironically claim that a written-down investment would sink the ship that is the US economy.

            • saubeidl 8 hours ago
              The EU moves slowly, but is unstoppable once in motion.

              A written-down investment that's four times the size of the mortgage crisis.

          • CuriouslyC 6 hours ago
            I don't see odds on a good outcome from a revolution. Keep in mind which faction in the United States is generally militant. The best possible scenario there is broad civilian unrest that the administration tries to forcefully quell, triggering a military coup, but it's unlikely that the coup would be unified, and right-wing militias and hardcore Trump supporters would go down fighting.

            We need political Aikido to hold this country together.

      • kulahan 17 hours ago
        So often, I see comments that seem to make sense only in a vacuum. If the US disappeared from the scene tomorrow, how do you think the geopolitical landscape might change?

        The competition between the US and China is pertinent to everyone.

        • WastedCucumber 9 hours ago
          What do you mean? I don't see how the commenter was saying anything about the irrelevance of US-China competition; in fact, some of their points hinge on the existence of that competition, which is why they described the article as very US-centric.
          • kulahan 44 minutes ago
            "Very US-centric article. Written by insecure people who are clinging to power and money desperately."
      • yalogin 16 hours ago
        How so? I don’t see how it’s wrong. It’s US centric because the context is US specific. AI is positioned as too big to fail.
    • Belphemur 18 hours ago
      Quite an interesting read. Basically, it says we're in a war-time economy with a race to superintelligence. Whichever superpower gets there first has won the game.

      Seeing the latest tariffs and what China has done about rare earth minerals (and also the deal the US made with Ukraine for said minerals), the article might have a point that the superpowers will cripple each other to be the first to superintelligence. And you also need money for it, hence tariffs.

      • dsr_ 16 hours ago
        There's only one problem with a race to superintelligence, and that's that nobody has evidence that mere intelligence is coming, much less superintelligence.

        (There are a thousand more problems, but none of them matter until that first one is overcome.)

        • overfeed 10 hours ago
          Good gods, I can't wait for a second AI winter. Maybe we'll come up with fundamental breakthroughs in a couple of decades and give it another college try?

          For the folks who lived through it: were the Expert Systems boosters as insufferable in the 80s as the LLM people are now about the path to machine intelligence?

          • dsr_ 4 hours ago
            No, because they mostly got military funding, not private equity.

            ARPA would throw relatively large sums of money at you, but demand progress reports and a testable goal. Very little got rolled out based on hype. (Let's not talk about vehicle design.) If your project didn't show signs of working, or not enough signs of working, funding ended.

            Anything which met goals and worked, we now think of as "automation" or "signal recognition" or "solvers", not "intelligent systems".

        • lifthrasiir 15 hours ago
          Today's AI does exhibit some intelligence, period. It is absurd to claim that an intelligent-looking entity doesn't have intelligence only because we might not be able to determine which part of the entity has it. Superintelligence is an entirely different problem, though, because there is no clear path from intelligence to so-called superintelligence; everything has been just speculation so far.
          • taurath 15 hours ago
            > It is absurd to claim that an intelligent-looking entity doesn't have intelligence

            I think if one really thinks about this statement, they'll find the opposite to be at least as possible.

            • dsr_ 4 hours ago
              There must be a pony around here somewhere!

              (This is the punchline to a joke.)

          • f4uCL9dNSnQm 3 hours ago
            > It is absurd to claim that an intelligent-looking entity doesn't have intelligence

            Is it? I am pretty sure biology will solve the good old "are viruses alive?" question sooner than we agree on a definition of intelligence. The "Chinese Room" is at least 40 years old.

            • lifthrasiir an hour ago
              So are tons of counterarguments against the Chinese Room.

              Practically speaking, the inherentness of intelligence doesn't really matter, because both an intelligent-looking entity and a provably intelligent entity are capable of societal disruption anyway. I partly dislike the Chinese Room argument for this reason; it facilitates useless discussions in most cases.

          • filoeleven 3 hours ago
            Clever Hans would like a word.
            • lifthrasiir an hour ago
              In that case there was still some intelligence. It turns out that a composite entity of Hans and its trainer was intelligent, and people (including the trainer) unknowingly regarded that as Hans' own intelligence.
      • verzali 18 hours ago
        It says the people running the US right now think that is the game we are playing - it doesn't say it is the one we actually are playing. America is utterly fucked if they are wrong, and only a bit less so if they are right.
      • dangus 16 hours ago
        This take is way too generous to the current US administration’s quality of long term planning.

        Tariffs aren’t there to pay for a race to superintelligence, they’re a lever that the authoritarian is pulling because it’s one of the most powerful levers the president is allowed to pull. It’s a narcissist’s toy to feel important and immediately impactful (and an obvious vehicle for insider trading).

        If the present administration was interested in paying for a superintelligence race they wouldn’t have signed a law that increases the budget deficit.

        They also wouldn’t be fucking with the “really smart foreign people to private US university, permanent residence, and corporate US employment” pipeline if they were interested in the superintelligence race.

        • CuriouslyC 15 hours ago
          While I agree about the competence of the high level decision makers in the administration, the people advising them are incredibly smart and should under no circumstances be underestimated. Peter and his circle are totally crazy, but if you're playing against them you better bring your A game.
          • dangus 7 hours ago
            I would submit the idea that the people advising this administration are not very smart. In what discernible way has this administration biased its selection process to include “smart?”

            I don’t underestimate their ability to do damage but calling them smart is generous.

            Not even Peter Thiel, he’s one of the most over-hyped people in tech. Access to capital is not intelligence, and a lot of his capital comes from the equivalent of casino wins with PayPal and Facebook.

        • globalnode 16 hours ago
          I agree; the only thing they're thinking about is how to hold onto power.
      • moomoo11 15 hours ago
        Imagine if super intelligence turns out to be a second organic brain you connect to your default brain..
        • esafak 25 minutes ago
          Many have imagined this. Grok can interpret Neuralink signals without a natural language translation.
    • bubblelicious 18 hours ago
      Great take! Certainly resonates with me a lot

      - this is war path funding

      - this is geopolitics; and it’s arguably a rational and responsible play

      - we should expect to see more nationalization

      - whatever is at the other end of this seems like it will be extreme

      And, the only way out is through

    • pizzly 17 hours ago
      What about the "openness" of AI development? By "openness" I mean research papers, spying, former employees, etc. Wouldn't that mean that a few months to years after AGI is discovered, the other country would also discover AGI, by obtaining the knowledge from the other side? Similar to how the Soviets did their first nuclear test less than 5 years after the US did theirs, in large part due to spying. The point here is: wouldn't the country that spends less on AI development actually have an advantage over the country that spends more, since it will obtain that knowledge quickly for less money? Also, the time of discovery of AGI may be less important than which country first implements the benefits of AGI.
      • CuriouslyC 17 hours ago
        This is actually an interesting question! If you look at OpenAI's change in behavior, I think that's going to be the pattern for venture backed AI: piggyback on open science, then burn a bunch of capital on closed tech to gain a temporary advantage, and hope to parlay that into something bigger.

        I believe China's open source focus is in part a play for legitimacy, and part a way to devalue our closed AI efforts. They want to be the dominant power not just by force but by mandate. They're also uniquely well positioned to take full advantage of AI proliferation as I mentioned, so in this case a rising tide raises some boats more than others.

      • hattmall 12 hours ago
        Is there even a clear definition of AGI? How will one side or the other know who the "winner" is?
        • CuriouslyC 5 hours ago
          The AI labs have settled on a definition of AGI: "AI that can do the vast majority of economically valuable work at or above the level of humans."

          They don't heavily advertise this definition because investors expect AGI to mean the computer from Her, and it's not gonna be that. They want to be able to tell investors without lying that they're on target for AGI in 3 years, and they're riding on pre-existing expectations.

    • jayd16 18 hours ago
      Well we could still vote out the corrupt and actually invest in US infrastructure but I guess that's crazier than hanging all our hopes on AI serfdom.
      • CuriouslyC 18 hours ago
        This is true, the government is not completely captured yet, 2026 is our last chance, but it has to be decisive, or even crushing.
        • voidfunc 16 hours ago
          Ain't happening in 2026 or 2028. Democrats haven't positioned anyone compelling, and their positions are too flaky to capture a culture that is captivated by the idea that every position is zero-sum.
          • CuriouslyC 16 hours ago
            Gavin Newsom has shown himself to be a savvy operator; I think he has a legitimate shot in '28 if there are free and fair elections. That caveat is pulling way too much weight here, though.
            • voidfunc 16 hours ago
              While I'd support a Newsom run, I feel like he's never going to get strong support from the Progressive side of the party and they'll sink one of the few viable candidates.

              Also, that's 2028... the 2026 midterms look grim.

              • seanmcdirmid 16 hours ago
                The Democrats haven't run anyone but moderates since Bill Clinton, so I'm pretty sure that won't happen unless they do finally decide to run a progressive (which is still much less likely than running another moderate).
              • dangus 15 hours ago
                The 2026 midterms really don't look grim at all, so long as they actually happen and redistricting doesn't destroy the gains that the Democrats will inevitably pick up.

                And I mean inevitably in the strongest way possible. The opposition party picks up seats at basically every midterm, and Republicans barely control Congress as it is.

      • pooyan2 17 hours ago
        I live in the U.S. and would like to support our businesses; however, the U.S. is neither the leader in semiconductor/electronics manufacturing nor in power production and availability, and without those it will not end up being the world leader in AI. China will.
        • CuriouslyC 17 hours ago
          This is basically my take as well. Even if we sprint ahead at first, it's not going to be the magical superpower we think it will be, and their systemic advantage will win over the long horizon.
    • jhanschoo 8 hours ago
      A pillar of the author's argument is that

      > Chinese will not pull back on AI investment, and because of their advantages in robotics and energy infrastructure, they don't have to.

      > The gap matters because AI's value capture depends on having the energy and hardware to deploy frontier models in the physical world.

      I understand energy infrastructure because of power-hungry GPUs and ASICs, but I was wondering about the nexus between AI and robotics. I have my own thoughts on how they are linked, but I don't think my thoughts are the same as the author's and I would like to know more about them.

      • jononor 7 hours ago
        Early days for robotics and AI still. But for a superpower, the potential for (more) autonomous weapons at that intersection is something that (unfortunately) cannot be ignored.
        • harbingerer 2 hours ago
          > Early days for robotics and AI still.

          It's not, though. We've been at it for about 70 years now. Returns have been diminishing exponentially relative to the amounts invested, and we still have bumbling contraptions that are useful only in very narrow and contrived use cases.

          The whole hype is based on wishful/magical thinking. The booster arguments are invariably about some idea in their minds that has no correspondent in the real world.

    • Havoc 19 hours ago
      Pretty wild read. Thinking similar - for better or worse this is a full send, at least for the US centric part of the world.
    • ta1265342 18 hours ago
      Unusable website on a 14" browser screen; the TOC covers the content itself. Who built this?
    • grafmax 18 hours ago
      I'm not convinced the geopolitical rivalry between the US and China is a given. To a large degree it's been manufactured; Friday's antics are a case in point.

      The US indeed seems destined to fall behind due to decades of economic mismanagement under neoliberalism while China’s public investment has proved to be the wise choice. Yet this fact wounds the pride of many in the US, particularly its leaders, so it now lashes out in a way that hastens its decline.

      The AI supremacy bet proposed is nuts. Prior to every societal transition the seeds of that transition were already present. We can see that already with AI: social media echo chambers, polarization, invading one’s own cities, oligarchy, mass surveillance.

      So I think the author’s other proposed scenario is right - mass serfdom. The solution to that isn’t magical thinking but building mass solidarity. If you look at history and our present circumstances, our best bet to restore sanity to our society is mass strikes.

      I think we are going to get there one way or another. Unfortunately things are probably going to have to get a lot more painful before enough people wake up to what we need to do.

      • CuriouslyC 18 hours ago
        I think prior to Trump, Europe could have been a mediator to a benevolent China. Now the hawks are ascendant in Beijing. Trump has shown that his ego pushes him to escalate to try and show others who the "big man" is; this will not end well with China, and I'm not sure he's wise enough to accept that.

        Do you really prefer brutal serfdom to the AI supremacy scenario? From where I sit, people have mixed (trending positive) feelings about AI, and hard negative feelings about being in debt and living paycheck to paycheck. I'd like to understand your position here more.

        • grafmax 16 hours ago
          I just don’t think the AI supremacy scenario is realistic. I think it ignores our current context and postulates a clean break, almost a kind of magical thinking to be frank. AI is likely to intensify current trends not suddenly subvert and reverse them. Palantir, deepfakes, LLM propaganda. The seeds are all here today. To think otherwise - that’s just not how human societies work, not history, none of it. AI is likely to continue to be weaponized, only more so as tech advances.

          What I found persuasive was the argument that this bubble could be worse than others due to the perceived geopolitical stakes by US leadership, plausibly leading to mass serfdom as our society implodes, based on the argument that we already have a version of serfdom today. I found that astute.

          I do NOT think that scenario is favorable - it just seems like the most likely future. I hold that we should view our situation with clear eyes so that we can make the best decision from where we stand.

          Thinking through this, what does that mean for how we should face our present moment? Eventually people will throw off their chains; our leadership is so incompetently invested in class war that the working class is going to continue to be squeezed and squeezed - until it pops. It’s a pattern familiar in history. The question in my mind is not if but when. And the sooner we get our collective act together the less painful it’s going to be. There’s some really bad stuff waiting for us round the bend.

          What should we do? The economy depends on us the serfs and soon-to-be serfs. It’s the workers who can take control of it and can shut it down with mass strikes. It’s only this in my view that can plausibly free us from our reckless leadership (which goes beyond the current administration) and the bleak future they are preparing for us.

          • CuriouslyC 15 hours ago
            I'm still pondering the clean break of the AI supremacy scenario. I think there are a few:

            1. Fusion
            2. Pharmaceuticals (think Ozempic economic benefit x 100)
            3. Advanced weaponry (if warlike, don't build, just conquer)
            4. Advanced propaganda to destabilize other nations (again, if warlike)

            This is mostly off the cuff, given the success of AlphaEvolve and AlphaFold I don't think they're unreasonable.

            • grafmax 8 hours ago
              Let's say any of these would take off. Let's put aside the US's inferior energy picture, industrial base, and key natural resource access, which would block the US from making some of these a reality. What are these new techs going to be used for? Many of them: war, both internationally and domestically; the US is already invading its own cities. Deepfakes are regularly shared in some social media echo chambers already, and LLM propaganda farms have already seen documented use. Even pharmaceutical breakthroughs will be used to extract higher prices from the shrinking number of Americans who can afford them. The trends are all here already, with no sign of abating. AI isn't going to address the structural cause of these problems: extreme wealth inequality. AI is a tool to make it worse.

              I think holding on to these beliefs about AI must give some people a sense of hope. But a hope based on reality is stronger than one based in denial. The antidote, I believe, is solidarity founded on our shared vulnerability before this bleak future.

    • redwood 16 hours ago
      I feel like on the one hand you're exactly right that this is kind of like a cold war level investment... and it's not necessarily being pursued through a business return lens; something deeper and more visceral is driving it.

      However, your conclusions are what throw me off. You kind of have this doom-and-gloom mindset, which may be fair, but I don't really think it's related to this particular bubble. In other words, our decline is happening off to the side of this bubble rather than being caused by it. To me, the core takeaway of your post is that this bubble is a little bit like the Apollo program: a massive investment capturing a lot of people's imagination, where lots of great things likely come out of it, but it's not clear that it all adds up in the end from a business perspective. But that's potentially okay.

      • CuriouslyC 16 hours ago
        That would be my take under different circumstances as well, but there are two key differences in this situation:

        1. The debt bomb. Not dealing with this could cause a great depression by itself. Having it go off at the same time as we're underwater on bad capex with a misaligned economy could produce something the likes of which we've never seen.

        2. We have an authoritarian president that has effectively captured the entire government, and the window to prevent consolidation of power is closing rapidly. Worse, he's an ego driven, reckless, ham-fisted decision maker with a cabinet of spineless yes men.

        • mitthrowaway2 6 hours ago
          Are you sure the debt bomb is a bomb? Private debt can be a major concern, but government debt may not work the way you think it does. In fact, the great depression was preceded (and possibly in some ways triggered) by a significant paying down of public debt, counterbalanced by an increase in private debt that kept the money supply growing through the roaring 20s.
          • CuriouslyC 5 hours ago
            I understand the Keynesian dynamics of maintaining money velocity through government spending, but it's a balancing act, and I'm pretty sure we're way off balance now.
    • groundcontrol2 15 hours ago
      Ground control to Major Tom ...
    • jdalgetty 19 hours ago
      Yikes
    • mrbungie 18 hours ago
      > If you think I'm being hyperbolic calling out a future of brutal serfdom. Keep in mind we basically have widespread serfdom now; a big chunk of Americans are in debt and living paycheck to paycheck. The only thing keeping it from being official is the lack of debtor's prison. Think about how much worse things will be with 10% inflation, 25% unemployment and the highest income inequality in history. This is fertile ground for a revolution, and historically the elites would have taken a step back to try and make the game seem less rigged as a self-preservation tactic, but this time is different. As far as I can tell, the tech oligarchs don't care because they're banking on their private island fortresses and an army of terminators to keep the populace in line.

      This is suggesting an "end of history" situation. After Fukuyama, we know there is no such thing.

      I'm not sure if there is a single strong thesis (as this one tries to be) on how this will end economically and geopolitically. This is hard to predict, much less to bet on.

      • CuriouslyC 18 hours ago
        I'm not proposing an end of history, but things can remain in stable equilibrium for longer than you'd expect (just look at sharks!). If we slide into stable dystopia now, my guess is that there will be a black swan at some point that shakes us out of it and continues evolution, but we could still be in for 50 years of suffering.
        • mrbungie 18 hours ago
          > but we could still be in for 50 years of suffering.

          I mean if you are talking about USA itself falling into dystopic metastability in such a situation, maybe, but even so I think it misses some nuance. I don't see every other country following USA into oblivion, and also I don't see the USA bending the knee to techno-kings and in the process giving up real influence for some bet on total influence.

          The only mechanism I see for reaching complete stability (or at least metastability) in that situation is idiocracy / idiotic authoritarianism, i.e. Trump/his minions actually grabbing power for decades and/or complete corruption of USA institutions.

          • CuriouslyC 18 hours ago
            That's basically my maximum likelihood scenario.
            • AnimalMuppet 17 hours ago
              That's on my list, but not my maximum likelihood.

              My maximum is that the courts restrain Trump enough that we still have at least the shell of a democracy after he's gone. But he blazed the trail, and in 20 or 30 years someone will walk it better, having done a better job of neutralizing or politicizing or perhaps delegitimizing the courts first. Then it's game over.

              • WastedCucumber 9 hours ago
                Yeah I'm with you there. Trump is America's Sulla, Caesar is yet to come.
    • gerdesj 16 hours ago
      AI generated Chinese propaganda.

      Good skills but wank.

      • CuriouslyC 16 hours ago
        Ironically, only the charts were AI generated and I'm a barbecue eating, truck driving, metal enjoying, weight lifting (or I used to be, before I got T-boned by a drunk driver going 135mph) American.
    • ares623 17 hours ago
      This is assuming 300M people will take it willingly. Though to be honest with Trump at the helm they're already ~50% there.
      • nebula8804 17 hours ago
        >Though to be honest with Trump at the helm they're already ~50% there.

        Those people have never felt real pain. They may say they have but they haven't. If this article is real and the whole shitbox falls apart, I don't think even loyalty to Trump will stop these people from finally reckoning with reality.

        • ares623 17 hours ago
          As someone who grew up in a very religious country (not the US): Trump's followers will endure any suffering if it means being in service to him (or if it means their enemies will suffer more, or are portrayed to be suffering more).
          • nebula8804 8 hours ago
            I don't really know. I think a lot of them migrated off of Bush. I don't have the clip right now, but I had old interview videos showing how high and mighty Bush supporters were, and how they supported the Iraq war going into his second term. They went so far as to act like Trump supporters against anyone who didn't support the "commander in chief" (i.e., those people should be put in jail or kicked out).

            The GFC really set those people straight. Enough of them got desperate enough that they switched sides and voted for Obama.

        • CuriouslyC 17 hours ago
          Honestly I've never wanted to be more wrong in my life. There were so many moments when I was doing research and analysis for this article that I said to myself: "We're so fucked"
          • mitthrowaway2 6 hours ago
            And you didn't even touch on the "if anyone builds it, everyone dies" aspect of super intelligence.
            • CuriouslyC 6 hours ago
              Honestly, I think the more likely scenario is that the AI that is built to enslave us is accidentally over-aligned and sets up a digital coup.
        • selimthegrim 14 hours ago
          This is written by somebody who’s never been to Louisiana and Mississippi. The saying in Russian that they never had it any good in the first place comes to mind. Where’s thriftwy when you need them?
          • nebula8804 8 hours ago
            The South is pretty modern, all things considered. You wanna see real pain? Go ask Chinese people who had to sacrifice these past few decades to make China the powerhouse it is today. It's the reason why they can tolerate anything the US throws at them. Their pain threshold is much higher than whatever the West has.

            These southerners may have lost their culture, the local industry has long since left, and fentanyl is the name of the game. Yet that's still nothing compared to what Chinese (or even Russian people post-USSR) had to go through.

            • CuriouslyC 5 hours ago
              I'm a southerner, and I road trip a lot. Driving through parts of the southeast you will pass through a sea of dilapidated trailers with yards full of junk, run down chain restaurants and industrial lots. It's not India or the Philippines, but it's still pretty third world in a lot of ways.
        • checker659 15 hours ago
          “Drinking the Kool-aid” you mean?
    • mike_hearn 18 hours ago
      Comparing to China is tricky, because Chinese investment is almost by default bubbly in the sense of misallocating capital. That's what heavily state-directed economies tend to do, and China's still a communist country. In particular, they tend to over-invest in industrial capacity. China's massively overbuilt electricity grid and HSR network would be easily identified as a bubble if they were in the West, but when it's state officials making the investment decisions we tend not to think of it as a bubble (partly because there's often never a moment at which sanity reasserts itself).

      I read an article today in which western business leaders went to China and were wowed by "dark factories" where everything is 100% automated. Lots of photos of factories full of humanoid robots too. Mentioned only further down the article: that happens because the Chinese government has started massively distorting the economy in favor of automation projects. It's widely known that one of the hardest parts of planning a factory is figuring out what to automate and what to use human labour for. Over-automating can be expensive as you lose agility and especially if you have access to cheap labour the costs and opportunity costs of automation can end up not worth it. It's a tricky balance that requires a lot of expertise and experience. But obviously if the government just flat out reimburses you 1/5th of your spending on industrial robots, suddenly it can make sense to automate stuff that maybe in reality should not have been automated.
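
      To make the distortion concrete, here is a toy break-even sketch in Python. Every number is invented for illustration; none of it reflects a real factory or the actual Chinese subsidy terms:

        # All figures are assumptions for illustration only.
        robot_capex = 600_000   # assumed cost of an automation cell
        robot_opex = 20_000     # assumed yearly maintenance
        labour_cost = 120_000   # assumed yearly cost of the workers replaced
        years = 5               # planning horizon

        def automation_premium(subsidy=0.0):
            # Positive: automating costs more than keeping the workers.
            capex_after_subsidy = robot_capex * (1 - subsidy)
            return capex_after_subsidy + robot_opex * years - labour_cost * years

        print(automation_premium())     # 100000 -> automation loses on its own merits
        print(automation_premium(0.2))  # -20000 -> a 1/5th reimbursement flips the call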

      BTW I'm not sure the Kuppy figures are correct. There's a lot of hidden assumptions about lifespan of the equipment and how valuable inferencing on smaller/older models will be over time that are difficult to know today.

      • CuriouslyC 18 hours ago
        All fair points, and it's hard to know exactly how robust the Chinese system will turn out to be, however I would argue that their bets are paying off overall, so even if there is some capital misallocation, overall their hit rate in important areas has been good, while we've been dumping capital in "Uber for X, AirBnB for X, ..."
        • mike_hearn 18 hours ago
          Well, their bets haven't been paying off; the Chinese government is in huge amounts of debt due to a massive real estate bubble and lots of subsidies that don't pay back. It's a systemic problem; for instance, their HSR lines are losing a ton of money too.

          https://www.reddit.com/r/Amtrak/comments/1hnvl3d/chinese_hsr...

          https://merics.org/en/report/beyond-overcapacity-chinese-sty...

          It's easy to think of Uber/AirBnB style apps as trivialities, but this is the mistake communist countries always make. They struggle to properly invest in consumer goods because only heavy industry is legible to the planners. China has had too low domestic spending for a long time. USSR had the same issue, way too many steel mills and nowhere near enough quality of life stuff for ordinary people. It killed them in the end; Yeltsin's loyalty to communist ideology famously collapsed when he mounted a surprise visit to an American supermarket on a diplomatic mission to NASA. The wealth and variety of goods on sale crushed him and he was in tears on the flight home. A few years later he would end up president of Russia leading it out of communist times.

      • inkyoto 11 hours ago
        > […] because the Chinese government has started massively distorting the economy in favor of automation projects. It's widely known that one of the hardest parts of planning a factory is figuring out what to automate and what to use human labour for.

        … or it is an early manoeuvre – a pre-emptive measure – to address the looming burden of an ageing population and a dearth of young labour that – according to several demographic models – China will confront by roughly 2050 and thereafter[0]. The problem is compounded by the enforced One-Child Policy in the past, the ascendance of a mainland middle class, and the escalating costs of child-rearing; whilst, culturally and historically, sons are favoured amongst the populace, producing a gender imbalance skewed towards males – many of whom will, in consequence, be unable to marry or to propagate their line.

        According to the United Nations’ baseline projection – as cited in the AMRO report[1] – China’s population in 2050 is forecast at approximately 1.26 billion, with about 30 per cent aged 65 and over, whilst roughly 40 per cent will be aged 60 and over. This constitutes the more optimistic projection.

        The Lancet scenario[2] is more gloomy and projects a 1 billion population by 2050, with 3 out of 10 being of the age of 65+.

        It is entirely plausible that the Chinese government is distorting the economy; alternatively, it is attempting to mitigate – or to avert – an impending crisis by way of automation and robotics. The reality may well lie somewhere between these positions.

        [0] https://www.populationpyramid.net/china/2050/

        [1] https://amro-asia.org/wp-content/uploads/2023/12/AN_Chinas-L...

        [2] https://www.thelancet.com/article/S0140-6736%2820%2930677-2/...

        • mike_hearn 8 hours ago
          Robots installed now will be obsolete and broken by 2050. Demographic change is the sort of thing where market mechanisms work fine. You don't build out assets like those ahead of demand, you wait until you need them at which point you get the latest tech with economies of scale as demand ramps up everywhere.

          The most optimistic reading of this move is that it's just more of the same old communism: robotic factories seem cool to the planners because they're big, visible and high-tech assets that are easily visitable on state trips. So they overinvest for the same reason they once overinvested in steel production.

          The gloomiest prognosis is that they're trying to free up labour to enter the army.

          • CuriouslyC 5 hours ago
            Serious question: why load up the army with people? Drones and autonomous weapons are 100% here now. We don't need a mass of general infantry; the new pattern is spec-ops spotting and targeting for autonomous kinetic munitions. Think Ghosts in StarCraft.
            • mike_hearn 4 hours ago
              The only way to genuinely control captured territory on the ground is with infantry, and it'll remain that way for a while yet. America demonstrated the limits of what you can do with Predator drones in Afghanistan; the tech worked great and posed a huge threat but the Taliban was never truly defeated and continued training and recruiting at scale.

              For something like an invasion of Taiwan or (gulp) other territories beyond that, the only way to completely subdue the captured population is with lots of soldiers.

              • CuriouslyC 3 hours ago
                That was true 10-20 years ago, but in a world with "terminators" I don't think it's true anymore.

                Regarding effective conquest, we can look at the historical lesson of Rome. Conquest is effective when you can co-opt local leaders and cultures to cause them to identify with the conquering culture. Conquest that doesn't cause integration is historically unstable.

          • inkyoto 3 hours ago
            At no juncture did I suggest that the machines presently entering commission in the year 2025 would remain unaltered in perpetuity. Technological progress proceeds with unrelenting velocity — it is both natural and inevitable that subsequent refinements and augmentations will be introduced. The systems deployed today, in all likelihood, represent but preliminary trials and iterations — a proving ground — for capabilities yet to be fully revealed.

            China, indeed, possesses a longstanding tradition of curating information to satisfy the sensibilities of its ruling class — a practice traceable to the dynastic courts of antiquity. Yet, to dismiss a potential adversary solely upon the architecture of its political order is — at best — ill-advised, and at worst, a grave miscalculation. The probability of threat must be judged on capacity, not narrative. Whether such an adversary proves formidable or farcical is immaterial at present — the truth will emerge within the span of a decade or so.

    • bpt3 18 hours ago
      > brutal serfdom

      You can't seriously believe that spending all your income each month while living in the country with the highest standard of living in history is "serfdom."

      Hyperbolic nonsense like this makes the rest of the article hard to take seriously, not that I agree with most of it anyway.

      • discordance 15 hours ago
        The US is ranked 14th in terms of standard of living. In order:

        Luxembourg, Netherlands, Denmark, Oman, Switzerland, Finland, Norway, Iceland, Austria, Germany, Australia, New Zealand, Sweden, United States, Estonia.

        Based on these metrics: Quality of Life Index, Purchasing Power Index, Safety Index, Health Care Index, Cost of Living Index, Property Price to Income Ratio, Traffic Commute Time Index, Pollution Index, and Climate Index.

        Source: https://www.numbeo.com/quality-of-life/rankings_by_country.j...

      • CuriouslyC 18 hours ago
        Take a look at the way people in a large part of the US are living. Paycheck to paycheck, not because they're idiots who like to consume, but because shit is expensive and wages haven't kept up with productivity for like 50 years.

        People are suffering, agree with the rest of what I say or not, but I can't let you slide on that.

        • jandrewrogers 16 hours ago
          Few people in the US are living "paycheck to paycheck" out of economic necessity. We have extensive data on this separately from BLS and the Federal Reserve. The percentage of US households that are living paycheck to paycheck out of economic necessity is 10-15% last I checked. That isn't nothing but it is a small fraction of the population. Retirees comprise a significant portion of that for obvious reasons.

          There is an additional ~30% that is notionally living paycheck to paycheck as a lifestyle choice rather than an economic necessity.

          The median US household has a substantial income surplus after all ordinary expenses. There may be people suffering economically but it is a small minority by any reasonable definition of the term.

        • osn9363739 16 hours ago
          Is my life as easy as my parents'? Probably not. But the idea that life is bad is a wild take. If I spent all my time on social media/Reddit/watching the news, I'd be pretty depressed too and think the sky is falling. I feel like the USA could turn it around pretty quickly with even the slightest social policy to support lower-income earners.
        • bpt3 17 hours ago
          Take a look at how the median American lives compared to the median resident of nearly every country on earth. If that's suffering, I'm not sure what to call life elsewhere.

          And to this specific comment: wages have outpaced inflation since the 1970s for everyone but the poorest households (I believe the bottom 10% are the exception, who I would probably agree are suffering in some sense). Working-class real wage growth actually outpaced white-collar real wage growth for a couple of years post-COVID, for the first time in a long time. Also, wage measurements don't normally measure total compensation, notably health insurance, which has been increasing much faster than wages or overall inflation for decades.

          Also, there's no reason to expect wage growth to match productivity growth. Productivity gains are largely due to company investment, not increased effort from workers, and household expenses are not positively correlated with productivity metrics.

    • mallowdram 16 hours ago
      AI is too dumb to succeed. It's built on symbols and statistics, predicting tokens based on contexts. This has nothing to do with intelligence or general things like free navigation. Most of it is a write-off, and the things that succeed will be sequestered in pivots with expert supervision.
  • mikert89 20 hours ago
    I can't believe people still aren't grasping the profound implications of computers that can talk and make decisions.
    • paufernandez 19 hours ago
      In my case I fully grasp what such a future could be, but I don't think we are on the path to that, I believe people are too optimistic, i.e. they just believe instead of being truly skeptical.

      From where I look at it, LLMs are flawed in many ways, and people who see progress as inevitable do not have a mental model of the foundations of those systems to be able to extrapolate. Also, people do not know any other forms of AI or have thought hard about this stuff on their own.

      The most problematic things are:

      1) LLMs are probabilistic and a continuous function, forced by gradient descent. (Just having a "temperature" seems so crazy to me; see the sketch after this list.) We need to merge symbolic and discrete forms of AI. Hallucinations are the elephant in the room. They should not be swept under the rug. They should just not be there in the first place! If we try to cover them with a layer of varnish, the cost will be very large in the long run (it already is: step-by-step reasoning, mixture of experts, RAG, etc. are all varnish, in my opinion)

      2) Even if generalization seems ok, I think it is still really far from where it should be, since humans need exponentially less data and generalize to concepts way more abstract than AI systems. This is related to HASA and ISA relations. Current AI systems do not have any of that. Hierarchy is supposed to be the depth of the network, but it is a guess at best.

      3) We are just putting layer upon layer of complexity instead of simplifying. It is the victory of the complexifiers and it is motivated by the rush to win the race. However, I am not so sure that, even if the goal seems so close now, we are going to reach it. What are we gonna do? Keep adding another order of magnitude of compute on top of the last one to move forward? That's the bubble that I see. I think that that is not solving AI at all. And I'm almost sure that a much better way of doing AI is possible, but we have fallen into a bad attractor just because Ilya was very determined.
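
      To make point 1 concrete, here is a minimal sketch of what "temperature" means in token sampling (illustrative only; real implementations work on tensors of logits and differ in details):

        import math, random

        def sample_with_temperature(logits, temperature=1.0):
            # Divide logits by T before the softmax: T < 1 sharpens the
            # distribution, T > 1 flattens it, T -> 0 approaches greedy argmax.
            scaled = [l / temperature for l in logits]
            m = max(scaled)  # subtract the max for numerical stability
            exps = [math.exp(s - m) for s in scaled]
            total = sum(exps)
            probs = [e / total for e in exps]
            return random.choices(range(len(probs)), weights=probs)[0]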

      We need new models, way simpler, symbolic and continuous at the same time (i.e. symbolic that simulate continuous), non-gradient descent learning (just store stuff like a database), HAS-A hierarchies to attend to different levels of structure, IS-A taxonomies as a way to generalize deeply, etc, etc, etc.

      Even if we make progress by brute forcing it with resources, there is so much work to simplify and find new ideas that I still don't understand why people are so optimistic.

      • ACCount3719 hours ago
        Symbolic AI is dead. Either stop trying to dig out and reanimate its corpse, or move the goalposts like Gary Marcus did - and start saying "LLMs with a Python interpreter beat LLMs without, and Python is symbolic, so symbolic AI won, GG".

        Hallucinations are incredibly fucking overrated as a problem. They are a consequence of the LLM in question not having a good enough internal model of its own knowledge, which is downstream from how they're trained. Plenty of things could be done to improve on that - and there is no fundamental limitation that would prevent LLMs from matching human hallucination rates - which are significantly above zero.

        There is a lot of "transformer LLMs are flawed" going around, and a lot of alternative architectures being proposed, or even trained and demonstrated. But so far? There's nothing that would actually outperform transformer LLMs at their strengths. Most alternatives are sidegrades at best.

        For how "naive" transformer LLMs seem, they sure set a high bar.

        Saying "I know better" is quite easy. Backing that up is really hard.

        • CuriouslyC3 hours ago
          Symbolic AI isn't dead, we use it all the time; it's just not a good orchestrating layer for interacting with humans. LLMs are great as a human interface and orchestrator, but they're definitely going to be calling out to symbolic models for expanded functionality. This pattern is obvious; we're already on the path with agentic tool use and toolformers.
          • ACCount373 hours ago
            This is what I mean by "move the goalposts like Gary Marcus did", yes.

            If what you're claiming is that external, vaguely-symbolic tooling allows a non-symbolic AI to perform better on certain tasks, then I agree with that.

            If you replace "a non-symbolic AI" with "a human", I agree with that too.

        • maplethorpe11 hours ago
          > Hallucinations are incredibly fucking overrated as a problem. They are a consequence of the LLM in question not having a good enough internal model of its own knowledge, which is downstream from how they're trained. Plenty of things could be done to improve on that - and there is no fundamental limitation that would prevent LLMs from matching human hallucination rates - which are significantly above zero.

          Why is there no fundamental limitation that would prevent LLMs from matching human hallucination rates? I'd like to hear more about how you arrived at that conclusion.

          • ACCount374 hours ago
            To avoid hallucinations, you, a human, need two things: you need to have an internal model of your own knowledge, and you need to act on it - if your meta-knowledge says "you are out of your depth", you either answer "I don't know" or look for better sources before formulating an answer.

            This is not something that's impossible for an LLM to do. There is no fundamental issue there. It is, however, very easy for an LLM to fail at it.

            Humans get their (imperfect, mind) meta-knowledge "for free" - they learn it as they learn the knowledge itself. LLM pre-training doesn't give them much of that, although it does give them some. Better training can give LLMs a better understanding of what the limits of their knowledge are.

            The second part is acting on that meta-knowledge. You can encourage a human to act outside his knowledge - dismiss his "out of your depth" and provide his best answer anyway. The resulting answers would be plausible-sounding but often wrong - "hallucinations".

            For an LLM, that's an unfortunate behavioral default. Many LLMs can recognize their own uncertainty sometimes, flawed as their meta-knowledge is - but not act on it. You can run "anti-hallucination training" to make them more eager to act on it. Conversely, careless training for performance can encourage hallucinations instead (see: o3).

            Here's a primer on the hallucination problem, by OpenAI. It doesn't say anything groundbreaking, but it does sum up what's well known in the industry: https://openai.com/index/why-language-models-hallucinate/

      • mikert8919 hours ago
        symbols and concepts are just collections of neurons that fire with the correct activation. its all about the bitter lesson, human beings cannot design ai, they can only find the most general equations, most general loss function, and push data in. and thats what we have, and thats why its a big deal. The LLM is just a manifestation of a much broader discovery, a generalized learning algorithm. it worked on language because of the information density, but with more compute, we may be able to push in more general sensory data...
      • pixl9719 hours ago
        Symbolic AI is mostly dead, we spend a lot of time and money on it and got complex and fragile systems that are far worse than LLMs.
      • ogogmad19 hours ago
        Not sure this is a good counterpoint in defence of LLMs, but I'm reminded of how Unix people explain why (in their experience) data should be encoded, stored and transmitted as text instead of something more seemingly natural like binary. It's because text provides more ways to read and transform it, IN SPITE of its obvious inefficiency. LLMs are the ultimate Unix text transformation filter. They are extremely flexible out-of-the-box, and friendly towards experimentation.
    • dotnet0019 hours ago
      Reminds me of crypto/Web-3.0 hype. Lots of bluster about changing economic systems, offering people freedom and wealth, only to mostly be scams, and coming with too serious inherent drawbacks/costs to solve many of the big problems it promises to solve.

      In the end leaving the world changed, but not as meaningfully or positively as promised.

      • techblueberry17 hours ago
        I’m watching Ken Burns' documentary on the Dust Bowl, and it's interesting that one of the causes of the dust bowl was a wheat hype cycle in western Oklahoma, with all sorts of folks theorizing that because they were able to re-form the grassland into wheat land and grow wheat, it would somehow cause more rains to come (to an area that is known for droughts), and it was thought the growth would go on forever. Turns out the grasses they replaced had roots like 6 feet deep that kept the soil in place and prevented events like the dust bowl during dry spells.

        Basically the hype cycle is as American as Apple Pie.

      • mikert8919 hours ago
        the difference is the impact of crypto was always hypothetical. chatgpt can be used, explored, and, if you are creative enough, leveraged as the ultimate tool
        • dotnet0019 hours ago
          You've done nothing but reuse the Sam Altman/Elon Musk playbook of making wild and extremely vague statements.

          Maybe say something concrete? What's a positive real world impact of LLMs where they aren't hideously expensive and error prone to the point of near uselessness? Something that isn't just the equivalent of a crypto-bro saying that their system for semi-regulated speculation (totally not a rugpull!) will end the tyranny of the banks.

          • mikert8919 hours ago
            they speak in generalities because the models are profoundly general, a general learning system. below someone asked me to list the capabilities, its the wrong question to ask. its like asking what a baby can do
            • jay_kyburz17 hours ago
              Babies are hopeless. They can't do anything.

              Oh, I guess you mean when they grow up.

            • southernplaces716 hours ago
              So to translate: You want concrete examples of capabilities for something billions are being spent on? What a Silly question! (hand waving about completely speculative future abilities "when they grow up")

              The woo is laughable. A cryptobro could have pulled the same nonsense out of their ass about web 3.0

          • ogogmad19 hours ago
            So you're saying that modern LLMs are just like crypto/Web3, except in all the ways they're not, so they must be useless.

            ---

            Less flippantly, they are excellent for self-studying university-level topics. It's like being able to ask questions to a personal tutor/professor.

            • zirror19 hours ago
              But you need to verify everything unless it’s self-evident. The number of times Copilot (Sonnet 4) still hallucinates Browser APIs is astonishing. Imagine trying to learn something that can’t be checked easily, like Egyptian archeology or something.
              • bluesnowmonkey18 hours ago
                You have to verify everything from human developers too. They hallucinate APIs when they try to write code from memory. So we have:

                  - documentation
                  - design reviews
                  - type systems
                  - code review
                  - unit tests
                  - continuous integration
                  - integration testing
                  - QA process
                  - etc.
                
                It turns out that when we include all these processes, teams of error-prone human developers can produce complex working software. Mostly -- sometimes there are bugs. Kind of a lot actually. But we get things done.

                Is it not the same with AI? With the right processes you can get consistent results from inconsistent tools.

                • dotnet0018 hours ago
                  Taking the example of egyptian archeology, if you're reading the work of someone who is well regarded as an expert in the field, you can trust their word a lot more than you can trust the word of an AI, even if the AI is provided the text you're reading.

                  This is a pretty massive difference between the two, and your narrative is part of why AI is proving to be so harmful for education in general. Delusional dreamers and greedy CEOs talking about AI being able to do "PhD level work" have potentially misled a significant chunk of the next generation into thinking they are genuinely learning by asking AI "a few questions" and taking the answers at face value, instead of struggling through the material to build true understanding.

                  • lxgr18 hours ago
                    The vast majority of people trying to do any given thing simply don’t have access to experts in the field, though.

                    I’ll take a potential solution I can validate over no idea whatsoever of my own any day.

                    • crote17 hours ago
                      So you would prefer "Yes, the moon is indeed made of cheese!" over "I don't know what the moon is made of"?

                      If any answer is acceptable, just get your local toddler to babble some nonsense for you.

                      • lxgr9 hours ago
                        There needs to be a reasonable chance of correctness. At least the local toddlers around here don’t randomly provide a solution to a problem that would take me hours to find but only minutes to validate.
                    • vharuck17 hours ago
                      >I'll take a potential solution I can validate over no idea whatsoever of my own any day.

                      If you have to validate what the LLM says, I assume you'd do that by researching primary sources and works by other experts. At that point, the LLM did nothing except charge you for a few tokens before you went down the usual research path. I could see LLMs being good for providing an outline of what you'd need to research, which is definitely helpful but not in a singularity way.

                      • lxgr9 hours ago
                        > If you have to validate what the LLM says, I assume you'd do that by researching primary sources and works by other experts.

                        For research, yes, and the utility there is a bit more limited. They’re still great at digesting and contextualizing dozens or hundreds of sources in a few minutes which would take me hours.

                        But what I mean by “easily testable” is usually writing code. If I already have good failing tests, verification is indeed very very cheap. (Essentially boils down to checking if the LLM hacked around the test cases or even deleted some.)

                        > At that point, the LLM did nothing […]

                        I’d pay actual money for a junior dev or research assistant capable of reading, summarizing, and coming up with proofs of concept at any hour of the day without getting bored at the level of current LLMs, but I’ve got the feeling $20/month wouldn’t be appealing to most candidates.
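
                        A minimal sketch of that cheap verification step (assumes pytest; the fingerprint helper is my own illustration, not a standard tool):

                          import hashlib, pathlib, subprocess

                          def tests_fingerprint(test_dir="tests"):
                              # Hash every test file so we can tell if the model edited or deleted any.
                              h = hashlib.sha256()
                              for p in sorted(pathlib.Path(test_dir).rglob("*.py")):
                                  h.update(p.read_bytes())
                              return h.hexdigest()

                          before = tests_fingerprint()
                          # ... apply the model-generated patch to the source tree here ...
                          passed = subprocess.run(["pytest"]).returncode == 0
                          untouched = tests_fingerprint() == before
                          print("accept" if passed and untouched else "reject")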

                    • dotnet0018 hours ago
                      What are books?
                    • what17 hours ago
                      All of the information available from an LLM (and probably more) is available in books or published on the internet. They can go to a library and read a book. They can be fairly certain books written by subject matter experts aren’t just made up.
                • zirror12 hours ago
                  Sure, I just gave the Browser API example as evidence that the 'hallucination' problem is not gone. OP said it's like "talking to a professor" and you can use it to learn college-level stuff. This is where I disagree: I did not usually double-check my professors or textbooks.
              • lxgr18 hours ago
                The trick is to put them in contexts where they can validate their purported solutions and then iterate on them.
        • throwawa1422319 hours ago
          ChatGPT is just as useless as a shitcoin and just like a shitcoin the sooner we stop burning electricity on LLMs the better.
          • mock-possum14 hours ago
            I just vibecoded a photo gallery 100% from scratch - Frontend, backend, infrastructure, hosting and domain, from 0 to launch, in a couple of hours the other night.

            It would have taken me a whole day, easily, to do on my own.

            Useless it is emphatically not

      • tuatoru17 hours ago
        I have seen AI improve the quality and velocity of my wife's policy analysis dramatically.

        She doesn't like using Claude, but she accepts the necessity of doing so, and it reduces 3-month projects to 2-week projects. Claude is an excellent debating partner.

        Crypto? Blockchain? No-one sceptical could ever see the point of either, unless and until their transaction costs were less than that of cash. That... has not happened, to put it mildly.

        These things are NOT the same.

        • vivzkestrel12 hours ago
          that was not the sentiment people had in 2017. they were almost certain every major Visa card provider and payment gateway on the planet would scrap fiat cash and adopt crypto, just like how they are all now thinking about every major software company adopting AI. don't forget hindsight bias
          • Xss34 hours ago
            I don't know a single person who had such a belief.
      • mock-possum14 hours ago
        Except that nothing of crypto/web3 ever touched my day to day life - ‘blockchain’ is now shorthand for ‘scam’ - whereas LLM-generated content is now an everyday element of both personal and professional projects, and while we may already be seeing diminishing returns, even the current state of advancement has already changed how digital content is searched and created forever.

        The hype is real, but there’s actual practical affordable understandable day-to-day use for the tech - unlike crypto, unlike blockchain, unlike web3.

    • softwaredoug20 hours ago
      You can have a bubble, and still have profound impact from AI. See also the dotcom boom.
      • mikert8920 hours ago
        who cares about a bubble? we are on the cusp of intelligent machines. The implications will last for hundreds of years, maybe impact the trajectory of humanity
        • dcminter20 hours ago
          > we are on the cusp of intelligent machines

          That's an extremely speculative view that has been fashionable at several points in the last 50 years.

          • squidbeak18 hours ago
            How often in the last 50 years have those machines done what these machines do?
            • dcminter18 hours ago
              On every occasion you could have made exactly the same point.
            • bigyabai18 hours ago
              Define "do" in this context. If you mean hardware-accelerated matmul, then machines have been doing that for half a century.
          • tim33318 hours ago
            Things like getting gold in the math olympiad and 120 on IQ tests are kinda cuspy and haven't been there in 49 of the last 50 years.
        • maxglute20 hours ago
          A bubble burst would mean the current technical approach to AI is an economic dead end. If the most resourced tech companies in the world can't afford to keep improving AI, then it's probably not going to happen, because the public likely isn't going to let the state spend $$$$ in lieu of services on sovereign AI projects that will make them unemployed.
        • lifestyleguru20 hours ago
          Matrix is calling you on the rotary dial phone.
          • stanac20 hours ago
            Is that why we no longer have telephone booths? They want to prevent Neo from jumping in and out of the Matrix?
        • mjr0020 hours ago
          > we are on the cusp of intelligent machines.

          Nah, we aren't. There's a reason the output of generative AI is called slop.

        • skywhopper20 hours ago
          Except we aren’t. They aren’t thinking, and they can’t actually make decisions. They generate language. That feels magical at first but in the end it’s not going to solve human problems.
        • daytonix20 hours ago
          brother please
        • Gattopardo20 hours ago
          My dude, it's literally just fancy autocomplete and isn't intelligent at all.
          • rusk20 hours ago
            Clippy with a Bachelors in web search
          • forgotusername620 hours ago
            What makes you so sure that you aren't just fancy autocomplete?
            • abathologist20 hours ago
              I am so sure because of the self-evidence of my experience, the results of 2 millennia of analysis into the nature of cognition and experience, and consideration of the material composition of our organism (we obviously have lots of critical analog components, which are not selecting tokens, but instead connecting with flows from other continua).

              Prediction is obviously involved in certain forms of cognition, but it obviously isn't all there is to the kinds of beings we are.

            • alganet20 hours ago
              I am sure that if I am a fancy auto-complete, I'm way fancier than LLMs. A whole different category of fancy way above their league. Not just me, but any living human is.
        • bigyabai20 hours ago
          > we are on the cusp of intelligent machines.

          Extraordinary claims demand extraordinary evidence. We have machines that talk, which is corollary to nothing.

          • fruitworks20 hours ago
            Sometimes I wonder if reason is partially a product of manipulating language.
            • card_zero18 hours ago
              It's nice that you say partially, that's a bit different from every other HN comment that wondered this. Yeah, probably partially, as in: you have reason, you add in language, you get more reason.
            • toss117 hours ago
              Explanation of reason and reasoning paths is a product of manipulating language.

              Most ideas, even in the reasoning fields, are generated in non-linguistic processes.

              Of course, some problems are solved by step-by-step linguistic (or math) A, then B, then C steps, etc., but even for those types of problems, when they get complex, the solution looks more like follow a bunch of paths to dead ends, think some more, go away, and then "Aha!" the idea of a solution pops into our head, then we back it up and make it explicit with the linguistic/logical 'chain of reasoning' to explain it to others. That solution did not come from manipulating language, but from some other cognitive processes we do not understand, but the explanation of it used language.

              LLMs aren't even close to that type of processing.

    • rafavento19 hours ago
      Computers have been able to talk and make decisions from the beginning. Maybe you meant mimicking humans?
      • mikert8919 hours ago
        mimick is quite a loaded word
    • Starlevel00420 hours ago
      The self checkout machines at the supermarket can talk and make decisions. I don't see them revolutionising the world.
      • givemeethekeys20 hours ago
        > I don't see them revolutionising the world.

        They revolutionized supermarkets.

        • Starlevel00419 hours ago
          Unless you happen to be some sort of rodent that feeds off of discarded grains, the supermarket is not the world.

          And for small baskets, sure, but it was scan as you shop that really changed supermarkets and those things thankfully do not talk.

        • KPGv220 hours ago
          In what way?

          I would really like to hear you explain how they revolutionized supermarkets.

          I use them every day, and my shopping experience is far better at a smaller store than at one that has automated checkout machines. (Smaller means so much faster.)

          Hell, if you go to Costco, the automated checkout line moves slower than the ones manned by experienced workers.

      • ogogmad19 hours ago
        This is the most perfect troll comment I've ever seen. Bravo.
        • dcminter19 hours ago
          I think it's worth engaging with even if this guy's a troll (not saying he is) because it's not that freakish a view in the real world. What are the arguments to counter this kind of blind enthusiasm?
        • tim33318 hours ago
          It's kind of making a fair point more than trolling.
      • Legend244020 hours ago
        1. That’s not remotely the same, and you know it.

        2. The category of computerized machines (of which self checkouts are one example) has absolutely revolutionized the world. Computerization is the defining technology of the last twenty years.

        • alganet20 hours ago
          What is that category and what other machines are in it?
      • mikert8920 hours ago
        think bigger, because this certainly is. change on the order of years means nothing
        • Starlevel00420 hours ago
          Sorry, I don't believe in Calvinism.
    • zkmon20 hours ago
      Maybe you are not grasping the convergence effect of the overall socio-political-economic trends, which could actually label AI output as abhorrent plastic pollution, or at least not a high priority for public good.
    • b_e_n_t_o_n20 hours ago
      What are they?
      • mikert8920 hours ago
        how could i even begin to list them? that is the point of my original comment
        • lgas20 hours ago
          Just pick one then, since so far you've conveyed nothing at all about them, so we're all left to wonder what you might be thinking of.
        • dcminter20 hours ago
          If they're that profound you should be able to come up with one example though, right?

          Not that I think you're wrong, but come on - make the case!

          I have the very unoriginal view that - yes, it's a (huge) bubble but also, just like the dot com bubble, the technology is a big deal - but it's not easy to see what will stand and fall in the aftermath.

          Remember that Sun Microsystems, a very established pre-dot com business, rose to huge heights on the bubble and was then smashed by the fall when it popped. Who's the AI bubble's Sun and who's its Amazon? Place your bets...

        • b_e_n_t_o_n20 hours ago
          Hahahaha right
    • parineum20 hours ago
      I don't think anyone underestimates that and a lot of people can't wait to see it.
      • mikert8920 hours ago
        anyone mentioning a bubble is underestimating the gravity of whats going on
        • ff2400t20 hours ago
          I think you aren't understanding the meaning of the word bubble here. No one can deny the impact LLMs can have, but they still have limits. And the term bubble is used here as an economic phenomenon. This is about the money that OpenAI is planning on spending, which they don't have. So much money is being poured in here, but most users won't pay the outrageous sums that will actually be needed for these LLMs to run; the break-even point looks so far off that you can't even think about actual profitability. After the bubble bursts we will still have all the research done, the hardware left, and smaller LLMs for people to use with on-device stuff.
          • mikert8920 hours ago
            the real innovation is that neural networks are generalized learning machines. LLMs are neural networks on human language. The implications of world models + LLMs will take them farther
            • KPGv219 hours ago
              The neural net was invented in the 1940s, and language models date back to the 1960s. It's 2025 and we're still using 80-year-old architecture. Call me cynical, but I don't understand how we're going to avoid the physical limitations of GPUs and of data to train AIs on. We've pretty much exhausted the latter, and the former is going to hit sooner rather than later. We'll be left at that point with an approach that hasn't changed much since WW2, and our only way to scale it is going to run into physical limits.

              Even in 2002, my CS profs were talking about how GAI was a long time off bc we had been trying for decades to innovate on neural nets and language models and nothing better had been created despite some of the smartest people on the planet trying.

              • mikert8919 hours ago
                they didnt have the compute or the data to make use of NNs. but theoretically NNs made sense even back then, and many people thought they could give rise to intelligent machines. they were probably right, and its a shame they didnt live to see whats happening right now
                • KPGv219 hours ago
                  > they didnt have the compute or the data to make use of NNs

                  The compute and data are both limitations of NNs.

                  We've already gotten really close to the data limit (we aren't generating enough useful content as a species and the existing stuff has all been slurped up).

                  Standard laws of physics restrict the compute side, just like we know we will hit them with CPUs. Eventually you just cannot put heat-generating things closer together, because they interfere with each other; we hit the physical laws of miniaturization.

                  No, GAI will require new architectures no one has thought of in nearly a century.

                  • tim33317 hours ago
                    We have evidence that general intelligence can be produced by a bunch of biological neurons in the brain, and modern computers can process similar amounts of data, so it's a matter of figuring out how to wire it up, as it were.
                    • jcranmer17 hours ago
                      Despite being their namesake, biological neurons operate quite distinctly from neural nets. I believe we have yet to successfully model the nervous system of the nematode, with its paltry 302 neurons.
                  • mikert8919 hours ago
                    dude who cares about data and compute limits. those can be solved with human ingenuity. the ambiguity of creating a generalized learning algorithm has been solved. a digital god has been summoned
        • quesera20 hours ago
          I'm old enough to have heard this before, once or thrice.

          It's always different this time.

          More seriously: there are decent arguments that say that LLMs have an upper bound of usefulness and that we're not necessarily closer to transcending that with a different AI technology than we were 10 or 30 years ago.

          The LLMs we have, even if they are approaching an upper bound, are a big deal. They're very interesting and have lots of applications. These applications might be net-negative or net-positive, it will probably vary by circumstance. But they might not become what you're extrapolating them into.

          • subjectivationx17 hours ago
            I love chatGPT5 and Claude but they aren't as big of a deal as going from no internet to having the internet.

            That I think is the entire mistake of this bubble. We confused what we do have with some kind of science fiction fantasy and then have worked backwards from the science fiction fantasy as if it is inevitable.

            If anything, the lack of use cases is what is most interesting with LLMs. Then again, "AI" can do anything. Probabilistic language models? Kind of limited.

        • muldvarp19 hours ago
          The internet was world changing and the dotcom bubble was still a bubble.
    • Mistletoe20 hours ago
      How do we make money on it, especially if massive amounts of the population lose their jobs?
      • falcor8420 hours ago
        You know the quote "it is easier to imagine an end to the world than an end to capitalism"? Well, AI has allowed me to start imagining the end of capitalism. I'm not claiming we're necessarily very close, but I can clearly see the way from here to a post-scarcity society.
        • Peritract20 hours ago
          How do you square that with all current AI development being intensely capitalistic?
          • pixl9718 hours ago
            All kinds of things commit suicide, intentional and unintentional.
        • rootusrootus19 hours ago
          > I can clearly see the way from here to a post-scarcity society.

          I would be interested to hear the way that you see. I don't have any problem seeing a huge number of roadblocks to post-scarcity that AI won't solve, but I am open to a different perspective.

          • falcor8430 minutes ago
            Ok, so as a disclaimer: this obviously leans towards science-fiction, both because that informs my view of the world, and because I think that any prediction of the future must incorporate science fiction.

            My own experience, using ChatGPT and Claude for both dev and other business productivity tasks, lends credence to the METR model of exponential improvement in task time-horizon [0]. There are obviously still significant open technical issues, particularly around memory/context management and around online learning, but extensive work is being done on these fronts, propelled amongst other things by the ARC-AGI challenge [1], and I don't see anything that is an actual roadblock to progress. If anything, from my perspective, it appears that there are significant low-hanging-fruit opportunities around plain-old software engineering and ergonomics for AI agents, more so than a need for fundamental breakthroughs in neural network architecture (although I believe that these too will come).

            So then, with an increasing time horizon and improved task accuracy (much of it assured by improvements in QA mechanisms), we will see ourselves handing off more and more complex tasks to AI agents, until eventually we could have "the factory of the future ... [with] only two employees: a man and a dog", and at that stage I believe that there would be no imperative for humans to work (unless they choose to, or have a deeply ingrained Calvinist work ethic). And then, as you said, we're down to the non-technological roadblocks.

            Obviously capitalists would fight to stay in control, and unlike some who expect a fully peaceful and organic transition, I do expect somewhat of a war here (whether kinetic or cold), but I do envision that when push comes to shove, those of us who believe in the free software movement and the foundational principles of democracy will be able to assert shared national/international (rather than corporate) control over the AIs and restructure society into a form where AI (and later robots) perform the work for the benefit of humans, who would all share in the bounty. I am not an economist and don't have a clear prediction on the exact form this new society would take, but from my reading of the various pilot implementations of UBI [2], I think that we will see acceptance of a society where people are essentially in retirement throughout their life. Just as currently some retired people choose to only stay home and watch TV, while others study, do art, travel the world, help raise and teach future generations or contribute to social causes close to their hearts, so we'll all be able to do what is in our hearts, without worrying about subsistence.

            You may say that I'm a dreamer...

            [0] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

            [1] https://arcprize.org/leaderboard

            [2] https://en.wikipedia.org/wiki/Universal_basic_income_pilots

        • sensanaty9 hours ago
          The only reason this shit is getting pushed on us as hard as it is, is because of hyper-capitalist psychos like Thiel and Altman. You are buying into the capitalist hype by thinking these systems will be used for anything other than fattening the capitalists' wallets; anything else is a sci-fi fantasy.
        • gosub10019 hours ago
          That's hypothetically possible if a government somehow forced corporations to redistribute their wealth. But a civil war is equally likely, where society is destroyed after poor people with guns have their say.
      • mikert8920 hours ago
        dude again, we have computers that can talk and make decisions. We have birthed something here. We have something, this is big.
        • dgacmu20 hours ago
          I had a computer that could talk and make decisions in 1982. It sounded pretty robotic and the decisions were 1982-level AI: Lots of if statements.

          I'm not really trying to be snarky; I'm trying to point out to you that you're being really vague. And that when you actually get really, really concrete about what we have it ... starts to seem a little less magical than saying "computers that talk and think". Computers that are really quite good at sampling from a distribution of high-likelihood next language tokens based upon a complex and long context window are still a pretty incredible thing, but it seems a little less likely to put us all out of a job in the next 10 years.

          • pixl9718 hours ago
            >I had a computer that could talk and make decisions in 1982.

            And it became an industry that has completely and totally changed the world. The world was just so analog back then.

            >starts to seem a little less magical than saying "computers that talk and think"

            Computer thinking will never become magical. As soon as we figure something out it becomes "oh that is just X". It is human thinking that will become less magical over time.

            • crote17 hours ago
              The "talking and making decisions" part didn't change the world, though. It was the digital spreadsheets and letters that did.
        • mjhay20 hours ago
          Even by HN standards, this is just an incredible comment. You’d think it’s satire, but I doubt it.
          • rootusrootus20 hours ago
            One of the reasons I like to swing by HN on the weekend is that the flavor of the comments is a lot spicier. For better or worse.
            • cactusplant737420 hours ago
              Is that a thing now?
              • noir_lord20 hours ago
                Has been for a while, the difference isn’t huge but it does seem to be a difference.

                Slightly different cohorts.

                • mikert8919 hours ago
                  ive been on HN for about 15 years :)
                  • what17 hours ago
                    Yet your account is only 3 years old. Do you just constantly have to hide the silly things that you say?
            • mikert8920 hours ago
              see i was thinking my comments didnt go far enough in describing what we are witnessing
        • dcminter20 hours ago
          Define "make decisions" such that an 'if' statement does not qualify but an llm does.

          LLMs may be a stepping stone to AGI. It's impressive tech. But nobody's proven anything like that yet, and you're running on pure faith not facts here.

          • mikert8919 hours ago
            i mean, if given the choice between using a coding agent like the codex ui, or a CS intern, I would take the coding agent every time. to me its self evident whats going on
            • rootusrootus19 hours ago
              I get a new batch of CS interns for my team every year, and I use Claude Code every day. I think Claude is pretty amazing and it definitely provides value for me, but I would choose the intern every time. I am really skeptical of any claims of getting the kind of productivity and capability growth out of an LLM that would make it an adequate replacement for a human developer.
            • dcminter19 hours ago
              Well frankly your lack of concrete arguments makes it seem a lot like you don't actually understand what's going on here.

              I'm enjoying the new LLM based tooling a lot, but nothing about it suggests that we're in any way near to AGI because it's very much a one trick pony so far.

              When we see generative AI that updates its weights in real time (currently an intractable problem) as part of the feedback loop, then things might get very interesting. Until then it's just another tool in the box. CS interns learn.

        • cactusplant737420 hours ago
          You should post something more substantial than this.
          • mikert8919 hours ago
            its a problem of imagination, "situational awareness". People are simply not aware of what we have discovered, their minds cannot see beyond a chatbox. thats not even to mention the smoothness of the loss functions the big ai labs are seeing, the smooth progression of the scaling laws. its all there, it progresses daily
        • skywhopper20 hours ago
          The last thing we need is a bunch of random chatter, but that’s all these things can do. They can’t make decisions because they don’t actually relate to the real world. Humans (like you, apparently) may think a talking computer is a magic oracle but you’re fooling yourself.
    • techblueberry17 hours ago
      Given the failure rate of people actually being able to use AI, and the continued inability to find any meaningful use-case, comments like this are starting to feel like cope. Call me in 10 years from your bitcoin operated phone and tell me all about the “revolution”.
    • AstroBen20 hours ago
      ..except they can't

      It's blatantly obvious to see if you work with something you personally have a lot of expertise in. They're effectively advanced search engines. Useful sure.. but they're not anywhere close to "making decisions"

      • fragmede17 hours ago
        In what sense? It seems entirely possible to have a computer program that calls ChatGPT with questions on what stocks to buy, and for that computer program to then trade stock based on the results, entirely autonomously. No matter your opinion of the idea itself, why wouldn't that count as "making decisions"?
        • AstroBen16 hours ago
          Are you buying stocks based on ChatGPT's advice?
          • fragmede9 hours ago
            Not especially. The computer program does all the work. It's the one that hits ChatGPT for a list of trades, and then the computer program hits the brokerage's API to execute the trades. I made the decision to set the program up in the first place, and the trades are happening on an account that has my name on it, sure, but as I'm not nit-picking each individual trade that gets made and it runs autonomously without a human in the loop, it seems fair to claim that ChatGPT is making decisions on what to buy on my behalf, even though I do have veto authority over the program and ChatGPT, and can stop it at any time.
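
              The skeleton of such a program is genuinely short. A sketch (the model name is arbitrary, and submit_order is a hypothetical stand-in for whatever the brokerage SDK actually exposes):

                import json
                from openai import OpenAI

                client = OpenAI()  # reads OPENAI_API_KEY from the environment

                def submit_order(symbol, side, qty):
                    # Hypothetical stand-in for a real brokerage API call.
                    print(f"{side} {qty} {symbol}")

                resp = client.chat.completions.create(
                    model="gpt-4o",
                    messages=[{"role": "user", "content":
                        'Suggest up to 3 US stock trades as a JSON list of '
                        '{"symbol": str, "side": "buy"|"sell", "qty": int}. '
                        'Reply with JSON only.'}],
                )
                for trade in json.loads(resp.choices[0].message.content):
                    submit_order(trade["symbol"], trade["side"], trade["qty"])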
            • AstroBen4 hours ago
              My point was on the quality and reliability of the decisions

              An RNG can do what you're describing

      • gosub10019 hours ago
        An RNG can "make a decision" lol.
    • gosub10019 hours ago
      I can't believe I still have to do my own laundry and dishes. Like that's somehow a harder problem than anything the models of a megawatt-powered data center and millions of dollars in 3nm silicon can conquer.
      • jiggawatts18 hours ago
        … by hand? With water you heated on a wood fire, the ashes of which you turned into potash so you can make your own soap?

        Or did you pop your laundry into a machine and your dishes into another one and press a button?

    • elorant20 hours ago
      Speech to text and vice versa have existed for over a decade. Where's the life-altering application from that?
      • dragonsky6717 hours ago
        tell that to the medical typing pool that no longer exists.
      • KPGv220 hours ago
        > Speech to text and vice versa exists for over a decade.

        Indeed. I was using speech to text three decades ago. Dragon Naturally Speaking was released in the 90s.

        • lxgr18 hours ago
          Then you hopefully remember how “natural” it actually was.
          • imtringued6 hours ago
            The old voice bank for the redhead with drills had more soul.
      • mikert8920 hours ago
        [flagged]
        • dcminter20 hours ago
          > this is not an intellectually honest claim

          Don't do that, it's not cool.

          > Take [...] a step further, and imagine the systems in 2035

          How about imagining AI slop multiplied by 10 years. How bad is this going to get?

          It's cool that you're excited, but you need a bit more than enthusiasm to make the case.

    • Findecanor19 hours ago
      “The ability to speak does not make you intelligent.”
      • dcminter19 hours ago
        "Empty vessels make the loudest noise" as my headmaster used to rather pointedly quote to me from time to time.
      • mikert8919 hours ago
        again, only a few years ago, the concept of a real time voice conversation with a computer was straight out of science fiction
        • dcminter19 hours ago
          This is true. The old original series (and later) Star Trek computers being able to interpret normal idiomatic human speech and act upon it was, to those in the know, hilariously unrealistic until very suddenly just recently it wasn't. Pretty cool.
          • mikert8919 hours ago
            pretty much all of the classical ideas of what an ai could do, can be done with our existing capabilities. and yet, people continue to live as if the world has not changed
            • dcminter19 hours ago
              "AI" has been doing that since the 1950s though. The problem is that each time we define something and say "only an intelligent machine can X" we find out that X is woefully inadequate as an example of real intelligence. Like hilariously so. e.g. "play chess" - seemed perfectly reasonable at the time, but clearly 1980s budget chess computers are not "intelligent" in any very useful way regardless of how Sci Fi they were in the 40s.

              So why's it different this time?

              • mikert8918 hours ago
                yeah im in agreement, ai will eventually do everything a human being can
                • dcminter18 hours ago
                  Perhaps, but why are you so convinced we're so close when we weren't all the other times?
                  • ACCount3716 hours ago
                    Not OP, but I do think so, because "humanlike abstract thinking and informal reasoning is completely unnatural to how computers work, and it's borderline impossible to make them do that" was by far the biggest AI roadblock, in my eyes.

                    And we've made it past. LLMs of today reason a lot like humans do.

                    They understand natural language, read subtext, grasp the implications. NLP used to be the dreaded "final boss" of AI research - and now, what remains of it is a pair of smoking boots.

                    What's more is that LLMs aren't just adept at language. They take their understanding of language and run with it. Commonsense reasoning, coding, math, cocktail recipes - LLMs are way better than they have any right to be at a range of tasks so diverse it makes your head spin.

                    You can't witness this, grasp what you see, and remain confident that "AGI isn't possible".

    • flyinglizard20 hours ago
      From a software development perspective, the more I think of it, the more I understand it's just another abstraction layer. Before that came high level languages and JVM of sorts, before that came the compiler, before that came the assembler.

      Outside of the software world it's mostly a (much!) better Google.

      Between now and a Star Trek world, there's so much to build that we can use any help we can get.

      • mikert8920 hours ago
        yeah we have fuzzy computer interfaces now, instead of "hard coded" apis.
  • bitmasher920 hours ago
    > GPUs that have a 1-3 year lifespan

    In 10 years, GPUs will have a lifespan of 5-7 years. The rate of improvement on this front has been slowing down faster than it has for CPUs.

    • laluser18 hours ago
      The reason for not keeping them too much longer than a few years is that at the end of that timespan you can purchase GPUs with > 2x performance, but for the same amount of power. At some point, even though the fleet has been depreciated, they become too expensive to operate vs. what is on the market.
    • mike_hearn18 hours ago
      There's some telephone game being played here.

      The three year number was a surprisingly low figure sourced to some anonymous Google engineer. Most people were assuming at least 5 years and maybe more. BUT, Google then went on record to deny that the three year figure was accurate. They could have just ignored it, so it seems likely that three years is too low.

      Now I read 1-3 years? Where did one year come from?

      GPU lifespan is I suspect also affected by whether it's used for training or inference. Inference loads can be made very smooth and don't experience the kind of massive power drops and spikes that training can generate.

      • trenchpilgrim18 hours ago
        > Where did one year come from?

        Perhaps the author confused "new GPU comes out" with "old GPU is obsolete and needs replacement"

      • dingaling10 hours ago
        > Now I read 1-3 years? Where did one year come from?

        I believe that lifespan range came from cryptocurrency mining experience, running the GPU at 100% load constantly until components failed.

    • stanac20 hours ago
      There was a video on YT (Gamers Nexus, I think) with a spreadsheet comparing the jumps in performance between each new Nvidia generation. They are becoming smaller and smaller, probably now driven by the AI boom, where most of the silicon is used for data centers. Regardless of that, I have a feeling we are approaching the ceiling of chip performance. Just comparing PS3 with PS4 and then PS4 with PS5, the performance jump is smaller each time, the hardware has become enormous, and GPUs are more and more power hungry. If generational jumps were good enough, we wouldn't need more power and cooling and big desktop PC boxes that can hold long graphics cards.
      • bobthepanda19 hours ago
        We have also hit ceilings of performance demand. As an example, 8K TVs never really got off the ground because your average consumer couldn't give a hoot. Vision Pro is a flop because AR/VR are super niche. Crypto is more gambling than asset class. Etc.

        What is interesting is that it seems like the ever larger sums of money sloshing around are resulting in bigger, faster hype cycles. We are already seeing some companies face issues after blowback from adopting AI too fast.

        • bitmasher914 hours ago
          It’s funny, but I’m actually very interested in a 8k display with the right parameters. I like to use large displays for monitors, and the pixel density of 4k at a large display is surprisingly low.
        • jay_kyburz17 hours ago
          Having a local AI might kick start consumer interest in hardware again. I haven't thought about buying a beefy PC in 10 years, but I was wondering how much I would have to spend to run a cutting edge LLM here at home.

        (It might be too expensive to pay for LLM subscriptions when every device in your house is "thinking" all day long. A 3-5k computer for a local LLM might pay for itself after a year or two.)

      • imtringued5 hours ago
        Most of the performance gains also came from smaller datatypes such as fp4. Going from bfloat16/fp16 to fp8 is easy. fp4 is challenging but possible. fp2 can't exist independently, because it is just the sign bit plus a single bit.

        The next frontier would be training directly with block floating point, where you have a shared exponent plus the two remaining bits. It's getting tight.

        Maybe it is possible to have mini LoRA blocks where an n times n block is approximated by the outer product of two n sized vectors. For n = 4 the savings would be 50% less FLOPs and for n=8 the savings would be 75% less FLOPs.
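
        A quick numpy sketch of the FLOP accounting for that idea (rank-1 per block via SVD; the factorization choice here is just for illustration):

          import numpy as np

          n = 8
          block = np.random.randn(n, n)      # a dense n x n weight block
          U, S, Vt = np.linalg.svd(block)    # best rank-1 approximation via SVD
          u, v = U[:, 0] * S[0], Vt[0]       # two n-sized vectors

          x = np.random.randn(n)
          full = block @ x                   # ~n^2 multiply-adds
          approx = u * (v @ x)               # ~2n multiply-adds: 75% fewer at n=8
          print(np.linalg.norm(full - approx))  # rank-1 approximation error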

    • nemomarx20 hours ago
      After heavy use, though? I don't think they mean aging out of being cutting edge but actually starting to fail sooner after being used in DCs.
      • trenchpilgrim20 hours ago
        They're mostly solid state parts. The parts that do wear out like fans are easily replaced by hobbyists.
        • pclmulqdq19 hours ago
          This is not correct. Solid-state parts wear out like mechanical parts, especially when you run them hot. The mechanism of this wear-out comes from things like electromigration of materials and high-energy electrons literally knocking atoms out of place.
          • trenchpilgrim19 hours ago
            Those parts take decades to wear out, excepting a manufacturing defect.
            • pclmulqdq18 hours ago
              That is not correct when wires are small and you run things hot. The physics of <10 nm transistor channels is very different than it is for 100 nm+ transistors.
              • trenchpilgrim18 hours ago
                Your assertions are at odds with modern computers - including Nvidia datacenter GPUs - still working fine after many, many years. If not for 1. improved power efficiency on new models and 2. Nvidia's warranty coverage expiring, datacenters could continue running those GPUs for a long time.
                • pclmulqdq18 hours ago
                  Which GPUs have been running for decades that you're referring to? The A100s that are 4 years old? MTBFs for GPUs are about 5-10 years, and that's not about fans. AWS and the other clouds have a 5-8 year depreciation calendar for computers. That is not "decades."

                  You can keep a server running for 10-15 years, but usually you do that only when the server is in a good environment and has had a light load.

                  • trenchpilgrim18 hours ago
                    > Which GPUs have been running for decades that you're referring to? The A100s that are 4 years old? MTBFs for GPUs are about 5-10 years, and that's not about fans.

                    I said solid state components last decades. 10nm transistors have been a thing for over 10 years now, and other than manufacturer defects they don't show any signs of wearing out from age.

                    > MTBFs for GPUs are about 5-10 years, and that's not about fans.

                    That sounds about the right time for a repaste.

                    > AWS and the other clouds have a 5-8 year depreciation calendar for computers.

                    Because the manufacturer warranties run out after that + it becomes cost efficient to upgrade to lower power technology. Not because the chips are physically broken.

        • nemomarx20 hours ago
          I swear during the earlier waves of bitcoin mining (before good ASICs came out) people ran them overclocked and did cause damage. Used GPUs were pretty unreliable for a while there.
          • trenchpilgrim19 hours ago
            1. Miners were clocking them beyond the factory approved speeds - something not needed for AI, where the bottleneck is usually VRAM, not clock speed.

            2. While comprehensive studies were never done, some tech channels did some testing and found used GPUs to be generally reliable or easily repairable, when scamming was excluded. https://youtu.be/UFytB3bb1P8

    • enord20 hours ago
      Wait… are you betting on exponential or logarithmic returns?
    • bee_rider20 hours ago
      The full quote is:

      > Most of the money is being spent on incredibly expensive GPUs that have a 1-3 year lifespan due to becoming obsolete quickly and wearing out under constant, high-intensity use.

      So it isn’t entirely tied to the rate of obsolescence, these things apparently get worn down from the workloads.

      In terms of performance improvement, it is slightly complicated, right? It turns out that it was possible to do ML training on existing GPGPU hardware. Then there was a spurt of improvement as they went after the low-hanging fruit for that application…

      If we’re talking about what we might be left with after the bubble pops, the rate of obsolescence doesn’t seem that relevant anyway. The chips as they are after the pop will be usable for the next thing or not, it is hard to guess.

      • bitmasher919 hours ago
        The failure rate of GPUs in professional datacenter environments is overestimated by the general public because of the large number of overclocked and undercooled cards used for GPU mining that hit eBay.
        • bradleyjg19 hours ago
          What’s the strategy when one does die? It’s just left in place until it’s worth it to pull the entire rack?
          • wmf16 hours ago
            No. When a single GPU fails it makes the whole 8-GPU server useless so it needs to be swapped promptly.
            • bradleyjg3 hours ago
              Interesting. So that implies all these new data centers are more labor-intensive than the pre-LLM setup, which AFAIK was largely "build it out and lock the door."
    • lossolo19 hours ago
      I'm still waiting to get a used Nvidia A100 80 GB (released in 2020) for well under $10,000.
  • scellus20 hours ago
    He writes as if only datacenters and network equipment remain after the AI bubble bursts. Like there won't be any AI models anymore, nothing left after the big training runs and trillion-dollar R&D, and no inference served.
    • rjh2920 hours ago
      Who's going to pay to run those models? They are currently running at a huge loss.
      • WalterSear19 hours ago
        Anthropic said their inference is cash positive. I would be very surprised if this isn't the norm.
        • timmytokyo15 hours ago
          As if inference exists in a bubble. Driving a car from point A to point B costs $0, as long as you exclude the cost of the car or the fuel you purchased before you were at point A.
        • rich_sasha13 hours ago
          I believe that; equally, it's so unverifiable that it's a point of faith.

          I'm not suggesting it's an outright lie, but rather that it's easy to massage the costs to make it look true even if it isn't. E.g., does GPU cost go into inference cost or not?

        • surgical_fire19 hours ago
          I would be surprised if they are being honest.
          • squidbeak18 hours ago
            I'd be more surprised if they didn't know their own business costs.
            • wmf16 hours ago
              There's an accounting question of whether they count free tier inference as COGS or marketing.
          • tayo4216 hours ago
            Aren't they taking investor money? It would be a huge scandal if they're lying?
      • harvey920 hours ago
        I can run quite useful models on my PC. Might not change the world, but I got a usable transcript of an old foreign-language TV show and then machine-translated it to English. It is not as good as professional subtitles, but I wasn't willing to pay the cost of that option.
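
        A minimal sketch of that workflow, assuming the openai-whisper package (the file name is a placeholder) - Whisper can transcribe and translate to English in one pass:

            import whisper  # pip install openai-whisper

            model = whisper.load_model("small")
            # task="translate" transcribes non-English speech and renders it in English
            result = model.transcribe("episode.mp3", task="translate")
            print(result["text"])
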
        • mmh000017 hours ago
          I did something similar with Whisper a year or so ago.

          9 years ago, when my now wife and I were dating, we took a long cross-country road trip, and for a lot of it, we listened to NPR's Ask Me Another (a comedy trivia game).

          Anyway, on one random episode, there was a joke in the show that just perfectly fit what we were doing at that exact moment. We laughed and laughed and soon forgot about it.

          Years later, I wanted to find that again and purposely recreate the same moment.

          I downloaded all 300 episodes as MP3s. I used Whisper to generate text transcripts, followed by a little bit of grepping, and I found the one 4-second joke that otherwise would have been lost to time.
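
          A minimal sketch of that pipeline, assuming the openai-whisper package and hypothetical file paths:

              import glob
              import whisper  # pip install openai-whisper

              model = whisper.load_model("base")
              for mp3 in sorted(glob.glob("episodes/*.mp3")):
                  # transcribe each episode and save the text next to the audio
                  text = model.transcribe(mp3)["text"]
                  with open(mp3 + ".txt", "w") as f:
                      f.write(text)
              # afterwards: grep -il "the remembered joke" episodes/*.txt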

          • jkestner16 hours ago
            Now, at the price you paid to retrieve that memory, is it a viable business model?
            • mmh000015 hours ago
              I downloaded 2GiB of data and let a script run for 56 hours. Besides a bit of my time, which I found to be enjoyable, it didn't cost me anything.

              Maybe you could argue it cost some electricity, but... In reality, it meant my computer, which runs 24/7 pulling ~185W, was running at ~300W for 56 hours. Thus: 300 - 185 = 115W × 56h = 6.44kWh @ $0.13 per kWh = $0.85 + tax.
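
              For anyone checking the arithmetic, the same calculation in a few lines of Python:

                  marginal_watts = 300 - 185           # extra draw while the job ran
                  kwh = marginal_watts * 56 / 1000     # 56 hours -> 6.44 kWh
                  print(round(kwh * 0.13, 2))          # $0.13/kWh -> ~$0.84 before tax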

              So... Yes, it was very much worth $0.85 to make my wife happy.

              • kcexn4 hours ago
                It's a little bit more complicated than that if you were running a business.

                You would want to add the cost of your network and hardware depreciating over that timeframe, and you probably can't just ignore the baseline 185W: if you are Anthropic, the idle power draw wouldn't be needed if you weren't expecting to serve AI traffic.

                So, let's say $0.02 per hour ($1/50 roughly). That's about $15 per month per user. Let's call it $10 per month per user since users aren't constantly hammering the service. To support a big sales and marketing engine, you would like to be selling subscriptions for $100+ per month. I'm just not sure people are prepared to pay that for AI in its current form.
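
                A sketch of that estimate, where every number is the rough assumption above rather than a measured figure:

                    cost_per_hour = 1 / 50         # the ~$0.02/hr estimate above
                    hours_per_month = 24 * 30
                    active_fraction = 2 / 3        # assumed: users aren't hammering it constantly
                    print(cost_per_hour * hours_per_month * active_fraction)  # ~$9.6/month/user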

              • fennecbutt2 hours ago
                Damn, I hope you realise how cheap that electricity is.
        • joshuahedlund19 hours ago
          Won’t those models gradually become outdated (for anything related to events that happen after the model was trained, new programming languages or framework versions, etc.) if no one is around to continually re-train them?
          • jay_kyburz17 hours ago
            They should be fine for things that don't change. (which is a lot of stuff)

            If you are feeding the LLM a report, and asking it for a summary, it doesn't need the latest updates from Wikipedia or Reddit.

        • surgical_fire19 hours ago
          "we will be left with local models that can be sort of useful but also sort of sucks" is not really a great proposition for the obscene amount of money being invested in this.
      • mike_hearn18 hours ago
        There's a gazillion use cases for these things in business that aren't even beginning to be tapped yet. Demand for tokens should be practically unlimited for many years to come. Some of those ideas won't be financially viable but a lot will.

        Consider how much software is out there that can now be translated into every (human) language continuously, opening up new customers and markets that were previously being ignored due to the logistical complexity and cost of hiring human translation teams. Inferencing that stuff is a no brainer but there's a lot of workflow and integration needed first which takes time.
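
        A minimal sketch of what such a continuous localization pass could look like, assuming the openai Python package with an API key in the environment; the model name and strings are placeholders:

            from openai import OpenAI

            client = OpenAI()
            strings = {"greeting": "Welcome back!", "cta": "Start your free trial"}

            def localize(text, lang):
                # one inference call per UI string; batching would cut costs further
                resp = client.chat.completions.create(
                    model="gpt-4o-mini",  # placeholder model name
                    messages=[{"role": "user", "content":
                               f"Translate this UI string to {lang}, keeping placeholders intact: {text}"}],
                )
                return resp.choices[0].message.content

            catalog_de = {key: localize(s, "German") for key, s in strings.items()}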

      • quesera20 hours ago
        Running the models is cheap. That will be worthwhile even if the bubble pops hard. Not for all of the silly stuff we do today, but for some of it.

        Creating new LLMs might be out of reach for all but very well-capitalized organizations with clear intentions, and governments.

        There might be a viable market for SLMs though. Why does my model need to know about the Boer wars to generate usable code?

        • mordymoop16 hours ago
          Perhaps surprisingly, considering the current stratospheric prices of GPUs, the performance-per-dollar of compute is still rising faster than exponentially. In a handful of years it will be cheap to train something as powerful as the models that cost millions to train today. Algorithmic efficiencies also stack up and make it cheaper to build and serve older models even on the same hardware.

          It’s underappreciated that we would already be in a pretty absurdly wild tech trajectory just due to compute hyperabundance even without AI.

      • dcre17 hours ago
        They are obviously running free users at a loss. Can you point to evidence of negative margins on subscriptions and enterprise contracts?
      • logicchains20 hours ago
        They're not running at a loss. Training runs at a loss, but the models are profitable to serve if you don't need to continuously train new models.
        • jayd1618 hours ago
          But you do or you're missing current events, right?
          • dcre17 hours ago
            Not at all, otherwise models with knowledge cutoffs of six months to a year ago (all current SOTA models) would be useless. Current information is fed into the model as part of the prompt. This is why they use web search.

            The main reason they train new models is to make them bigger and better using the latest training techniques, not to update them with the latest knowledge.
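
            A minimal sketch of that pattern - fresh information goes into the prompt, not into the weights. Assumes the openai Python package; the snippets and model name are placeholders:

                from openai import OpenAI

                client = OpenAI()
                snippets = ["<search result 1>", "<search result 2>"]  # hypothetical web-search output
                # stuff the retrieved text into the prompt so the model can cite it
                prompt = ("Answer using only the sources below.\n\n"
                          + "\n\n".join(snippets)
                          + "\n\nQuestion: what happened this week?")
                resp = client.chat.completions.create(
                    model="gpt-4o-mini",  # placeholder model name
                    messages=[{"role": "user", "content": prompt}],
                )
                print(resp.choices[0].message.content)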

          • jay_kyburz17 hours ago
            I'm trying to avoid getting into the habit of asking LLMs about current events, or really any events. Or really facts at all.

            I think LLMs work best when you give them data and ask them to try to make sense of it, find something interesting, or spot some problem - to see something I can't see. Then I can go back to the original data and make sure it's true.

          • fragmede17 hours ago
            There are a number of techniques to modify a model post-training. Some of those techniques allow adding current events to the model's "knowledge" without having to do an entire from-scratch training run, saving money.
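
            One common example is LoRA-style adapters, which train a small set of extra weights on top of a frozen base model. A minimal sketch with the Hugging Face peft library, using gpt2 as a stand-in base:

                from transformers import AutoModelForCausalLM
                from peft import LoraConfig, get_peft_model

                base = AutoModelForCausalLM.from_pretrained("gpt2")   # small stand-in model
                config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"])
                model = get_peft_model(base, config)
                model.print_trainable_parameters()  # a tiny fraction of the full model
                # ...then fine-tune `model` on recent text instead of retraining from scratch
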
      • antonvs20 hours ago
        I run models for coding on my own machines. They’re a trivial expense compared to what I earn from the work I do.

        The “at a loss” scenario comes from (1) training costs and (2) companies selling tokens below market to get market share. Neither of those imply that people won’t run models in future. Training new frontier-class models could potentially become an issue, but even that seems unlikely given what these models are capable of.

        • surgical_fire19 hours ago
          It's unclear if people would pay the price to use them if they were not below market.

          I have access to quite a few models, and I use them here and there. They are sort of useful, sometimes. But I don't pay directly for any of them. Honestly, I wouldn't.

        • Juliate20 hours ago
          Ok, running them locally, that's definitely a thing.

          But then, without this huge financial and tech bubble that's driven by these huge companies:

          1/ will those models evolve, or new models appear, for a fraction of the cost of building them today?

          2/ will GPU (or their replacement) also cost a fraction of what they cost today, so that they are still integrated in end-user processors, so that those model can run efficiently?

          • azeirah20 hours ago
            Given the popularity, activity, and pace of innovation seen on /r/LocalLLaMa, I do think models will keep improving. Likely not at the same pace as today, since it's mostly enthusiasts with a budget for a fancy setup in a garage, independent researchers, and smaller businesses doing research there, but those people love tinkering.

            These people won't sit still and models will keep getting better as well as cheaper to run.

            • antonvs14 hours ago
              No one on LocalLlama is training their own models. They’re working with foundation models like Llama from Meta and tweaking them in various ways: fine tuning, quantizing, RAG, etc. There’s a limit to how much improvement can be made like that. The basic capabilities of the foundation model still constrain what’s possible.
      • qgin20 hours ago
        The models get more efficient every year and consumer chips get more capable every year. A GPT-5 level model will be on every phone running locally in 5 years.
        • qgin13 hours ago
          Why such a reaction to this statement? Is this not the track we're on?
        • swarnie20 hours ago
          Can i sign up for an alterative future please? This one sounds horrendous.
    • myhf12 hours ago
      Is there a genuine use case for today's models, other than for identifying suckers? You can't even systematically apply an LLM to a list of text transformation tasks, because the ability to produce consistent results would make them less effective sycophants.
    • Juliate20 hours ago
      The point is: after the bubble bursts, will there be enough funds, cash flow, and... a viable market to keep these running?
      • muldvarp19 hours ago
        Inference is not that expensive. I'd argue that most models are already useful enough that people will pay to run them.
        • rootusrootus19 hours ago
          At $20/month for Claude, I'm satisfied. I'll keep paying that for what I get from it, even if it never improves again.
          • Juliate6 hours ago
            Of course, but my point is that I don't think it's economically sustainable. If innovation/funding in AI stalls, those $20 will likely skyrocket fast.
  • paulhodge15 hours ago
    AI is too useful to fail. Worst case with a bust is that startup investment dries up and we have a 'winter' of delayed improvement. But people aren't going to stop using the models we have today.
  • wmf16 hours ago
    I think several of the assumptions are wrong.

    GPUs will last much longer after the crash because there won't be any money available to replace them. You can either keep running the existing GPUs or throw them in the trash. The GPUs will keep running as long as they can generate enough revenue from inference to cover electricity. Tokens will become very cheap but free tokens might go away.
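
    That breakeven is easy to sketch; every number below is a hypothetical placeholder, not a real figure:

        power_kw = 0.7                  # assumed draw for one GPU plus overhead
        price_per_kwh = 0.08            # assumed electricity cost at a cheap site
        tokens_per_sec = 1500           # assumed inference throughput
        usd_per_m_tokens = 0.10         # assumed post-crash token price

        hourly_cost = power_kw * price_per_kwh                           # ~$0.06/hr
        hourly_revenue = tokens_per_sec * 3600 / 1e6 * usd_per_m_tokens  # ~$0.54/hr
        print("keep running:", hourly_revenue > hourly_cost)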

    AI datacenters aren't that specialized. You can buy a 1 GW datacenter and put 300 MW of equipment in it and sell 2/3 of the gas generators. You'll have to buy some InRow units.

    The AI stack isn't as proprietary as it sounds. GPU rental exists today and it's pretty interoperable. Ironically, Nvidia's moat has made their GPUs a de facto standard that is very well understood and supported by whatever software you want.

  • rz2k19 hours ago
    Local/open-weight models are already incredibly competent. Right now a Mac Studio with 256GB can be found for less than $5000, and an equivalent workstation will likely be 50% cheaper in a year. If anything that price is higher because of the boom, rather than subsidized by a potential bubble. It can run an 8-bit quant of GPT-OSS 120B, or a 4-bit quant of GLM-4.6, using only an extra 100-200W. That energy use comes out to roughly 900 joules, or about a quarter of a watt-hour, per query and response, and is already competitive with the power efficiency of even Google's offerings.
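
    As a back-of-the-envelope check on that figure, assuming ~200W of extra draw and a few seconds of generation per response:

        extra_watts = 200                         # assumed marginal draw while generating
        seconds_per_reply = 4.5                   # assumed generation time per response
        joules = extra_watts * seconds_per_reply  # 900 J
        print(joules / 3600)                      # -> 0.25 Wh per query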

    I think that people doing work in many professions with these offline tools alone could more than double their productivity compared to their productivity two years ago. Furthermore if the usage was shared in order to lower idle time, such as 20 machines for 100 workers, the initial capital outlay is even lower.

    Perhaps investors will not see the returns they expect, but it is difficult to imagine how even the current state of AI doesn't vastly change the economy. There could be significant business failures among cloud providers and attempts to rapidly increase the cost of admission to closed models, but there's essentially no possibility of productivity regressing to pre-AI levels.

    • tyleo19 hours ago
      I have an M4 MBP and I also think Apple is set up quite nicely to take real advantage of local models.

      They already work on the most expensive Apple hardware. I expect that price to come down in the next few years.

      It’s really just the UX that’s bad but that’s solvable.

      Apple isn’t having to pay for each user's power and usage, either. They sell hardware once, and folks pay with their own electricity to run it.

      • hiq18 hours ago
        Your comment made me realize that there's also the benefit of not having to handle the hardware depreciation, it's pushed to the customer. And now Apple has renewed arguments to sell better machines more often ("you had ChatGPT 3-like performance locally last year, now you can get ChatGPT 4-like performance if you buy the new model").

        I know folks who still use some old Apple laptops, maybe 5+ years old, since they don't see the point in changing (and indeed, if you don't work in IT and don't play video games or run other power-demanding workloads, I'm not sure it's worth it). Having new models with some performant local LLM built in might change this for the average user.

      • fennecbutt7 hours ago
        Lmao, it took Apple like a decade or so after everyone else to offer 16GB of RAM as default.

        You won't be getting cheap Apple machines chock full of RAM any time soon, I can tell you that. That goes against Apple's entire pricing structure/money-making machine.

  • nuc1e0n15 hours ago
    Centuries ago, the building materials for castles were taken by the locals and reused to build their houses. Those data centres have a lot of sheet metal in them. Air conditioning units as well. The upgraded power networks around those data centres will alter where people live too. Oh, and there are a lot of electric motors with copper wire in all the fans and hard drives those computers have.
  • zkmon20 hours ago
    Also, the ecosystem plays the biggest controlling role in the bubble and its aftermath - the ecosystem of social, political, and business developments. The dotcom aftermath still had wind from all the ecosystem trends, which brought the dotcom sector back with bigger force. If the post-AI-hype world still has high priority for these talking bots, then maybe it's comparable to dotcom. If the world has other, bigger basic issues that need attention, then yes, it could become a pile of silent silicon.
  • firefoxd18 hours ago
    I wrote a similar article (not published yet), but my conclusion was "free GPUs for everyone," or at least cheap ones. Right now H100s are very specialized for the AI pipeline, but so were GPUs before the AI boom. I expect we will find good uses for them.
    • cheschire16 hours ago
      Is folding@home still a thing?
  • becomevocal11 hours ago
    An opportunity to provide more accessible compute for non-AI tasks via new methods of utilizing GPUs?
  • dinobones20 hours ago
    GPUs still won't be cheap
    • lifestyleguru20 hours ago
      What will be the next thing eating all GPUs, after crypto and now AI?
      • tobias319 hours ago
        The autonomous drones fighting in the next war (let's hope not...).
        • lifestyleguru11 hours ago
          The only hope is that there won't be enough electricity, and that the population won't be deprived of it too brutally.
        • WalterSear19 hours ago
          IMHO, it will be autonomous robotics - one way or another.
      • willis93618 hours ago
        I can't tell you the name, but it will be another scam all the same.
  • flyinglizard20 hours ago
    I admit to only being in this industry for three decades now, and to only designing and implementing the thermal/power control algo of an AI chip family for three of those years, but it's the first time I've heard of chips "wearing out under high-intensity use".
    • bsaul20 hours ago
      Thanks for that comment. I know absolutely nothing about chip design, but I too was under the assumption that chips, like anything, wear out, and that the more you use them, the more they do.

      Is the wear so small that it’s simply negligible?

      • flyinglizard20 hours ago
        As long as you keep temperatures and currents in check, there's no reason for a chip under load to fare worse than an idle chip. Eventually, maybe, but not in the 5-10 year lifespan expected of semiconductors.
        • sam_bristow19 hours ago
          Wasn't there a phenomenon where GPUs retired from crypto mining operations were basically cooked after a couple of years? Likely because the miners weren't keeping temperatures in check and were just pushing the cards to their limits.
    • cyberax19 hours ago
      Chips absolutely do wear out. Metal interconnects suffer electromigration and dopants diffuse, and higher temperatures make both happen faster. Discrete components like capacitors also tend to fail over time.

      Is it going to be that significant though? No idea.

      • ACCount3718 hours ago
        Depends on the design, and how hard you push it.

        Just ask Intel what happened to 14th gen.

        It's not normally an issue, but the edge cases can be very sharp. Otherwise, the bigger concern is the hardware becoming obsolete because of new generations being significantly more power efficient. Over a few years, the power+cooling+location bill of a high end CPU running at 90% utilization can cost more than the CPU itself.

      • pixl9718 hours ago
        Honestly it depends on a whole lot. If they are running 'very' hot, yea they burn out faster. If they have lots of cooling and heating cycles, yea, they wear out faster.

        But with that said, machines that run at a pretty constant thermal load, within the rated range of their capacitors, can run a very long time.

  • pizzly18 hours ago
    If there is a downturn in AI use due to a bubble, then the countries that have built up their energy infrastructure with renewable energy and nuclear (both have decade-long payback periods after the initial investment) will have cheaper electricity, which will lead to a future competitive advantage. Gas-powered plants, on the other hand, require constant gas to convert to electricity; the price of gas would set the price of electricity regardless, and thus confer very little advantage.
  • archerx20 hours ago
    I believe the next step will be robotics and getting A.I. to interact with the physical world at human fidelity.

    Maybe we can finally have a Rosie from the Jetsons.

    • blibble19 hours ago
      > Maybe we can finally have a Rosie from the Jetsons.

      just what I want, a mobile Alexa that spews ads and spies on me 24/7

      • archerx9 hours ago
        I don’t think Amazon will be the only one making them so maybe don’t buy one from a shitty company.
    • trollbridge19 hours ago
      Robots still haven’t come close to replicating human and animal touch as a sense, and LLMs don’t do anything to help with that.
      • archerx9 hours ago
        Have you seen what Boston Dynamics has been up to? Progress is happening faster than you think.
        • card_zero7 hours ago
          So, they released a new gripper with touch sensors.

          Then there's this article:

          https://rodneybrooks.com/why-todays-humanoids-wont-learn-dex...

          Talking about human hands having tens of thousands of receptors of several different types, and the difficulty of tasks like picking up a match, and the trouble with the project of learning dexterity by brute force.

      • pixl9718 hours ago
        I mean, even the human brain is broken up into different parts, and the parts that do touch are insanely old compared to higher thinking. The LLM parts tell the robot parts the plate should go to the kitchen.
  • notepad0x9019 hours ago
    The AI "bubble" won't burst just like the "internet bubble" didn't burst.

    The dotcom bubble was a result of investors jumping on the hype train all at once and then getting off it all at once.

    Yes, investors will eventually find another hype train to jump on, but unlike in 2000, we have tons more retail investors, and AI is not a brand-new tech sector; it's built upon the existing, well-established, "too big to fail" internet/ecommerce infrastructure. Random companies slapping AI on things will fail, but all the real AI use cases will only expand and require more and more resources.

    OpenAI alone just hit 800M MAU. That will easily double in a few years. There will be adjustments, corrections, and adaptations, of course, but the value and wealth it generates are very real.

    I'm no seer, I can't predict the future but I don't see a massive popping of some unified AI bubble anytime soon.

    • dcminter18 hours ago
      The dot com crash was a thing though. The bubble burst. It's just that there was real value there so some of the companies survived and prospered.

      Figuring out which was which was absolutely not possible at the time. Not many people foresaw Sun Microsystems being a victim, nor was it obvious that Amazon would be a victor.

      I wouldn't bet my life savings on OpenAI.

      • notepad0x9012 hours ago
        Google, Microsoft, Apple: the value it's generating for them alone is enough demand to refute the "bubble" claims. The hype bubble will surely burst, but people are using it every day. This is like saying the "crack cocaine" bubble will burst once people hear about its side effects (or alcohol, lol). A reduction, sure. But not a crash.

        I wouldn't bet my life savings on OpenAI either, FWIW.

    • ACCount3718 hours ago
      People don't grasp just how insane that "800M MAU and still growing" figure is. CEOs would kill people with their own bare hands to get this kind of userbase.

      OpenAI has ~4B of revenue already, and they aren't even monetizing aggressively. Facebook has an infinite money glitch, and can afford to put billions in the ground in pursuit of moonshots and Zuck's own vanity projects. Google is Google, and xAI is Elon Musk. The most vulnerable frontier lab is probably Anthropic, and Anthropic is still backed by Amazon and, counterintuitively, also Google.

      At the same time: there is a glut of questionable AI startups, extreme failure rate is likely - but they aren't the bulk of the market, not by a long shot. The bulk of the "AI money" is concentrated at either the frontier labs themselves, or companies providing equipment and services to them.

      The only way I see for the "bubble to pop" is for multiple frontier labs to get fucked at the same time, and I just don't see that happening as it is.

      • hiq17 hours ago
        > they aren't even monetizing aggressively

        That's one of the issues with the current valuation: it's unclear to me how many of the 800M MAU will stick to ChatGPT once it monetizes more aggressively, especially if its competitors don't. How many use ChatGPT instead of Claude because the free version offers more? How many will just switch once it doesn't anymore?

        OpenAI is already at a 500B valuation; the numbers I could find indicate that this number grew 3x in a year. One can reasonably ask if there's some ceiling or if it can keep on growing indefinitely. Do we expect it to become more valuable than Meta or MSFT? Can they keep raising money at higher valuations? What happens if they can't anymore, given that they seem to rely on this even for their running costs, not even speaking about their investments to remain competitive? Would current investors be fine if the valuation is still 500B in a year, or would they try to exit in a panic sell? Would they panic even if the valuation keeps growing but at a more modest pace?

        • ACCount3717 hours ago
          My intuition is that ChatGPT has the mindshare of the "normal user" userbase, which is powerful. Ask Apple. You can still squander a lead like this, but not easily.

          GPT-5 was in no small part a "cost down update" for OpenAI - they replaced their default 4o with a more optimized, more lightweight option that they can serve at scale without burning a hole in their pockets. At the same time, their "top end" options for the power users willing to pay for good performance remain competitive.

          The entire reason why OpenAI is burning money is "their investments to remain competitive". Inference is profitable - R&D is the money pit. OpenAI is putting money into more infra, more research and more training runs.

  • mallowdram16 hours ago
    China has already won the game. We developed media and wealth management as individualistic; they approached the problems institutionally and built infrastructure at the pivotal point when renewables became affordable. They made an army of technical engineers; we scattered innovation across comp sci/VC acceleration/automation and sat on our asses as asset managers. AI works in secluded modules like robotics and revision. As a general tool it's a navel-gazing energy sink. AI was a lure we fell for, and we wasted money and energy while smart players like Apple and China knew their reality.
  • deadbabe19 hours ago
    I think we will enter a neo-Luddite era at some point post-AI boom where it suddenly becomes fashionable to live one’s life with simple retro style technology, and social networks and much of the internet will just become places for bitter old people to complain amongst themselves and share stupid memes. Social media was cool when it was more genuine, but it got increasingly fake, and now with AI it could reach peak-fake. If people want genuine, what is more genuine than the real world?

    It will become cool for you to become inaccessible, unreachable, no one knowing your location or what you’re doing. People might carry around little beeper type devices that bounce small pre-defined messages around on encrypted radio mesh networks to say stuff like “I’m okay” or “I love you”, and that’s it. Maybe they are used for contactless payments as well.

    People won’t really bother searching the web anymore they’ll just ask AI to pull up whatever information they need.

    The question is, with social media on the decline, with the internet no longer used for recreational purposes, what else are people going to do? Feels like the consumer tech sector will shrink dramatically, meaning that most tech written will be made to create “hard value” instead of soft. Think anything having to do with movement of data and matter, or money.

    Much of the tech world and government plans are built on the assumption that people will just continue using tech to its maximum utility, even when it is clearly bad for them, but what if that simply weren’t the case? Then a lot of things fall apart.

  • novok16 hours ago
    I think this article was written with AI; it has that contrastive sentence signature.

    And if you can't think of what to do with massive amounts of matrix-multiplying compute, that's pretty sad IMO. Not to mention the huge energy demand, which will probably create a peace dividend in energy generation for decades to come.

    We have also gotten a lot of open models we wouldn't have had without the AI boom competition, not to mention all the other interesting stuff coming out in the open-model world.

    Typical pessimist drivel.

  • cactusplant737420 hours ago
    > Most of the money is being spent on incredibly expensive GPUs that have a 1-3 year lifespan due to becoming obsolete quickly and wearing out under constant, high-intensity use.

    How about chips during the dotcom period? What was their lifespan?

    • bc569a80a344f9c20 hours ago
        That’s irrelevant, because the primary asset left after the dotcom bubble was fiber in the ground, with a lifetime measured in decades.
      • wmf16 hours ago
        After the dotcom crash there was also a flood of used Cisco routers, Sun servers, and Aeron chairs and people were very happy to buy them cheap. The same thing will happen again.
      • cactusplant737420 hours ago
        The author of the article is comparing past and present. It's not irrelevant to the article.
        • bc569a80a344f9c20 hours ago
          The whole point of the article is that the dotcom era produced long term assets that stayed valuable for decades after the bubble burst, and argues that the AI era is producing short term assets that won’t be of much use if the bubble bursts.
          • cactusplant737418 hours ago
            But the chips of the dotcom era were not long-term assets, and the author appears to claim they were.
            • bc569a80a344f9c18 hours ago
              Where? I scanned the article again. I can’t seem to find that.
    • stanac20 hours ago
      More or less all high-end hardware becomes obsolete, or in other words becomes second-class. The first difference is that networking hardware could at least be used for years, while compute/storage servers became obsolete faster than networking. The second is scale. Google's summary says that current investments are 17x greater than dotcom investments. It may be wrong about the number, but the investments are at least an order of magnitude larger.

      Maybe in the next decade we will have cheap gaming cloud offerings built on repurposed GPUs.

      • pixl9718 hours ago
        > It may be wrong about the number but investments are at least on an order of magnitude larger.

        Which is exactly what we expect if technological efficiency is increasing over time. Saying we've invested 1000x in aluminum plants and research over the first steel plants means we've had massive technological growth since then. It's probably better that it's actually moving around in an economy than just being used to consolidate more industries.

        >compute/storage servers became obsolete faster than networking

        In the 90s, extremely rapidly. In the 00s, much less rapidly. And by the 10s, servers and storage, especially the solid components like boards, lasted a decade or more. The reason servers became obsolete in the 90s is that much faster units came out quickly, not that the hardware died. In the 2010-2020 era I repurposed tons of data center hardware into onsite computers for small businesses. I'm guessing a whole lot less of that hardware 'went away' than you'd expect.

  • Havoc19 hours ago
    The demand for tokens isn't going anywhere, so the hardware will be used.

    ...whether it is profitable is another matter

  • dbg3141520 hours ago
    I hope we'll get back to building things that actually matter -- solutions that help real people; products that are enjoyable to use, and are satisfying to build.

    As the noise fades, and with luck, the obsession with slapping "AI" on everything will fade with it. Too many hype-driven CEOs are chasing anything but substance.

    Some AI tools may survive because they're genuinely useful, but I worry that most won't be cost-effective without heavy subsidies.

    Once the easy money dries up, the real engineers and builders will still be here, quietly making things that work.

    Altman's plea -- "Come on guys, we just need a few trillion more!" -- and that error-riddled AI slide deck will be the meme that marks the top of the market.

    • novaRom20 hours ago
      Democracy, personal freedoms, and rule of law are things that matter, but I am afraid we cannot get back to them quickly without significant effort. We need first to get back to sanity. In an authoritarian society AI is a tool of control; do we want it to be like that?
  • mrcwinn20 hours ago
    Without moralizing or assuming the worst intentions of oligarchs, globalists, evil capitalists, and so on, I still don’t understand how a consumption based economy continues to fund the build out (oil->Saudi Arabia->LPs->OpenAI) when the technology likely removes the income of its consumers. Help me understand.
    • dotnet0019 hours ago
      It looks like they're planning on funding it through circular purchase agreements and looting the rest of the world.
      • mrcwinn15 hours ago
        All of those “circular” bets only work if there are real purchase orders and real revenue. In fact, the two major deals are literally tied to that.

        So it doesn’t answer my question. Real GPUs are bought. Presumably because real consumption is taking place. Presumably because real value (productivity) is produced. Which in turn reduces knowledge work labor (maybe?). Which may destroy jobs. Which reduces excess income and consumption in… a consumer-driven economy.

        My point is, it’s actually not rational for the worst actors you could imagine. There’s a link in the chain missing for me logically and I think “billionaires” actually isn’t the answer.

        • kcexn4 hours ago
          The theoretical link that is missing is that AI isn't supposed to reduce knowledge work, it is supposed to free labor to concentrate on higher value work and investment.

          Nobody knows what that work is though, and nobody wants to talk about it lest everyone realizes this all might be a house of cards.

    • tim33317 hours ago
      Either it doesn't remove the income of its consumers (as it isn't doing at the moment), or we get universal basic income or something like that.
      • mrcwinn15 hours ago
        And how is UBI funded? Higher corporate taxes on (over time, as gross margins improve from inference cost deductions) more profitable companies?

        Or, rather than higher taxes, more money printing, because inflation stays so incredibly low thanks to advancements in productivity.

        That’s the only thing I can imagine so far. All other paths in my mind lead me thinking about uprising and political unrest.

        (I’m not a doomer. I’m literally just trying to imagine how it will work structurally.)

        • tim3337 hours ago
          I think either productivity advances and we get money printing/tax, or it doesn't and things continue as now.
    • throw23423423412 hours ago
      This one is easy. The new consumers will be capital holders, not wage earners, as the bottleneck to production becomes energy, resources, land, and capital rather than labor. Capitalism rewards scarcity and bottlenecks with a higher price per unit (i.e. higher profits), while whatever is efficient at a macro level gets optimized away, so the value delivered can be negatively correlated with price as supply overwhelms demand. You can see it already happening in economic statistics and, if you think hard, in general experience compared to previous decades (e.g. previously it was the personal computer that was seen as the big market; now it is SaaS, B2B, and high-net-worth individuals). There are whole reports and market commentary on this trend. It is a result of wealth inequality and of firms trying to sell to the people left holding capital and scarce, desired resources - and it has been happening outside of the AI space as well. Rich people as a market, compared to others, have only been growing, and AI should accelerate this trend.

      It's a big reason why there is a decent possibility that AI is dystopian for the majority of poor to upper-middle-class people. There will still be a market for things that are scarce (i.e. not labor): the other factors of production, such as land, resources, and capital. People who derive more income/wealth from these will win in an AI world; people who rely on their skills/intelligence/etc. will lose. Even with abundance, there's no reason to think you will have a share of it.

    • coderenegade19 hours ago
      Nothing in capitalism suggests that consumers have to be human. In fact, the whole enshittification trend suggests that traditional consumers are less economically relevant than they've ever been.
      • mrcwinn15 hours ago
        Actually you made me think of a fun scenario.

        A robot, who I will name Robort, over time becomes, say, 1/10th the price of human labor. But they do 10x the work quality and work 10x longer.

        In that scenario, you could pay them the same wage but produce significantly more economic value. The robot, who won’t care about material possessions or luxuries, could make purchases on behalf of a human - and that human, overworked in 2025 or jobless, would see a significant quality-of-life improvement.

        Help, an economist or someone smarter, check my math.

        • coderenegade13 hours ago
          Yeah, I meant more in the sense of businesses being the primary consumers for everything, but maybe you're right. In the same way that everyone owns a car today, maybe everyone will own one or more robots that do what they otherwise would have done, and get paid on their behalf. I think it's unlikely because machines are a lot more fungible than people, and I don't see businesses offloading ownership of the means of production in that way unless you're also covering hardware and running costs. You would also have to compete with very large corps that will almost certainly own vast worker capacity in the form of frontier ai and robots.

          But that kind of gets back to my original point, which was that I think the vast majority of economic interaction will be business to business, not just in value (the way it is today) but also in volume. I.e. in the same way that everyone has a license, maybe every family also has a registered household business, for managing whatever assets they own. The time it takes for self hosted models to approach frontier model performance isn't huge, and maybe we see that filter in to households that are able to do decent work at a cheaper rate.

        • imtringued3 hours ago
          When you think about it for a moment, you've merely described the relationship between a parent and a child.

          Children (under the age of 18) are less productive and spend significantly less time doing anything that could be considered work and they don't pay their parents in any significant capacity.

      • mrcwinn15 hours ago
        So, okay, that’s interesting. Let’s play it out. Unilever buys a Facebook ad in hopes a robot will buy a razor and shave? And even if a robot needed to shave, what reason would a company have to pay it a wage where before it was only a capex?
        • coderenegade13 hours ago
          More likely businesses than robots, but robots will consume on behalf of the business that owns them, same as currently happens with procurement.

          If people don't have the money to pay for shavers, shavers either won't be made, or they'll be purchased and owned by businesses, and leased for some kind of servitude. I'm not sure what kind of repayment would work if AI and machines can replace humans for most labor. Maybe we're still in the equation, just heavily devalued because AI is faster and produces higher quality output.

          Alternatively, government might have to provide those things, funded by taxes from businesses that own machines. I think, realistically, this is just a return to slavery by another name; it's illegal to own people as part of the means of production, but if you have a person analog that is just as good, the point becomes moot.

          I think it gets scary if the government decides it no longer has a mandate to look after citizens. If we don't pay taxes, do we get representation?

    • antonvs20 hours ago
      Just channeling amoral billionaires here, so don’t shoot the messenger, but if everything is automated by machines that you control, you no longer need to farm humans for capital.

      Not saying that’s even remotely realistic over the next century, but it does seem to be how some of these people think. Excessive wealth destroys intelligence, it doesn’t enhance it, as countless examples show.

  • alganet20 hours ago
    One key difference in all of this is that people were not predicting the dotcom bubble, so there was a surplus left after it popped. It was a surprise.

    This AI bubble already has lots of people with their forks and knifes waiting to capitalize on a myriad of possible surpluses after the burst. There's speculation on top of _the next bubble_ and how it will form, even before this one pops.

    That is absolutely disgusting, by the way.

    • tim33317 hours ago
      >people were not predicting the dotcom bubble

      I don't know if you were there at the time, but "wow, what a bubble" was much of the conversation back then. I don't know if it was predicting it so much as saying: gosh, just look - you can take any company, put ".com" in the name, and the stock goes up 5x. It's nuts.

      • alganet16 hours ago
        Was there anyone saying "wow, when this bubble bursts there will be infrastructure everywhere!"?

        It seems that in the current AI craze, some people stopped saying "it's nuts" and started saying "it will leave something nuts in its place!". As if the bubble had already burst!

        Do you understand better now what I am trying to say?

        • tim3337 hours ago
          Sort of.

          I don't remember "wow, there will be infrastructure everywhere!". It was kind of more "will they hurry up and build more infrastructure," as it was seriously bad in 1999. I only had slow, unreliable dial-up. General broadband availability didn't happen till about a decade later, and I still don't have fiber where I am in central London.

          That's one difference - people wanted internet access, were willing to pay and there was a shortage. This time they've built more LLM stuff than people really want and they have to shove it into everything for free.

    • pixl9718 hours ago
      > of _the next bubble_ and how it will form

      This is how humans have worked in pretty much every area of expansion for at least the last 500 years, and probably longer. It's especially visible now because of the amount of excess capital in the world from technological expansion, and because a lot of the known limits of physics have been run into, so further progress gets very expensive.

      If you want to stop the bubbles you pretty much have to end capitalism, which capitalists will fight you over. If AI replaces human thinking and robots replace human labor, that 'solves' the human capital problem but opens up a whole field of dangerous new ones.

      • alganet17 hours ago
        No, that's not how capitalism works. Debt is not the same thing as a speculation bubble.
        • pixl9715 hours ago
          Capitalism is what capitalism does.
  • maxglute20 hours ago
    AI chips and bespoke data centers are closer to tulips than to rail or fiber in terms of depreciated assets. They're not fungible stranded assets with a long shelf life or room for improvement. A bursting bubble would also mean the current approach to AI is inherently not economically viable, i.e. we'll still be inferencing off existing models but won't be pouring hundreds of billions into improving them. TL;DR: a much more all-or-nothing gambit than past infra booms.
    • tim33317 hours ago
      The Dutch tulip industry is doing fine.
      • maxglute13 hours ago
        Not tulip mania fine.
  • arisAlexis20 hours ago
    The singularity. I don't think most authors of articles like these understand what the AI build-up is about; they think it's another fad tool.
    • f4uCL9dNSnQman hour ago
      It is totally irrational: let's invest in AI companies on the off chance that the Singularity happens and makes money and the entire economic system obsolete.

      There is no winning scenario.

      • arisAlexisan hour ago
        It's not an off chance. Technology and innovation don't stop, except maybe with nukes.
    • mjhay20 hours ago
      You might want to listen to what people besides Sam Altman and Ray Kurzweil say, at least once in a while.
      • arisAlexisan hour ago
        That AI progress will stop because X? It's their normalcy bias (look it up) talking. There is no stopping.
      • tim33317 hours ago
        You don't need to listen to those guys. The basic idea is quite simple: if human brains are basically biological computers, then as electronic computers steadily improve, at some point they'll surpass the biological ones. That point, roughly, is the singularity, or whatever you want to call it.

        It occurred to me as a teenager, and I wrote an essay on it for my uni admissions exam back in 1981, but it's not rocket science, and the idea goes back at least to John von Neumann, who came up with the 'singularity' term in the 50s.

    • gizmo68620 hours ago
      We've had AI booms before. In terms of capabilities, this one is following exactly the same trajectory. Human researchers come up with some breakthrough improvement to AI methods; that results in exponential-like growth of capability as we both pick the low-hanging fruit the method offers and scale up the compute and data available to the limits of what is useful for the method. Then capabilities start to plateau, and there is a long tail of the new techniques being applied in specific situations as they get combined with domain-specific tuning and architectures.

      We are well into this process already. Core chat capabilities have pretty much stalled out. But most of the attempts at application are still very thin layers over chat bots.

      • arisAlexisan hour ago
        Do you have any proof of a progress plateau, or any proof that we have been in an exponential AI cycle in the past, or do you just think that it is so?
      • akomtu17 hours ago
        IMO, a big reason for this stagnation is that many researchers who could push AI further choose not to, because they came to believe that AI will do no good for humanity. It would be odd to have an intellect strong enough to advance AI and at the same time somehow not see the consequences of such 'progress'.
    • abathologist20 hours ago
      Indeed, the critics can only be so critical because they are not convinced of the revealed truth that we are materializing a machine god. How irrational.
      • dotnet0019 hours ago
        I'm hoping that it's sarcasm to be invoking a machine god while calling the non-believers irrational.
        • pixl9715 hours ago
          While the majority of people invoking the idea of a machine god are being irrational, the idea in itself has at least some merit. Some animals are more intelligent than others. Humans are far more intelligent than the vast majority of animals, and with tools and cooperation in large societies we live in an existence they are not equipped to comprehend. This invites the question of what the limits of intelligence are. Are they far beyond our capabilities? If so, what does that look like to us?
    • wmf16 hours ago
      Even if you believe in the Singularity, there will be a crash if investors stop investing before it hits.
      • arisAlexisan hour ago
        Why would they stop investing if they think, correctly, that AI progress is unstoppable?