41 points by puttycat 6 hours ago | 22 comments
  • cors-fls 5 hours ago
    The comparison only starts to make sense in a post-work society, where there is no working class whose existence depends on work.

    Unfortunately, these companies are working to eliminate jobs while not in any way making a path for a transition to a post-work society.

    • stevefan1999 5 hours ago
      They are not eliminating jobs; you still have jobs in 1984, which is where we are heading. You still need to hire someone to do the mass surveillance and policing, and to enforce laws that grow more draconian by the day. And you still need people to instigate (cough, "motivate") hate against something to keep society's momentum and shift its focus. All of that still takes labor; AI just makes it easier.

      We are indeed entering a post-job era, though. You've seen a lot of ghost postings and a lack of responses for years now: 6 out of 10 applications are ghosted, 2 out of 10 get a no, and only a few remain. Jobs are getting rarer and are going to be more of a status symbol than a means of breadwinning.

      • uncletaco 4 hours ago
        It really sucks that everyone's go-to dystopia is 1984, especially in this case: 1984 required the active participation of millions of citizens, whereas Brave New World maps better, with control enforced through comfort and irrelevance instead of force.

        The tech dystopia doesn’t even try to flatter us by assuming we’re important enough to oppress individually.

    • squidbeak 5 hours ago
      The elimination of jobs necessarily 'makes a path' to a post-work society. Post-work couldn't exist without it. Beyond that, it isn't in AI companies' power to shape economies and societies for post-work (which is what I assume you're really getting at here). All Altman, Amodei, Hassabis and the others can do is alert policymakers to what's coming, and they're trying pretty hard to do that, aren't they? Often in the teeth of the skepticism we see so much of on this site. Really, if policymakers won't look ahead, the AI companies can't be blamed for the bumps we're going to hit.
      • ahf8Aithaex7Nai 4 hours ago
        Yes, these people are publicly warning about the risks of AI. Altman is promoting regulation that clearly favors OpenAI. This is called regulatory capture. It aims to strengthen one's own position. Furthermore, the claim that these companies cannot shape economies is simply false. They decide how quickly they deploy, which industries they automate, whether they cooperate with unions, etc. These are all decisions that shape the economy.

        Widespread job losses as a path to post-work are about as plausible as a car accident as a path to bringing a vehicle to a standstill. You would have to be from another planet (or a sociopath) not to understand that this violates boundary conditions that we implicitly want to leave intact.

        • ben_w 3 hours ago
          > They decide how quickly they deploy, which industries they automate, whether they cooperate with unions, etc. These are all decisions that shape the economy.

          They control how quickly they deploy, but I don't see how they have any control over the rest: "which industries they automate" is a function of how well the model has generalised. All the medical information, laws and case histories, all the source code, they're still only "ok"; and how are they, as a model provider in the US, supposed to cooperate (or not) with a trade union in e.g. Brandenburg whose bosses are using their services?

          > Widespread job losses as a path to post-work are about as plausible as a car accident as a path to bringing a vehicle to a standstill.

          Certainly what I fear.

          Any given UBI is only meaningful if it is connected to the source of economic productivity; if a government is offering it, it must control that source; if the source is AI (and robotics), that government must control the AI/robots.

          If governments wait until the AI is ready, the companies will have the power to simply say "make me"; if the governments step in before the AI is ready, they may simply find themselves out-competed by businesses in jurisdictions whose governments are less interested in intervention.

          And even if a government pulls it off, how does that government remain, long-term, friendly to its own people? Even democracies do not last forever.

      • jplusequalt 5 hours ago
        >they're trying pretty hard to do that, aren't they

        How so? Throwing out the term "UBI" every once in a while doesn't miraculously make it economically viable.

        • squidbeak 4 hours ago
          Do you really pay so little attention to the space that you think this is all they do? Almost every public discussion or interview involving these figures turns at some point to society's unpreparedness for what's coming, for instance Amodei's interview last week.

          https://www.dwarkesh.com/p/dario-amodei-2

    • iberator 5 hours ago
      This!

      AI is taking jobs faster than it's creating new ones!

      No field is safe, and trying to switch careers over 40 is almost impossible. Even flipping burgers is nearly impossible (very hard to do without prior experience at that age).

    • UltraSane 5 hours ago
      They ARE; it's just that the post-work society is limited to the people who own the AIs.
  • erulabs 5 hours ago
    Big fumble to be unaware of how this offhand comment would be taken out of context.

    He’s clearly saying “lots of important things consume energy” not “let’s replace humans with GPUs” or “humans are wasteful too”.

    If Altman is to blame for anything, it’s that AI is a scissor-generator extraordinaire.

    • throwyawayyyy 5 hours ago
      I haven't watched the whole interview. In the clip, a couple of things jump out:

      1. He was speaking to a receptive audience. Note the head nods when he starts to make the comparison between the energy for bringing a human up to speed and that for training an AI.

      2. He is trying to rebut a _specific_ argument against his product, that it takes even more energy to do a task than a human does, once its training is priced in. He thinks that this is a fair comparison. The _fact_ that he thinks that this is a fair comparison is why I think it is too generous to say that this is just an offhand comment. Putting an LLM on an equal footing with a human, as if an LLM should have the same rights to the Earth as we do, is anti-human.

      It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman should see. The human will be here anyway.

      • nozzlegear 5 hours ago
        > 2. He is trying to rebut a _specific_ argument against his product, that it takes even more energy to do a task than a human does, once its training is priced in. He thinks that this is a fair comparison. The _fact_ that he thinks that this is a fair comparison is why I think it is too generous to say that this is just an offhand comment. Putting an LLM on an equal footing with a human, as if an LLM should have the same rights to the Earth as we do, is anti-human.

        > It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman should see. The human will be here anyway.

        Exactly. Perhaps in Altman's world, a human exists specifically to do tasks for him. But in reality, that human was always going to exist and was going to use those 20 years of energy anyway; they only happened to be employed by his rich ass when he wanted them to do a task. It's not equivalent to burning energy on training an LLM to do that task.

      • ncr100 5 hours ago
        Is Altman a scientist? I trust scientists to make fine-grained arguments!

        AFAIK a CEO's job includes setting the vision.

        This example sets a post-human (or less-valuable-human) paradigm.

      • ben_w 3 hours ago
        > as if an LLM should have the same rights to the Earth as we do,

        I don't see him calling for an LLM to have rights. I don't think this is part of how OpenAI considers its work at all. Anthropic is open-minded about the possibility, but OpenAI is basically "this is a thing, not a person, do not mistake it for a person".

        > It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman should see. The human will be here anyway.

        His point is flawed in other ways, like the limited competence of the AI, and the fact that even an adult human eating food for 20 years has an energy cost on the low end of the estimates for training a very small, very rubbish LLM, nowhere near the cost of training one that anyone would care about. Even the fancy models are only ok, not great, and there are lots of models being trained rather than this being a one-time thing. Or, in the other direction, each human needs to be trained separately, and there are 8 billion of us. What he says in the video doesn't help much either; it's vibes rather than analysis.

        But your point here is the wrong thing to call a flaw.

        The human is here anyway? First, no: *some* humans are here anyway, but various governments are currently raising pension ages due to the insufficient number of new humans available to economically support the people claiming pensions.

        Second: so what if it was yes? That argument didn't stop us substituting combustion engines and hydraulics for human muscle.

    • MattDaEskimo 5 hours ago
      The problem is that he is now framing AI versus humans as a competition rather than an augmentation.
    • YurgenJurgensen 5 hours ago
      For people rich enough to have dedicated PR staff talking in their field of expertise, there’s no such thing as an offhand comment.
    • xnx 5 hours ago
      It's fine to talk about choices in terms of comparisons, but this was a really stupid delivery.
  • accounting2026 4 hours ago
    I didn't read/hear it as reducing human life to 'training energy', but I don't like the comparison at the technical level.

    Firstly, the math isn't even close. A human being consumes maybe 15 MWh of food energy from years 0 to 20; modern frontier models take on the order of 100,000 MWh to train. That's a ~10,000x difference. Furthermore, the human is actively doing 'inference' (living, acting, producing) during those 20 years of training, and is also doing lots of non-brain stuff.

    Besides the energy math, it's comparing apples to oranges. A human brain doesn't start out as a blank slate; it has billions of years of evolutionary priors for language and spatial reasoning that LLMs have to teach themselves from scratch, which could explain why a human can do some things more cheaply. Also, the learning material available to a human is inherently created to be easily ingested by a human brain, whereas a blank LLM needs to build the capacity to process that data.

    Altman seems to hint at a comparison to the whole of human evolution, but that seems unfair in the other direction, because humans and human evolution had to make discoveries from scratch by trial and error, whereas LLMs get to ingest the final "good stuff". Either way you slice it, it's just not a good comparison, though not an 'inhuman' or immoral one.
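
    A back-of-the-envelope check of that ratio in Python (a minimal sketch; the 2,000 kcal/day and 100,000 MWh inputs are the rough assumptions above, not measured values):

      # Food energy for 20 years of a human vs. one assumed frontier training run.
      KWH_PER_KCAL = 4184 / 3.6e6                  # 1 kcal = 4184 J; 1 kWh = 3.6e6 J
      human_mwh = 2000 * KWH_PER_KCAL * 365.25 * 20 / 1000  # ~17 MWh of food energy
      model_mwh = 100_000                          # assumed frontier training run
      print(f"human: {human_mwh:.0f} MWh, model: {model_mwh:,} MWh, "
            f"ratio: {model_mwh / human_mwh:,.0f}x")  # -> ~5,900x, i.e. order 10^4

    The exact ratio moves with the calorie and training-cost assumptions, but it stays in the thousands either way.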

  • ncr100 5 hours ago
    Post-human thinking by the CEO is not helping me feel comfortable with the vision-setting going on at OpenAI.

    Edit: Or perhaps more correctly, "less valuable human". Which is more appropriate?

    • thenthenthen 4 hours ago
      Good question. It sounds like post-humanism, which even in, like, left art circles was considered 'interesting' ten years ago (like post-Anthropocene). These are not very useful terms, so I appreciate the nuance of 'less valuable human'. It is not so catchy though; maybe we need to dig deeper. I am sure this has been discussed before.
  • rspoerri 5 hours ago
    How many people is he willing to let starve for the sake of his ego, power and wealth?
    • morkalork 2 hours ago
      It's okay, we can just eat cake instead!
  • mhher 3 hours ago
    To me, the whole OpenClaw situation is proof enough of how desperate OpenAI must be for fresh (real, non-circular) cash.

    In that light, Altman saying things like that is not really surprising. On the contrary, it only reinforces their desperation to me.

  • juancn 3 hours ago
    An AI model takes about 100 to 150 MWh to be trained.

    A human at rest uses ~100 W, up to 400 W for an elite athlete under effort.

    So 20 years at 200 W (I'm being generous here) ends up being ~35 MWh, still cheaper, and inference still runs at under 200 W!
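
    Making the unit conversion explicit (a minimal sketch; power x time = energy, using the generous 200 W average above):

      # Energy of a constant 200 W human power draw sustained for 20 years.
      HOURS_PER_YEAR = 24 * 365.25
      avg_power_w = 200                            # generous; rest is ~100 W
      energy_mwh = avg_power_w * 20 * HOURS_PER_YEAR / 1e6
      print(f"{energy_mwh:.1f} MWh")               # -> 35.1 MWh over 20 years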

    • xnx 3 hours ago
      How much energy does it take to feed, clothe, house, entertain, and transport that human to 18? Probably $500K worth.
      • politelemon 3 hours ago
        How much does it take to build the data centers that house the inference, plus all the involved logistics, infrastructure setup, bribery, marketing, and the organisational structure behind it? Easily in the hundreds of billions.
      • robotpepi 2 hours ago
        > Probably $500K worth.

        What standard of living do you have!?

    • _DeadFred_ an hour ago
      The reductionism, comparing a human life to a corporate product, is disgusting, but it's valuable to see how they truly see the world they are creating.

      Their idea of a person's value seems to be lower than the Soviet communists' at this point: nothing but work units.

  • sc68cal 4 hours ago
    I think this reveals a great deal about the thinking of the ruling elites.

    The K-shaped recovery phenomenon demonstrated that the economy can continue to thrive when consumption by the lowest earners is replaced by concentrated consumption at the top. This demonstrated to the elites that, actually, we don't need as many consumers to grow the economy, and that it's possible to redistribute wealth upward without losing growth.

    These public comments just show that the elites are more and more comfortable making it explicit that there are too many "useless eaters" in their opinion, and that the change has been from considering just the Third World to be where these "useless eaters" are while still preserving an imperial core, to now considering everyone that isn't them, regardless of First or Third world, to be a useless eater.

    Very dangerous thinking, but at least it's out in the open now.

    They want to capture the entire value of everyone's labor and hoard it for themselves, and discard the people that produced it.

  • lich_king 5 hours ago
    I see some folks here defending Altman because it was an off-the-cuff remark in front of a receptive audience. But why does that make the comment acceptable? Would you forgive me if I talked about eating babies, but defended myself by saying that I was speaking to a receptive audience?

    Most charitably, it's a dumb thing to say. It compares two unrelated things if you see the value of human life to be more than just answering prompts. Less charitably, the argument is evil: if he was trying to make a sincere apples-to-apples comparison, it implies that he doesn't value human life beyond the labor his company can automate.

    I can understand edgy teenagers making arguments like that on LessWrong forums, but Altman ought to know better. He either doesn't, or he sincerely believes what the comment implies.

  • HeavyStorm 2 hours ago
    So he's comparing a human being to AI, finally showing what our AI overlords think of humanity: we're just wasteful resources to be replaced by more efficient tools.
  • jethronethro 4 hours ago
    Just more word salad from Altman.
  • jmfldn 5 hours ago
    This is a profound category error. What Altman reduces to a 20-year 'training' cycle fueled by 'energy' is what we, in the actual world, call life. It is a stunningly hollow perspective that uses the language of industrial output to describe the human experience. While he is likely being provocative to keep his product at the center of the cultural conversation, it probably exposes something about him.
    • csallen 4 hours ago
      This is a super disingenuous take. He was very obviously making a specific point, not trying to express a perspective on the value of humanity.
      • jmfldn 4 hours ago
        I understand he’s making a technical point about efficiency, but language isn't neutral and I think it betrays something deeper. It's such a glib and shallow point too that I think it should be called out since he has a track record of saying some incredibly shallow things about AI, people, politics, and everything really.
      • polotics 4 hours ago
        The meaning of a message is what has been understood.
    • oulipo2 4 hours ago
      Exactly why we need to rid ourselves (via taxes) of billionaires. Those people have way too much power, and are often stupid dumbasses who just got rich randomly (right place at the right moment, or because their parents were rich in the first place) but mostly spew stupid lunacies.
  • Fricken 4 hours ago
    One could feed several hundred thousand kids to adulthood for the cost of training OpenAI's biggest models.
  • DemocracyFTW2 2 hours ago
    "Why don't they eat cake?"
  • heliumtera 5 hours ago
    Who cares about humans, it's 2026.

    We only care about pelicans riding bicycles

  • kylehotchkiss 5 hours ago
    What a depressing view of life. I don't expect him to take on some religious or philosophical view, but come on: how could you grow up somewhere wonderful, start a successful company with a lot of people you probably like and enjoy working with, have enough money to buy an island, and still summarize life like that?

    I prefer Richard Branson's worldview. He's rich, but seeing the way he talks about his late wife and her memory warms my heart. I envy him for the human parts of his life, not just the success.

    • dk1138 4 hours ago
      Power just unequivocally screws up most people. This past year has really crystallized how few good leaders there are.
  • andsoitis 6 hours ago
    [flagged]
    • eli 5 hours ago
      I'm not sure it's possible to conclude what he actually believes from public statements. I do not trust him to tell the truth about anything related to AI.
    • iugtmkbdfil834 5 hours ago
      To be fair, it is not just him. There is an entire caste of people across organizations who see employees as a problem. It is absolutely fascinating to watch, because those people tend to be somewhere in the management class and appear to derive a fair amount of happiness from said managing (and we can argue whether those skills are any good).
      • lokar 5 hours ago
        This goes beyond just employees.

        His comparison denies the basic value of a human life.

    • reactordev 5 hours ago
      You would need empathy for that.
      • ahf8Aithaex7Nai 4 hours ago
        Ethics would suffice. Or a basic humanistic education. Unfortunately, that is precisely what these people seem to lack.
    • dyauspitr 5 hours ago
      Well, if you consider that the theoretical goal is a machine that has all the answers, then you'd understand why he thinks that way.
    • atomicnumber3 5 hours ago
      Is it possible to become wealthy like this AND value human life?

      Why does it turn out that every single billionaire is also some combination of narcissist, pedophile, petty tyrant, or just utter freakazoid?

      • while_true_ 5 hours ago
        Top philanthropists include Jamsetji Tata (donated $102.4 billion), Bill and Melinda Gates ($75.8 billion), and Warren Buffett (pledging to donate 99% of his wealth). Andrew Carnegie gave away 85% of his wealth, including the construction of over 2,500 public libraries.
        • grogenaut 5 hours ago
          Carnegie did that to whitewash his public image while he worked his workers nonstop, to the point of mutilation or death. When are you going to the library when you work 996 or more?
        • WillAdams 5 hours ago
          Gates took more than he gave, for example:

          https://www.folklore.org/MacBasic.html

        • saulpw 5 hours ago
          Only one billionaire has ever given away enough money while he was alive to stop being a billionaire. Ever. Pledges don't count. Also, Warren Buffett giving away 99% of his wealth still leaves him a billionaire.
          • lokar 4 hours ago
            Chouinard?
            • saulpw 2 hours ago
              Chuck Feeney

              Also I stand corrected, Chouinard is the other instance.

              • lokar an hour ago
                Still very rare
        • nozzlegear 5 hours ago
          Bill Gates is not a great example given the recent revelations surrounding him and the nature of his divorce in the latest batch of the Epstein files.
          • atomicnumber3 5 hours ago
            Yes. Colors his philanthropy.

            While I hope Warren Buffett isn't cut from the same cloth, the odds are looking quite bad. It would be nice to know there are some out there who can just be smart, get rich, and then NOT damn their immortal soul. But it's looking grim.

            • lokar 4 hours ago
              Experience would point to extreme wealth changing almost everyone who gains it, for the worse.
        • ossa-ma 5 hours ago
          [flagged]
      • bombcar 4 hours ago
        Because most people who are not some combination of the above tap out somewhere around the $100m-$500m mark or earlier, because they don't have any reason to get more.
      • samename 5 hours ago
        Power corrupts the mind. They live in a different world.
    • ben_w 5 hours ago
      He may well be as you say, but nothing in this video is evidence of that. To the extent he's a slimy sociopath, he's not openly twirling his metaphorical moustache here, and he's a lot better at hiding villainy than most of the better-known slimy sociopaths in the world today (for comparison, Musk actually tweeted "If this works, I'm treating myself to a volcano lair. It's time."; this isn't even at that level).

      He's responding to all the people very upset about how much energy AI takes to train.

      That said, a quick over-estimate of human "training" cost is 2500 kcal/day * 20 years = 21.21 MWh[0], which is on the low end of the estimates I've seen for even one single 8 billion parameter model.

      [0] https://www.wolframalpha.com/input?i=2500+kcal%2Fday+*+20+ye...

    • bitwize 5 hours ago
      The AI "movement" is hermetic magick. The goal is to bring about God in silico, because if you're not involved in so doing, God may punish you for eternity when he emerges:

      https://en.wikipedia.org/wiki/Roko's_basilisk

      Next to the might and terror of the machine God, mere humans are, individually, indeed as nothing...

      • ben_w 5 hours ago
        Most of the people working on AI, even those in the specific sub-domain where Roko's basilisk was coined (which isn't the majority of the field by a long shot), have been rolling their eyes at it since the moment it appeared.

        Even a brief moment of thought should reveal that, even if you think the scenario likely, there are an infinite number of potential equivalent basilisks and you'd need to pick the correct one.

        I'm less worried about Roko's basilisk*, and rather more worried about the people who say this:

          I think you have said in fact, and I'm gonna quote, development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. End quote. You may have had in mind the effect on, on jobs, which is really my biggest nightmare in the long term.
        
        - https://www.techpolicy.press/transcript-senate-judiciary-sub...

        Because this is clearly not taking the words themselves at face value; either you should dig in and say "so why should we allow it at all then?", or you should dismiss it with "I think you're making stuff up, why should we believe you about anything?", but you shouldn't misread such a blunt statement.

        (If you follow the link, Altman's response is… not one I find satisfying).

        * despite the people who do take it seriously, as such personalities have always been around and seldom cause big issues by themselves; only if AI gets competent enough to help them do this do they become a problem, but by that point hopefully it's also competent enough to help everyone stop them

        • salawat 5 hours ago
          >only if AI gets competent enough to help them do this do they become a problem, but by that point hopefully it's also competent enough to help everyone stop them

          Tell me something: have you ever built something you later regretted building? Like, you look back at it, accept that you did it, but realize that if you'd been a bit wiser or more knowledgeable about the world, you wouldn't have done it? In the moment you're doing the thing you'll regret, you don't know any better until the unpleasant consequences manifest, granting you experience.

          If you haven't experienced that yet, fine; but we shouldn't be betting on existential problems with "hopefully" if we can at all avoid it. Especially when that "hopefully" clause involves something we're choosing to craft, with means and methods we don't fully understand and aren't predictively ahead of, knowing that the way these methods work tends to generate (or provide the basis to generate) a thoroughly sycophantic construct.

          • ben_w 4 hours ago
            Sure.

            To your point, my P(doom) is 0.1, but the reason it's that low is that I expect a lot of people to use sub-threshold AI to do very dangerous things which render us either (1) unwilling or (2) unable to develop post-threshold AI.

            The (1) case includes people actually taking this all seriously enough, which as per your final paragraph, I agree with you that people are currently not.

            Things like Roko's basilisk are a strict subset of that 0.1; there's a lot of other dooms besides that one.

      • cedws 5 hours ago
        Sci-fi mumbo jumbo.
  • add-sub-mul-div 5 hours ago
    Real "This must hit so hard if you're stupid" moment.
  • drcongo 5 hours ago
    He really is a total piece of shit isn't he.
    • dk1138 5 hours ago
      He has proved it over and over and over again.
  • sxp 5 hours ago
    To add some math to the discussion:

    - A human uses between 100 W (a naked human eating 2,000 kcal/day) and 10 kW (first-world per-capita energy consumption).

    - Frontier models need something like 1-10 MW-years to train.

    - Inference requires 0.1-1 kW computers.

    So it takes thousands of human-years to train a single model, but they run at around the same wall-clock power consumption as a human. Depending on your personal opinion, they are also 0.1-1000x as productive as the median human in how much useful work (or slop) they can produce per unit time.
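
    A sketch of that human-years arithmetic (the inputs are the rough ranges above, not measurements):

      # Human-years of power represented by an assumed 1-10 MW-year training run.
      human_w = 100                                # naked human, ~2,000 kcal/day
      per_capita_w = 10_000                        # first-world per-capita consumption
      for mw_years in (1, 10):
          w_years = mw_years * 1e6
          print(f"{mw_years} MW-yr = {w_years / human_w:,.0f} human-years "
                f"({w_years / per_capita_w:,.0f} at per-capita rates)")
      # -> 10,000-100,000 human-years at 100 W; 100-1,000 at per-capita rates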

    • ncr100 5 hours ago
      The math is simpler: one human is irreplaceable by AI.

      Therefore their value is infinite, and Altman's hypothesis is toilet-paper thin.

      • thenthenthen 4 hours ago
        I remember when toilet paper was like DDR5.
    • cheeseblubber 5 hours ago
      The human brain is also the product of billions of years of evolution. We branched off from our common ancestor 7-9 million years ago. We encode quite a lot of structure and information that is essential for intelligence, so counting just our lifetime of training is incomplete.

      If you calculate 100 W x 24 h/day x 365 days/year x 7 million years, you get roughly 6,100,000 MWh of 'training'.

      • arcticbull 4 hours ago
        If you really want to go down that path, then AIs are the product of human ingenuity and labor, so you have to amortize all of that into AI training too. The numbers become pretty meaningless very quickly. That sand didn't up and start thinking on its own, you know.
      • grogenaut 4 hours ago
        That's the NRE (non-recurring engineering cost) of getting to where we are and having these LLMs.