Unfortunately these companies are working to eliminate jobs, but not in any way making a path for a transition to a post-work society.
We are indeed entering an era of job scarcity, though. You've seen a lot of ghost postings and non-responses for years now: 6 out of 10 applications are ghosted, 2 out of 10 get a no, and only a few go anywhere. Jobs are getting rarer and are going to become more of a status symbol than a means of breadwinning.
The tech dystopia doesn’t even try to flatter us by assuming we’re important enough to oppress individually.
Widespread job losses as a path to post-work are about as plausible as a car accident as a path to bringing a vehicle to a standstill. You would have to be from another planet (or a sociopath) not to understand that this violates boundary conditions that we implicitly want to leave intact.
They control how quickly they deploy, but I don't see how they have any control over the rest: "which industries they automate" is a function of how well the model has generalised. On all the medical information, laws and case histories, and all the source code, the models are still only "ok". And how are they, as a model provider in the US, supposed to cooperate (or not) with a trade union in e.g. Brandenburg whose bosses are using their services?
> Widespread job losses as a path to post-work are about as plausible as a car accident as a path to bringing a vehicle to a standstill.
Certainly what I fear.
Any given UBI is only meaningful if it is connected to the source of economic productivity; if a government is offering it, it must control that source; if the source is AI (and robotics), that government must control the AI/robots.
If governments wait until the AI is ready, the companies will have the power to simply say "make me"; if the governments step in before the AI is ready, they may simply find themselves out-competed by businesses in jurisdictions whose governments are less interested in intervention.
And even if a government pulls it off, how does that government remain, long-term, friendly to its own people? Even democracies do not last forever.
How so? Throwing out the term "UBI" every once in a while doesn't miraculously make it economically viable.
AI is taking jobs faster than it is making new ones!
No field is safe, and trying to switch careers over 40 is almost impossible. Even flipping burgers is nearly impossible; it's very hard to get hired without prior experience at that age.
He’s clearly saying “lots of important things consume energy” not “let’s replace humans with GPUs” or “humans are wasteful too”.
If Altman is to blame for anything, it’s that AI is a scissor-generator extraordinaire.
1. He was speaking to a receptive audience. You can see heads nodding when he starts to make the comparison between the energy it takes to bring a human up to speed and the energy it takes to train an AI.
2. He is trying to rebut a _specific_ argument against his product, that it takes even more energy to do a task than a human does, once its training is priced in. He thinks that this is a fair comparison. The _fact_ that he thinks that this is a fair comparison is why I think it is too generous to say that this is just an offhand comment. Putting an LLM on an equal footing with a human, as if an LLM should have the same rights to the Earth as we do, is anti-human.
It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman should see. The human will be here anyway.
> It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman should see. The human will be here anyway.
Exactly. Perhaps in Altman's world, a human exists specifically to do tasks for him. But in reality, that human was always going to exist and was going to use those 20 years of energy anyway; they only happened to be employed by his rich ass when he wanted them to do a task. It's not equivalent to burning energy on training an LLM to do that task.
AFAIK a CEO's job includes setting the vision.
This example sets up a "post-human" / "less valuable human" paradigm.
I don't see him calling for an LLM to have rights. I don't think this is part of how OpenAI considers its work at all. Anthropic is open-minded about the possibility, but OpenAI is basically "this is a thing, not a person, do not mistake it for a person".
> It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman should see. The human will be here anyway.
His point is flawed in other ways, like the limited competence of the AI, and how even an adult human eating food for 20 years has an energy cost on the low end of the estimates for training a very small and very rubbish LLM, nowhere near the energy cost of training one that anyone would care about. And even those fancy models are only ok, not great; and there are lots of models being trained, rather than this being a one-time thing. Or, in the other direction, each human needs to be trained separately and there are 8 billion of us. What he says in the video doesn't help much either; it's vibes rather than analysis.
But your point here is the wrong thing to call a flaw.
The human is here anyway? First, no: *some* humans are here anyway, but various governments are currently increasing pension ages due to the insufficient number of new humans available to economically support people who are claiming pensions.
Second: so what if it was yes? That argument didn't stop us substituting combustion engines and hydraulics for human muscle.
Firstly, the math isn't even close. A human being consumes maybe 15 MWh of food energy from years 0 to 20. Modern frontier models take on the order of 100,000 MWh to train. That's roughly a 10,000x difference. Furthermore, the human is actively doing 'inference' (living, acting, producing) during those 20 years of training, and is also doing lots of non-brain stuff.

Beyond the energy math, it's comparing apples to oranges. A human brain doesn't start out as a blank slate; it has billions of years of evolutionary priors for language and spatial reasoning that LLMs have to teach themselves from scratch, which could explain why a human can do some things more cheaply. Also, the learning material available to a human is inherently created to be easily ingested by a human brain, whereas a blank LLM needs to build the capacity to process that data from nothing.

Altman seems to hint at a comparison to the whole of human evolution, but that seems unfair in the other direction, because humans and human evolution had to make discoveries from scratch by trial and error, whereas LLMs get to ingest the final "good stuff". Either way you slice it, it's just not a good comparison, though not an 'inhuman' or immoral one.
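For anyone who wants to sanity-check it, here's the same back-of-envelope in a few lines of Python; the 2000 kcal/day and 100,000 MWh inputs are just the rough assumptions from the paragraph above, not measured values:

    # Back-of-envelope comparison; every input is a rough assumption, not a measurement.
    KCAL_TO_WH = 1.163                         # 1 kcal is about 1.163 Wh

    human_wh = 2000 * KCAL_TO_WH * 365 * 20    # ~2000 kcal/day of food energy for 20 years
    frontier_wh = 100_000 * 1_000_000          # assumed ~100,000 MWh for one frontier training run

    print(f"human, 20 years: {human_wh / 1e6:.0f} MWh")   # ~17 MWh
    print(f"ratio: {frontier_wh / human_wh:,.0f}x")        # ~6,000x, i.e. thousands of times more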
Edit: Or perhaps more correctly, "less valuable human". Which is more appropriate?
In that light, Altman saying things like that is not really surprising. On the contrary, it only reinforces their desperation to me.
A human at rest uses ~100 W, up to ~400 W for an elite athlete under effort.
So 20 years at 200 W (I'm being generous here) ends up being about 35 MWh, still cheaper, and the human's inference still runs at under 200 W!
Their idea of a person's value seems to be even lower than the Soviet communists' at this point: nothing but work units.
The K-shaped recovery phenomenon demonstrated that the economy can continue to thrive when consumption by the lowest earners is replaced by concentrated consumption from earners at the top. This showed the elites that we don't actually need as many consumers to grow the economy, and that it's possible to redistribute wealth upward without losing growth.
These public comments just show that the elites are more and more comfortable making it explicit that there are too many "useless eaters" in their opinion, and that the change has been from considering just the Third World to be where these "useless eaters" are while still preserving an imperial core, to now considering everyone that isn't them, regardless of First or Third world, to be a useless eater.
Very dangerous thinking, but at least it's out in the open now.
They want to capture the entire value of everyone's labor and hoard it for themselves, and discard the people that produced it.
Most charitably, it's a dumb thing to say. It compares two unrelated things if you see the value of human life to be more than just answering prompts. Less charitably, the argument is evil: if he was trying to make a sincere apples-to-apples comparison, it implies that he doesn't value human life beyond the labor his company can automate.
I can understand edgy teenagers making arguments like that on LessWrong forums, but Altman ought to know better. He either doesn't, or he sincerely believes what the comment implies.
I prefer Richard Branson's worldview. He's rich, but seeing the way he talks about his late wife and her memory warms my heart. I envy him for the human parts of his life, not just the success.
His comparison devalues the basic value of a human life.
Why does it turn out that every single billionaire is also some combination of narcissist, pedophile, petty tyrant, or just utter freakazoid?
While I hope Warren Buffett isn't cut from the same cloth, the odds are looking quite bad. It would be nice to know there are some out there who can just be smart, get rich, and then NOT damn their immortal soul. But it's looking grim.
He's responding to all the people very upset about how much energy AI takes to train.
That said, a quick over-estimate of human "training" cost is 2500 kcal/day * 20 years = 21.21 MWh[0], which is on the low end of the estimates I've seen for even one single 8 billion parameter model.
[0] https://www.wolframalpha.com/input?i=2500+kcal%2Fday+*+20+ye...
https://en.wikipedia.org/wiki/Roko's_basilisk
Next to the might and terror of the machine God, mere humans are, individually, indeed as nothing...
Even a brief moment of thought should reveal that, even if you think the scenario likely, there are an infinite number of potential equivalent basilisks and you'd need to pick the correct one.
I'm less worried about Roko's basilisk*, and rather more worried about the people who say this:
> I think you have said in fact, and I'm gonna quote, development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. End quote. You may have had in mind the effect on, on jobs, which is really my biggest nightmare in the long term.
- https://www.techpolicy.press/transcript-senate-judiciary-sub...

Because this is clearly not taking the words themselves at face value; either you should dig in and say "so why should we allow it at all then?" or you should dismiss it as "I think you're making stuff up, why should we believe you about anything?", but not misread such a blunt statement.
(If you follow the link, Altman's response is… not one I find satisfying).
* despite the people who do take it seriously; such personalities have always been around and seldom cause big issues by themselves. They only become a problem if AI gets competent enough to help them, but by that point it's hopefully also competent enough to help everyone else stop them
Tell me something: have you ever built something you later regret having built? Like you look back at it, accept that you did it, but realize that if you'd just been a bit wiser or more knowledgeable about the world you wouldn't have done it? In the moment you're doing the thing you'll regret, you don't know any better; it's only when the unpleasant consequences manifest that you gain that experience.
If you haven't experienced that yet, fine, but we shouldn't be betting on existential problems with "hopefully" if we can at all avoid it. Especially when that "hopefully" involves something we are choosing to craft, with means and methods we don't fully understand and aren't predictively ahead of, knowing that the way these methods work has a tendency to generate, or provide the basis for, a thoroughly sycophantic construct.
To your point, my P(doom) is 0.1, but the reason it's that low is that I expect a lot of people to use sub-threshold AI to do very dangerous things which render us either (1) unwilling or (2) unable to develop post-threshold AI.
The (1) case includes people actually taking this all seriously enough, which as per your final paragraph, I agree with you that people are currently not.
Things like Roko's basilisk are a strict subset of that 0.1; there's a lot of other dooms besides that one.
- A human uses between 100 W (a naked human eating 2000 kcal/day) and 10 kW (first-world per-capita energy consumption).
- Frontier models need something like 1-10 MW-years to train.
- Inference requires 0.1-1 kW of computer power.
So it takes thousands of human-years to train a single model, but they run at around the same wall-clock power consumption as a human. Depending on your personal opinion, they are also 0.1-1000x as productive as the median human in how much useful work (or slop) they can produce per unit time.
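A quick sketch of how those bullets combine, using only the rough numbers above (assumptions, not measurements):

    # Convert the rough figures above into "human-years of power" per training run.
    HOURS_PER_YEAR = 24 * 365

    human_watts = {"100 W metabolic": 100, "10 kW first-world per capita": 10_000}
    for label, w in human_watts.items():
        for train_mw_years in (1, 10):
            train_wh = train_mw_years * 1_000_000 * HOURS_PER_YEAR
            human_years = train_wh / (w * HOURS_PER_YEAR)
            print(f"{train_mw_years} MW-year run = {human_years:,.0f} human-years at {label}")
    # 100 W basis: 10,000 - 100,000 human-years; 10 kW basis: 100 - 1,000 human-years

On the 100 W basis it's tens of thousands of human-years per run; on the 10 kW basis it's hundreds to a thousand, so "thousands" is a fair middle-of-the-road figure.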
Therefore its value is infinite. Therefore Altman's hypothesis is toilet paper thin.
If you calculate 100 W over 7 million years, you get roughly 6,100,000 MWh (about 6 TWh) of "training" for the whole of human evolution.
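Redoing that multiplication with the hours included (still assuming 100 W, treating 7 million years as a stand-in for the whole of human evolution, and reusing the 10 MW-year training estimate from upthread):

    # Corrected back-of-envelope: 100 W sustained over ~7 million years of evolution.
    HOURS_PER_YEAR = 24 * 365

    evolution_wh = 100 * 7_000_000 * HOURS_PER_YEAR   # ~6.1e12 Wh
    frontier_wh = 10 * 1_000_000 * HOURS_PER_YEAR     # 10 MW-years, the upper training estimate above

    print(f"evolutionary 'training': {evolution_wh / 1e6:,.0f} MWh")             # ~6,132,000 MWh (~6 TWh)
    print(f"about {evolution_wh / frontier_wh:.0f}x a 10 MW-year frontier run")  # ~70x

On those very rough numbers, counting all of evolution makes the human side dozens of times more expensive than a single frontier run, which is presumably the direction Altman was gesturing in.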