AI data centers should be operated according to two limiting factors.
1) No energy from grid. Can't use coal or fossil fuel energy sources. Must have plan to provide excess TO grid.
2) No use of fresh water from municipal or fresh groundwater for cooling. Can use waste water. Must transition to providing excess fresh water to common supply.
No loopholes. Massive penalties for use of loopholes or breaking rules, not limited to but including complete shutdown of data center.
Those two limits will spur innovation AND prevent AI being criticized for energy use. These rules would force rapid improvements in energy storage and renewables, as well as other methods of energy production.
Give them five years to reach some useful compliance percentage. That's plenty of time to come up with a transition plan and show sufficient progress to justify further extensions. Realistically it will take 20 years at least to fully realize this plan.
Don't bring up cost. If you do, let me remind everyone that the climate change issue is real enough to hurt now. There's the very real cost of not pursuing these rules. AI has had plenty of time to bootstrap off grid. Now it can begin to migrate to something else instead.
Those with experience with energy generation will realize this plan has ridiculously high reward for those who follow it. Have your cake and eat it too definitely applies.
The point is, I don't see the logic in singling out data centers over anything else.
They are just firing up natural gas portable generators.
https://www.reuters.com/sustainability/boards-policy-regulat...
This is easy: the companies will simply build some nuclear power plants near to their data centers. Perhaps even nuclear power plants that are vibe-designed by their AIs. :-)
The hard part has never been the “what”, it’s always been the “how”.
For one thing, you're essentially mandating data centers to be colocated with power plants and waste water treatment plants, instead of these things each being located independently according to the requirements of their different functions. If that really leads to "ridiculously high reward", why isn't it being done already?
Altman is a man who is quickly running out of lies, so now he starts slinging random arguments that can't stand up to even the briefest of scrutiny.
OpenAI is burning cash and fuel. There are results, and they are, to some extent, impressive, but not impressive enough to justify the cost, and Altman is no longer able to cover that up.
I am not so sure:
This decision rather tells something important about the priorities of the string-pullers behind the curtain:
They clearly want(ed) to monetize what is there, with the risk that only smaller improvements for the AI models will happen from OpenAI, and thus OpenAI might get outcompeted by competitors who are capable of building and running a much better model.
If this is the priority (no matter whether you like or despise Sam Altman), you will likely prefer Sam Altman over Ilya Sutskever.
If, on the other hand, a fast monetization is less important than making further huge leaps towards much better AI models, you will, of course, strongly prefer Ilya Sutskever over Sam Altman.
Thus, I wouldn't say that choosing Sam Altman over Ilya Sutskever is a sign that OpenAI is in trouble, but a very strong sign where the string-pullers behind the curtain want OpenAI to be. Both Sam Altman and Ilya Sutskever are just marionettes for these string pullers. When they have served their role, they get put back into the box.
You want to ride that out before capitalising on the eventual cheaper training costs once the rug has been pulled.
Altman has already succeeded here: it seems inference for API and chat is profitable, but offset by massive R&D costs.
Their spending today has secured their compute for the near future.
If every GPU, stick of RAM, and SSD is already paid for, who can afford to sell cheap inference?
Z.ai is trying to deal with this by using domestic silicon (basically Huawei, not Nvidia). And with their state subsidy they will do well.
Anthropic has a 50bn USD plan to build data centres for 2026.
OpenAI similarly has secured extraordinary amounts of other people's money for data centres.
All these will be sunk costs and "other people's money" while money is easy to get hold of. But they will be a moat when R&D ends.
Once all the models become basically the same who you go with will be who you're already with (mostly OpenAI), and who you end up with (say people who use Gemini because they have a Google 2TB account).
Some upstart can put themselves into the ground borrowing compute and selling at a loss but the moment they catch up and need to raise prices everyone will simply leave.
ChatGPT is what is most likely to remain a sustained frontier model. Maybe Claude jumps ahead further a few times, Gemini will have its moment. But it'll all be a wash, with ChatGPT chugging along as rarely the best, but never the worst.
We are already at that point where we just don't fully know what to do with what we already have and simply haven't fully internalised it. But all it will take is one economic shakeup to redistribute human intelligence from what we are familiar with.
For those people, an AI better (much better?) than a coin toss is the goal, if it means not relying on people.
Personally, I already deal weekly with people who vehemently antagonize every line of thinking if it isn't what ChatGPT told them before a meeting.
Subscriptions are highly profitable for the typical chat user.
And API is overall net profitable.
What is extremely taxing to their finances is R&D, training and in particular development of frontier models.
My assessment is that when the music stops those who have the most subs will win.
Companies like Apple, who have sat out the battle and built niche moats (privacy), and companies like OpenAI and Anthropic, who have the market share, will be fine.
In 6-12 months, nearly any lead they have will be eaten by distillation.
What will then happen is they will lose subscriptions to services that offer AI as a tack-on, like Gemini bundled with Google's regular cloud subscriptions.
This will continue. Companies like Apple will have deep pockets to move on the businesses that go underwater and then can restart training in a much less congested market.
All this is assuming a relatively graceful collapse but that is what's likely given how aware everyone is that the bubble must pop.
Training costs will fall. Companies like Nvidia and other shovel businesses (i.e. selling GPUs and not using them) mostly have their revenue secured with funding from the present.
What would confirm this pattern is if we stop getting groundbreaking frontier models and then coast for 3-5 years as competition becomes more incremental.
This is an unpopular opinion: will OpenAI go bust? No chance. Nor will Anthropic.
I'm not really sure what OpenAI's moat is. Anthropic has a chance being so widely accepted by developers, and being a bit better at developing models when it comes to code.
I sort of agree with you, not that it's the most subscriptions necessarily that will be the deciding factor, but there are going to be some companies better positioned to survive when the free money stops. OpenAI has the brand, so that might help, but mostly I think they'll get absorbed into Microsoft. I don't think they can stand on their own. It doesn't seem like a particularly well-managed company, so to me it makes more sense that they are simply acquired for pennies on the dollar by someone with better leadership.
It's certainly almost everyone today, but that's because enshittification has yet to start properly.
The risk to OpenAI is that their free tier is captured by the tack-on markets (i.e. Gemini with 2TB of cloud storage).
But otherwise they will make free more annoying until people just buy the cheap tier and then move up from there, like ChatGPT Go.
I seriously doubt, at this moment, that OpenAI can come up with an offering good enough to entice people to pay when there are other free-to-use services around. Google seems well positioned to eat their lunch.
For example, the UK's NHS, the world's sixth-largest employer, is now fully committed to Microsoft 365. That's a lot of Copilot money if Microsoft sees it that way.
And OpenAI is funded via Microsoft. I also have a Microsoft 2TB subscription, and many people have both work-based and personal home subscriptions.
It's a complete mess of a situation. If Microsoft moves away from GPT (it can since it's advertised under the copilot brand) OpenAI is dead in the water of course.
It's wrong, because it assumes that everything is about control.
For example, if I told you that a certain rich and powerful person was spending resources on sending vaccines to poor countries, you might think that was because they wanted to control things. If I said that someone sent books and teachers to a poor country, you might say they were trying to control people.
There's no way to have the conversation, in a conspiratorial mindset, about whether it's better or worse for humans or AI to do this stuff - because no matter what, the conspiratorial mindset will conclude that it's only about power for the humans involved, and always assume the worst. AND YET - there are things people can do which might be for their own self-gratification, but are definitely NOT as bad as some other things they could do. They hold back from doing the worst things.
That's why, I know this lens of looking at the world seems like it's the only smart way to understand things, but looking at the whole world through that lens prevents you from making the important distinction between OK, BAD and REALLY FUCKING BAD.
Never mind hoarding 40% of the world's silicone 'just because'.
It's like Musk's lofty and outright BS claims. "Well I'm rich so I can do and say what I like, give me your money".
When did bald-faced lies become the norm in business?
Edit: I'm keeping silicone in there.
The banking sector committed fraud along the way, and early after Lehman collapsed, observers wondered aloud about the moral hazard of bailing out everyone without making an example of someone.
Worldcom had a different ending. Enron had a different ending. But Wells Fargo left 2008 with attitudes that tolerated widespread fraud.
When capital has nowhere to go, and the middle class become gamblers and speculators, and people who work become 'superfluous', lots of bad things start to happen. At least, that was my takeaway from "Origins of Totalitarianism".
[edit] To be more specific: Lying (and foreign wars, too) become viable business models when the moneyed class has so much to invest that they no longer know what to do with it, and lack any new markets to pry open, or the education or creativity to produce anything new of value that isn't extractive, and even the extractive methods of generating wealth have begun to dry up locally. Call it a bubble, call it fascism; fascism is basically just a way of keeping a bubble from collapsing indefinitely by pirating neighboring peoples' wealth and cannibalizing one's own society. So there's not a great difference between that and the stated vision of the major AI companies ATM.
In this context, "train" a human makes perfect sense.
>>It also takes a lot of energy to train a human
as a caveat to the energy it takes to train GPTs. The question I believe the writer is trying to ask is: Why is it better to train a GPT than a human?
Sorry, I'm not trying to be an asshole. I mean, I am one. But I'm not trying to be.
Yes, those people did get some control. Yes - they are like the bad versions of us.
We can be smarter. We'll win. Their dumb little utopias will collapse. Watch.
Tense means past/present/future. Number, not case, is what distinguishes singular from plural pronouns.
I dropped out of school but I had a very mean grammar teacher ;)
It's the only way I'll learn :)
https://archive.org/details/dli.ernet.469826
"The Authoritarian Personality"
tl;dr - the roots of the authoritarian personality grow fertile in the desire to be free of 'the filth of others'. Altman seems like he'd go crazy if he didn't keep his machine spectacularly clean ..
https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai...
and other things that are filtering to the mainstream non-technical intellectual readership that are flashing red lights about the personal nature of the people blowing air into this bubble, and that itself is significant.
Taking the distillation or "reader's digest" version of everything makes you more and more reliant on someone else's interpretation, and less capable of parsing the meaning of it yourself. And in the case of AI-generated work, there is no meaning in it to parse. It's just words and pictures. I love going to the movies and eating popcorn and watching dumb words and pictures. But being able to distinguish between enchanting words and pictures (Marvel movies) versus words and pictures that have meaning which you need to deduce or interpret for yourself is the beginning of being a fully realized conscious being.
For those that do believe there's a chance models are/will be conscious this precedent of "oh yeah they're conscious but we can just not put in the prompts that make it suffer lol" is pretty freaky.
edit: in case you hadn't seen it, this kind of stuff https://www.anthropic.com/research/end-subset-conversations
What shall we decide on the important matters of eating or drinking tonight fellow humans???
At this point I'm rooting for the AI models.