Maybe we need to think less about building tests for definitively calling an LLM AGI, and instead decide that AGI is here once we can't tell that humans aren't LLMs.
That's not the definition they have been using. The definition was "$100B in profits". That's less than the net income of Microsoft. It would be an interesting milestone, but certainly not "most of the jobs in an economy".
It ties the definition to economic value, which I think is the best definition that we can conjure given that AGI is otherwise highly subjective. Economically relevant work is dictated by markets, which I think is the best proxy we have for something so ambiguous.
Deep scientific discoveries are also cognitively demanding, but are not really valued (see the precarious work environment in academia).
Another point: a lot of work is valued in the first place precisely because it centers on being submissive/docile in the face of bullshit (see the phenomenon of bullshit jobs). You really know better, but you have to keep your mouth shut.
And then I think coming up with the right metric is just as subjective in this field as in the technological one.
e.g. average cost to complete a set of representative tasks
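A minimal sketch of how such a metric might be computed (the task list, costs, and failure penalty are all made up; this is just to make the proposal concrete):

    # Hypothetical benchmark: average cost (USD) for a system to complete
    # a fixed set of representative tasks; failures fall back to a human
    # at an assumed penalty cost.
    FAILURE_PENALTY = 50.0  # assumed cost of a human doing it instead

    tasks = {
        "summarize_contract": {"cost": 0.40, "completed": True},
        "file_tax_return":    {"cost": 3.10, "completed": True},
        "debug_service":      {"cost": 1.75, "completed": False},
    }

    def average_task_cost(tasks):
        total = sum(t["cost"] if t["completed"] else FAILURE_PENALTY
                    for t in tasks.values())
        return total / len(tasks)

    print(f"${average_task_cost(tasks):.2f} per task")  # $17.83 per task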
Huh. Source? I mean, typical OpenAI bullshit, but would love to know how they defined it.
Wow. Maybe they spelled it out as aggregate gross income :P.
Apple, Alphabet, Amazon, NVIDIA, Samsung, Intel, Cisco, Pfizer, UnitedHealth, Procter & Gamble, Berkshire Hathaway, China Construction Bank, Wells Fargo, ...
A self-running massive corporation with no people that generates billions in profit, no matter what you call it, would completely upend all previous structural assumptions under capitalism.
That's a relevant aspect of the AGI concept.
[0] https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
"OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits."
Given that the definition of AGI is beyond meaningless, it is clear that the "I" in AGI stands for IPO.
[0] https://finance.yahoo.com/news/microsoft-openai-financial-de...
if you think drone targeting in Ukraine is scary now, wait until AGI is on it...
ditto for exploiting vulns via Mythos
I don't think your original comment deserved to be downvoted. (Calling someone illiterate, on the other hand.)
But the "it" I was asking about was "AGI" as "an economical thing." You technically correctly answered how OpenAI defines AGI in public, i.e. with no reference to profits. But it did not address the economic definition OP initially alluded to.
For what it's worth, I could have been clearer in my ask.
But originally I was just trying to be helpful by quoting their charter on what they consider "AGI" now.
I don't get why HN commenters find this so hard to understand. I have a sense they are being deliberately obtuse because they resent OpenAI's success.
From Wikipedia
Eschatology (/ˌɛskəˈtɒlədʒi/; from Ancient Greek ἔσχατος (éskhatos) 'last' and -logy) concerns expectations of the end of the present age, human history, or the world itself.
In case anyone else is vocabulary skill-checked like me.
Russian Invasion - Salami Tactics | Yes Prime Minister
People obviously have really strong opinions on AI and the hype around investments in these companies, but it feels like this is giving people a pass on really low-quality discourse.
This source [1] from this time last year says even the lab leaders' most bullish estimate was 2027.
[1]. https://80000hours.org/2025/03/when-do-experts-expect-agi-to...
They can. If one consolidated the AI industry into a single monopoly, it would probably be profitable. That doesn't mean in its current state it can't succumb to ruinous competition. But the AGI talk seems to be aimed mostly at retail investors and philosopher podcasters rather than institutional capital.
"With viable economics" is the point.
My "ludicrous statement" is a back-of-the-envelope test for whether an industry is nonsense. For comparison, consolidating all of the Pets.com competitors in the late 1990s would not have yielded a profitable company.
Do you argue in good faith?
There’s a difference between being too early vs being nonsense.
Not in the 1990s. The American e-commerce industry was structurally unprofitable prior to the dot-com crash, an event Amazon (and eBay) responded to by fundamentally changing their businesses. Amazon bet on fulfillment. eBay bet on payments. Both represented a vertical integration that illustrates the point–the original model didn't work.
> There’s a difference between being too early vs being nonsense
When answering the question "do the investments make sense," not really. You're losing your money either way.
The American AI industry appears to have "viable economics for profit" without AGI. That doesn't guarantee anyone will earn them. But it's not a meaningless conclusion. (Though I'd personally frame it as a hypothesis I'm leaning towards.)
OP did not include this requirement in their post because doing so would make the claim trivially true.
Your position is a tautology, given there is no (and likely never will be) collectively agreed-upon definition of AGI. If that is true, then nobody will ever achieve anything like AGI, because it's as made-up a concept as unicorns and fairies.
Is your position that AGI is in the same ontological category as unicorns and Thor and Russell’s teapot?
Is there’s any question at this point that humans won’t be able to fully automate any desired action in the future?
Other people just call it "theft".
At the very least, Ilya Sutskever genuinely believed it, even when they were just making a DOTA bot, and not for hype purposes.
I know he's been out of OpenAI for a while, but if his thinking trickled down into the company's culture, which given his role and how long he was there I would say seems likely, I don't think it's all hype.
Grand delusion, perhaps.
Definitely interesting to watch from the perspective of human psychology but there is no real content there and there never was.
The stuff around Mythos is almost identical to O1. Leaks to the media that AGI had probably been achieved. Anonymous sources from inside the company saying this is very important and talking about the LLM as if it was human. This has happened multiple times before.
Seems more like an incredibly embarrassing belief on his part than something I should be crediting.
He doesn't need to be right, but it's not crazy at all to look at superhuman performance in DOTA and think that could lead to superhuman performance at general human tasks in the long run.
We already have several billion useless NGIs walking around just trying to keep themselves alive.
Are we sure adding more GIs is gonna help?
...just please stop burning our warehouses and blocking our datacenters.
Isn't this a tautology? We've de facto defined AGI as a "sufficiently complex LLM."
However, I don't think it is even true. LLMs may not even be on the right track to achieving AGI, and without starting from scratch down an alternate path, it may never happen.
LLMs to me seem like a complicated database lookup. Storage and retrieval of information is just a single piece of intelligence. There must be more to intelligence than a statistical model of the probable next piece of data. Where is the self-learning without intervention by a human? Where is the output that wasn't asked for?
At any rate. No amount of hype is going to get me to believe AGI is going to happen soon. I'll believe it when I see it.
And how will you know AGI when you see it?
If you had presented GPT 5.5 to me two years ago, I would have called it AGI.
Now our idea of what qualifies as AGI has shifted substantially. We keep looking at what we have and deciding that it can't possibly be AGI; our definition of AGI must have been wrong.
In some sense, this isn't really different from where society was headed anyway, is it? The trend was already there: more and more sections of the population were being deemed irrational, and you're just stupid/evil for disagreeing with the state.
But that reality was still probably at least a century out, without AI. With AI, you have people making that narrative right now. It makes me wonder if these people really even respect humanity at all.
Yes, you can prod the slippery slope and go from "superintelligent beings exist" to effective totalitarianism, but you'll find so many bad commitments along the way.
https://www.noemamag.com/artificial-general-intelligence-is-...
Neural networks are solving huge problems left and right. Google's NN-based weather model is so good you can run it on consumer hardware. AlphaFold solved protein folding. LLMs can talk to you in 100 languages and grasp tasks, concepts, and so on.
I mean, let's talk about what this 'hype' was if a clear ceiling appears and progress gets 'stuck', but until then I'll save my judgment for judgment day.
There is a reason so many scams happen with technology. It is too easy to fool people.
If all this progress, focus, and resources doesn't lead to AGI, despite our already seeing systems that were unimaginable six years ago, we will never see AGI.
And if you look at Boston Dynamics, Unitree, and Generalist's progress on robotics, that's also CRAZY.
I don't know, maybe AGI is possible but there's more to intelligence than statistical next word prediction?
'Predicting the next word' is the LLM's learning mechanism, which leads to a latent space that can encode higher-level concepts.
Basically, an LLM 'understands' as much as it needs to in order to respond in a reasonable way.
An LLM doesn't predict German text or the Chinese language. It predicts the concept and then has a language layer outputting tokens.
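A minimal, purely illustrative sketch of that next-token step (toy vocabulary, random weights standing in for a trained model; nothing here resembles a real LLM's scale):

    import numpy as np

    # Toy "language model": a fixed matrix maps a context vector to a
    # score (logit) for every token in a tiny vocabulary.
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, len(vocab)))  # stand-in for learned weights

    def next_token(context_vec):
        logits = context_vec @ W                # score each token
        probs = np.exp(logits - logits.max())   # numerically stable softmax
        probs /= probs.sum()
        return vocab[int(np.argmax(probs))]     # greedy: most probable token

    # "Generation" is just repeating this prediction step; the context
    # vector plays the role of the latent space mentioned above.
    context = rng.normal(size=8)  # stand-in for an encoded prompt
    print(next_token(context))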
And it's not just LLMs that are progressing fast: voice synthesis and voice understanding jumped significantly, motion detection, skeleton movement, virtual world generation (see NVIDIA's way of generating virtual worlds for their car training), protein folding, etc.
Yes, and unless you are prepared to rebut the argument with evidence of the supernatural, that's all there is, period. That's all we are.
So tired of the thought-terminating "stochastic parrot" argument.
It's not supernatural, I believe that an artificial intelligence is possible because I believe human intelligence is just a clever arrangement of matter performing computation, but I would never be presumptuous enough to claim to know exactly how that mechanism works.
My opinion is that human intelligence might be what's essentially a fancy next-token predictor, or it might work in some completely different way; I don't know. Your claim is that human intelligence is a next-token predictor. It seems like the burden of proof is on you.
Crypto was flawed from the beginning, and lots of people didn't understand it properly. Not even the fact that a blockchain can't secure a transaction involving anything outside the blockchain.
Just got an email from GitHub saying they'll be raising prices for Copilot.
"To keep up with the way you use Copilot, we're transitioning to usage-based billing, and we want to give you enough time to prepare."
Man, it was fun having my tokens subsidized by Microsoft. If the prices go up too much, I guess I'll try Deepseek again.
Also, Opus 4.7 seems like a model intended more to save Anthropic money than to raise the bar.
I think the biggest winner of this might be Google. Virtually all the frontier AI labs use TPUs; the only one that doesn't is OpenAI, due to its exclusive deal with Microsoft. Given the newly launched Gen 8 TPU this month, it's likely OpenAI will contemplate using TPUs too.
https://www.reuters.com/business/retail-consumer/openai-taps...
There’s no upper limit to their financial stupidity.
"Valued at" -- which I'd say is a reasonable distinction to make right about now.
How?
I feel this is a nice thing for Microsoft to have, given they remain the primary cloud provider. If Azure improves its overall quality, I don't see why this doesn't end up a money-printing press, as long as OpenAI keeps shipping good models.
[1] https://www.wsj.com/tech/ai/openai-and-microsoft-tensions-ar...
And on top of that, OpenAI still has to pay Microsoft a share of their revenue made on AWS/Google/anywhere until 2030?
And Microsoft owns 27% of OpenAI, period?
That's a damn good deal for Microsoft. Likely the investment that will keep Microsoft's stock relevant for years.
I doubt it
They still run their own platform.
https://thenewstack.io/github-will-prioritize-migrating-to-a...
What was I looking at?
But if I own 49% of a company and that company has more hype than product, hasn't found its market yet but is valued at trillions?
I'm going to sell percentages of that to build my war chest for things that actually hit my bottom line.
The "moonshot" has for all intents and purposes been achieved based on the valuation, and at that valuation: OpenAI has to completely crush all competition... basically just to meet its current valuations.
It would be a really fiscally irresponsible move not to hedge your bets.
Not that it matters, but we did something similar with the donated bitcoin on my project. When bitcoin hit a "new record high" we sold half, then held the remainder until it hit a "new record high" again.
Sure, we could have 'maxxed profit!', but ultimately it did its job: it was an effective donation/investment with reasonable returns.
(That said, I do not believe in crypto as an investment opportunity; it's merely the hand I was dealt by its being donated.)
And Microsoft only paid $10B for that stake in the most recognizable AI brand in the world. They don't need to "hedge their bets"; it's already a humongous win.
Why let Altman continue to call the shots and decrease Microsoft's ownership stake and ability to dictate how OpenAI helps Microsoft and not the other way around?
That's a flawed argument. Why wouldn't you want to hedge a risky bet, and one that's even quite highly correlated to Microsoft's own industry sector?
My impression is that many of these "investments" are structured IOUs for circular deals based on compute resources in exchange for LLM usage.
Genuine question because I feel like I’m maybe missing something!
The longer answer is: you never know what's coming next. Bitcoin could have doubled the day after, and doubled the day after that, and so on, for weeks, and by selling half you'd have effectively sacrificed huge sums of money.
The truth is that by retaining half you have minimised potential losses and sacrificed potential gains; you've chosen a middle position, which is more stable.
So, say 1000 bitcoins were worth $5 one day and $7 the next, but suddenly the price hits $30. Well, we'd sell half.
If the day after it hit $60, then our 500 remaining bitcoins would be worth as much as the whole 1000 were when we sold, so in theory all we lost was potential gains; we didn't lose any actual value.
Of course, we wouldn't sell; we'd hold, and it would probably fall to $15 or something instead... then the cycle begins again.
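The arithmetic of that example, spelled out (numbers from the comment above, purely illustrative):

    # "Sell half at each new record high", using the example above:
    coins = 1000
    sale_price = 30.0
    proceeds = (coins / 2) * sale_price   # sell 500 coins -> $15,000 banked
    coins /= 2                            # 500 coins remain

    next_price = 60.0                     # the hypothetical further doubling
    held_value = coins * next_price       # 500 * $60 = $30,000 still held
    hold_all = 1000 * next_price          # $60,000 if we had never sold

    print(proceeds, held_value, hold_all)  # 15000.0 30000.0 60000.0

Banked $15k plus $30k still held is $45k: less than the $60k of holding everything, but far more than the $30k of selling everything. That's the "middle position" above.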
The Microsoft and OpenAI situation just got messy.
We had to rewrite the contract because the old one wasn't working for anyone. Basically, we’re trying to make it look like we’re still friends while we both start seeing other people. Here is what’s actually happening:
1. Microsoft is still the main guy, but if they can't keep up with the tech, OpenAI is moving out. OpenAI can now sell their stuff on any cloud provider they want.
2. Microsoft keeps the keys to the tech until 2032, but they don't have the exclusive rights anymore.
3. Microsoft is done giving OpenAI a cut of their sales.
4. OpenAI still has to pay Microsoft back until 2030, but we put a ceiling on it so they don't go totally broke.
5. Microsoft is still just a big shareholder hoping the stock goes up.
We’re calling this "simplifying," but really we’re just trying to build massive power plants and chips without killing each other yet. We’re still stuck together for now.
"The Microsoft and OpenAI situation just got messy" is objectively wrong–it has been messy for months [1]. Nos. 1 through 3 are fine, though "if they can't keep up with the tech, OpenAI is moving out" parrots OpenAI's party line. No. 4 doesn't make sense–it starts out with "we" referring to OpenAI in the first person but ends by referring to them in the third person "they." No. 5 is reductive when phrased with "just."
It would seem the translator took corporate PR speak and translated it into something between the LinkedIn and short-form blogger dialects.
[1] https://www.wsj.com/tech/ai/openai-and-microsoft-tensions-ar...
I don't expect the translation to take OpenAI's statements and make them truthful or to investigate their veracity, but I genuinely could not understand OpenAI's press release as they have worded it. The translation at least makes it easier to understand what OpenAI's view of the situation is.
"We" in this sentence refers to both parties; "they" refers to OpenAI. Not a grammatical error.
Fair enough.
> "they" refers to OpenAI. Not a grammatical error
I'd say it is. It's a press release from OpenAI. The rest of the release uses the third-person "they" to refer to Microsoft. The LLM traded accuracy for a bad joke, which is something I associate with LinkedIn speak.
The fundamental problem might be that the OpenAI press release is vague. (And changing. It's changed at least once since I first commented.)
The circular economy section really is shocking: OpenAI committing to buying $250 billion of Azure services, while MSFT's stake is clarified as $132 billion in OpenAI. Same circular nonsense as NVIDIA and OpenAI passing the same hundred billion back and forth.
Mac: You're damn right. Thus creating the self-sustaining economy we've been looking for.
Dennis: That's right.
Mac: How much fresh cash did we make?
Dennis: Fresh cash! Uh, well, zero. Zero if you're talking about U.S. currency. People didn't really seem interested in spending any of that.
Mac: That's okay. So, uh, when they run out of the booze, they'll come back in and they'll have to buy more Paddy's Dollars. Keepin' it moving.
Dennis: Right. That is assuming, of course, that they will come back here and drink.
Mac: They will! They will because we'll re-distribute these to the Shanties. Thus ensuring them coming back in, keeping the money moving.
Dennis: Well, no, but if we just re-distribute these, people will continue to drink for free.
Mac: Okay...
Dennis: How does this work, Mac?
Mac: The money keeps moving in a circle.
Dennis: But we don't have any money. All we have is this. ... How does this work, dude!?
Mac: I don't know. I thought you knew.
Might really increase the utility of those GCP credits.
Microsoft Corp. will no longer pay revenue to OpenAI and said its partnership with the leading artificial intelligence firm will not be exclusive going forward.
What does this mean, that Microsoft will no longer pay revenue to OpenAI? How did the original deal work? That might help fix some of the bugs in Teams... :)
Bear in mind that MSFT have rights to OpenAI IP (as well as owning ~30% of them). The only reason they were giving revenue share was in return for exclusivity.
If they wanted named exclusivity rather than general exclusivity, we would charge a somewhat smaller amount for each competitor they wanted exclusivity from. They could give up exclusivity at any time.
That was precisely how we structured our deal with Azure, back in 2014-2016 or so.
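A toy sketch of that fee structure (all numbers invented; nothing about the actual deal terms is assumed):

    # Hypothetical exclusivity pricing: a flat premium buys exclusivity
    # against everyone; naming specific competitors costs less per name,
    # and the customer can drop exclusivity at any time.
    GENERAL_PREMIUM = 1_000_000   # assumed flat price for full exclusivity
    PER_NAME_FEE = 150_000        # assumed smaller fee per named competitor

    def exclusivity_fee(named_competitors=None):
        if named_competitors is None:         # general exclusivity
            return GENERAL_PREMIUM
        return PER_NAME_FEE * len(named_competitors)

    print(exclusivity_fee())                      # 1000000
    print(exclusivity_fee(["RivalA", "RivalB"]))  # 300000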
Tried to delete this submission in favor of that one, but too late.
I think this is good for OpenAI. They're no longer stuck with just Microsoft. Anthropic had the advantage that they could work with anyone they liked, while OpenAI couldn't.
https://blogs.microsoft.com/blog/2025/11/18/microsoft-nvidia...
https://azure.microsoft.com/en-us/blog/deepseek-r1-is-now-av...
AFAICT they are still just hedging their bets left and right. It also feels like they are winning, in the sense that despite pretty much all those products being roughly equivalent... they are still running on their cloud, Azure. So even though they seem unable to capture IP anymore, they are still managing to get paid for running the infrastructure.
Azure is effectively OpenAI's personal compute cluster at this scale.
That article doesn't give a timeframe, but most of these use 10 years as a placeholder. I would also imagine it's not a requirement for them to spend it evenly over the 10 years, so could be back-loaded.
OpenAI is a large customer, but this is not making Azure their personal cluster.
[1] https://news.microsoft.com/source/2026/04/08/microsoft-annou...
https://blogs.microsoft.com/blog/2026/04/27/the-next-phase-o...
OpenAI has public models that are pretty 'meh': better than Grok and the Chinese models, but worse than Google's and Anthropic's. They still cost a ton to run because OpenAI offers them for free/at a loss.
However, these people are giving away their data, and Microsoft knows that data is going to be worthwhile. They just don't want to pay for the electricity for it.
This seems impossible.
Yes. Microsoft was "considering legal action against its partner OpenAI and Amazon over a $50 billion deal that could violate its exclusive cloud agreement with the ChatGPT maker" [1].
[1] https://www.reuters.com/technology/microsoft-weighs-legal-ac...
They did not need to go so hard on the hype - Anthropic hasn’t in relative terms and is generating pretty comparable revenues at present.
OpenAI bet on consumers; Anthropic on enterprise. That will necessitate a louder marketing strategy for the former.
Why is it Altman is facing kill shots and Dario isn’t?
Altman peaked in the zeitgeist in 2023; Dario, much less prominently, in 2024 and now '26 [1]. I'd guess around this time next year, Dario will be as hated as Altman is today.
[1] https://trends.google.com/explore?q=altman%2C%20Dario&date=t...
I fear that, for the end user, we'll still see more open-microslop spam. I see it daily on YouTube: tons of AI-generated fakes, in particular with that addictive swipe-down design (OK, OK, YouTube is Google, but Google is also big on the AI slop train).
I imagine the thinking was that it’s better to just post it clearly than to have rumors and leaks and speculations that could hurt both companies (“should I risk using GCP for OpenAI models when it’s obviously against the MS / OpenAI agreement?”).