What am I missing? I'm genuinely curious.
Also, the largest theft in human history surely has to be the East India Company extracting something like 50 trillion from India over 200 years, right?
I never understood these sorts of statements. I feel only historical events after, maybe, the Victorian age can claim to be theft; otherwise it's just empires and conquest.
Adjusted for inflation, wouldn't Alexander the Great's plundering of Persia, which at the time comprised 40% of the world's population, be the greatest theft in human history, using your logic?
"empires and conquest" is literally armed robbery.
One criterion that might work is whether there's some greater power around that says it's theft, and is able/willing to enforce that in some manner.
So for example a successful conquest isn't theft, but a failed conquest is probably attempted theft (and vandalism of course).
Sure, it's divided up amongst all the descendants now, but it was quite a heist.
Divide total GDP by the population and turn it into one unit.
Ug's best smashing rock would be 1.
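The normalization suggested above could be sketched like this (a toy illustration; the function name and all figures are made-up placeholders, not real historical estimates):

```python
# Toy sketch: express a historical "theft" in multiples of that era's
# GDP per capita, so amounts from different eras land on one comparable unit.
# 1 unit == one average person's annual output in that era (Ug's rock == 1).

def theft_in_era_units(amount: float, era_gdp: float, era_population: float) -> float:
    """Normalize an amount by the GDP per capita of its era."""
    gdp_per_capita = era_gdp / era_population
    return amount / gdp_per_capita

# Illustrative placeholder numbers only:
units = theft_in_era_units(amount=1_000_000, era_gdp=50_000_000, era_population=100_000)
print(units)  # 2000.0 -- i.e., 2000 average-person-years of that era's output
```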
It was always theft. Having happened in the past does not make it any less theft. The reason the East India Company is cited as the example for such things is that it was the first human organization to do them on an industrial, genocidal scale.
https://yourstory.com/2014/08/bengal-famine-genocide
By the late 18th century it was already starving Indians, forcing them to plant opium instead of food crops so it could sell the opium to the Chinese, killing them for money (an estimated 20 million dead per year from opium). And when the Chinese finally tried to stop it, the Opium Wars happened. The justification given for those wars was 'free trade'. The justifications still haven't changed, and neither have the practices. This should tell you why the East India Company is specifically evil: it was the first large-scale application of the evil you see today, and it invented many of its methods.
To the extent the answer is ‘much lower’, he could have spent a whole blog post congratulating the California AG and Sam for landing the single largest new public charity, in real-dollar terms, maybe ever.
If the point is “it sticks in my craw that the team won’t keep working how they used to plan on working even when the team has already left” then, fair enough. But I disagree with theft as an angle; there are too many counterfactuals to think through before you can make a strong case that it’s theft.
Put another way - I think the writer hates Sam and so we get this. I’m guessing we will not be reading an article where Ilya leaving and starting a C corp with no charitable component is called theft.
> It seems that Altman has a lot of detractors here, and I'm not sure why
Why are you confused/surprised that Altman has detractors?
But, you're right, that's no reason to refrain from criticizing them for it.
Plus, why do people think OAI is still special? Facebook, Google, and many smaller companies are doing the exact same work developing models.
Imagine if an executive was running the world’s largest charity for cancer research, which was chartered to make sure a cure remained in the public trust and raised millions with that promise.
But then, once they discovered a cure for cancer, the executive instead decided to transfer that cure to a ruthlessly competitive company they personally owned a large percentage of, becoming a billionaire many times over.
Sam doesn’t do anything for free, even though he is already a billionaire 2-3 times over.
The property of a charity is being pillaged for the benefit of private parties, like Microsoft, existing employees, and yes of course Altman himself via various means.
You can “well actually” this all day, but at the beginning of the story there’s a charity with millions of dollars to do research and the promise to keep the resulting advancements in the public trust.
At the end of the story there will be billions of dollars in the hands of private individuals and the IP the charity created in the hands of a ruthless for profit company.
I am unable to find any concrete claim of specific tax avoidance. Only these exasperated “but taxes” comments.
All the sources I can find say that the revenue of ChatGPT was through the for-profit division, and that they’ve been paying taxes on all their revenue.
Is there some other tax that they’ve avoided paying?
Every part of their restructuring was signed off on by multiple states’ attorneys general. And their for-profit entity pays taxes like any other company.
Making them pay tax on stuff they did while a non-profit is making up laws on the fly - a strong, rule-of-law-based system is critical for the US to function properly.
You can’t just arbitrarily make decisions based on what you think should happen because it’s fair or unfair.
If you want OpenAI to pay back taxes, you need to change the laws first.
It's not about changing the laws, it's about enforcing the ones we have fairly. Too many orgs and companies buy politicians, and now ballrooms for them.
https://www.fplglaw.com/insights/california-nonprofit-law-es...
- AGI being cheap to develop, or
- finding funders willing to risk billions for capped returns.
Neither happened. And I'm not sure the public would invest hundreds of billions on the promise of AGI. I'm glad there are investors willing to take that chance. We all benefit either way if it is achieved.
I am not sure that making labour obsolete, and putting the replacement in the hands of a handful of investors, will result in everybody benefiting.
I believe that AGI will be a net benefit to whomever controls it.
I would argue that if a profit-driven company rents something valuable out to others, you should expect it to benefit them just as much, if not more, than those paying for the privilege. Rented things may be useful, but they certainly are not a net benefit to the system as a whole.
Information interconnection is meaningfully different from AGI, and the environment AT&T and Bell existed within no longer exists.
Drop those assumptions and my point stands that throughout history, monopolistically-controlled transformative technologies (telephones, electricity, vaccines, railroads) have still delivered net benefits to society, even if imperfectly distributed. This is just historical fact.
Yeah, like I said, room for improvement. I find the argument that AGI or sAGI should be feared, or is likely to turn "evil", absurd in the best case. So you're arguing against a strawman I already find stupid.
Telephones increased the speed of information transfer; they couldn't produce value on their own. Electricity allowed transmission of energy from one place to another and doesn't produce inherent value in isolation. Vaccines are in an entirely different class of advancement (so I have no idea how you mean to apply them to the expected benefits of AGI; I assume you believe AGI will have something to do with reducing disability). Railroads, again, like energy or telephones, involved moving something of value from one place to another.
AGI is supposed to produce a potentially limitless amount of inherent value on its own, right? It will do more than just move around components of value, but more like a diamond mine, it will output something valuable as a commodity. Something that can easily be controlled... oh but it's also not concrete, you can never have your own, it's only available for rental, and you have to agree to the ToS. That sounds just like all previous inventions, right?
You're welcome to cite any historical facts you like, but you're unwilling or unable to draw concrete parallels or form convincing conclusions yourself, and instead hand-wave: "well, most impressive inventions in the past were good, so I feel AGI will be cool too!"
Also, the critical difference (ignoring the environmental differences between then and now) between the inventions you cited and AGI is the difficulty of replicating the technology. Other than "it happened before with most technologies", is there any reason I should believe AGI will be easy to replicate for any company that wants to compete against the people actively working to increase the size of their moat? Copper wire and train tracks are easy to install. Do you expect AGI will be easy for everyone to train?
EDIT: I'm not sure why I'm being downvoted. I read the article and it's not clear to me. The entire article is written with the assumption that the reader knows what the author is thinking.
Also, the article is very clear: the wealth transfer is moving the money/capital controlled by a non-profit to the stockholders of a for-profit company. The non-profit lost that property; the shareholders gained it. You seem to be making an implicit assumption, something like "the same people are running the for-profit on the same basis they ran the non-profit, so where's the theft?" Feel free to make that argument, but mixing that claim with "I don't understand" doesn't seem like a fair approach.
I am also a somewhat harsh critic of Sam Altman (mostly around theft of IP used to train models, and around his odd obsession with gathering biometrics of people). So I'm honestly looking for answers here to understand, again, what wrongdoing is being done?
So the "theft" is the wealthy stealing the benefits of AGI from the people. I think.
Edit: downvoting why? Sama fanboys? Tell me your book rec then.
This situation is arguably better than an alternative where Google or another big tech monopoly had also monopolized LLMs (which seems like the most likely winner otherwise, however they may have also never voluntarily ventured in to publicly releasing LLM tools because of the copyright issues and risk of cannibalizing their existing ad business.) Feels like this story isn't finished and writing a book is premature.
> or when Altman said that if OpenAI succeeded at building AGI, it might “capture the light cone of all future value in the universe.” That, he said, “is for sure not okay for one group of investors to have.”
He really is the king of exaggeration.
If I understood correctly, the author does admit that continuing OpenAI as a nonprofit is unrealistic, and that the current balance of power could be much worse, but what disgusts me is the dishonest messaging they started off with.
Look up Worldcoin, for instance.
- Multimodality (browser use, video): To compete here, they need to take on Google, which owns the two biggest platforms and can easily integrate AI into them (Chrome and YouTube).
- Pricing: Chinese companies are catching up fast. It feels like a new Chinese AI company appears every day, slowly creeping up the SOTA benchmarks (and now they have multimodality, too).
- Coding and productivity tools: Anthropic is now king, with both the most popular coding tool and model for coding.
- Social: Meta is a behemoth here, but it's surprising how far they've fallen (where is Llama at?). This is OpenAI's most likely path to success with Sora, but history tells us AI content trends tend to fade quickly (remember the "AI Presidents" wave?).
OpenAI knows that if AGI arrives, it won't be through them. Otherwise, why would they be pushing for an IPO so soon?
It makes sense to cash out while we're still in "the bubble." Big Tech profits are at an all-time high, and there's speculation about a crash late next year.
If they want to cash out, now is the time.
Google on multimodality: has been truly impressive over the last six months and has the deep advantages of Chrome, YouTube, and being the default web indexer, but it's entirely plausible they flub the landing on deep product integration.
Chinese companies and pricing: facts, and it's telling to me that OpenAI seems to have abandoned their rhetorical campaign from earlier this year teasing that "maybe we could charge $20000 a month" https://techcrunch.com/2025/03/05/openai-reportedly-plans-to....
Coding: Anthropic has been impressive but reliability and possible throttling of Claude has users (myself included) looking for alternatives.
Social: I think OpenAI has the biggest opportunity here, as OpenAI is the closest of the model hyperscalers to being a consumer-oriented company, and they have a gigantic user base they can take to whatever AI-based platform category replaces social. I'm somewhat skeptical that Meta at this point has its finger on the pulse of social users, and I think Superintelligence Labs isn't well designed to capitalize on Meta's advantages in segueing from social to whatever replaces social.
An IPO is a way to seek more capital. They don't think they can achieve AGI solely through private investment.
Private deals are becoming bigger than public deals recently, so perhaps the IPO market is not a larger source of capital. Different untapped capital, maybe, but probably not larger.
The average joe is not using them, though; for the general public, AI is ChatGPT.
Is there like a public list of all employees who have transitioned or something? As far as I know there have been some high profile departures.
Take image diffusion models. They’re trained on the creative works of thousands and completely eliminate the economic niche for them.