https://www.wheresyoured.at/the-case-against-generative-ai/
OpenAI needs $1 trillion over the next four years just to keep existing. That's more than all the capital currently available from private equity; even if you pooled all of it together, it still wouldn't be enough for OpenAI's next four years.
It's just a staggering amount of money for the world to bet on OpenAI.
Spread across 4 years, there’s way more than $1tn in available private capital. Berkshire Hathaway alone has 1/3 of that sitting in a cash pile right now. You don’t have to look at many balance sheets to see that the big tech companies are all also sitting on huge piles of cash.
I don’t personally believe OAI can raise that money. But the money exists.
The dude is a living embodiment of "overconfident and wrong". He picks a side, then cherry picks until he has it all lined up so that his side is Obviously Right and anything else is Obviously Wrong.
Then reality happens and he's proven wrong, but he never fucking learns.
If some exist I'd like to go see their output. If few/none exist... are there any conclusions we can draw from the rarity?
More like $115 billion based on your own link. The $1T is a guess based on promises made, far from "needs to spend that much just to exist".
Spending $30 billion a year doesn't sound that apocalyptic.
They have $1,000 billion ($1tn) in future liabilities. https://www.ft.com/content/5f6f78af-aed9-43a5-8e31-2df7851ce...
They have revenue of about $4 billion per half-year, with a loss of $13bn.
I have my own reservations about the company, but there's a pretty real path toward huge revenues and profitability for them that seems pretty obvious to me.
They do not need it. Arguably no one needs it. I am at best luke-warm on LLMs, but how are any sane people rationalizing these requests? What is the opportunity cost of spending a billion or even a hundred billion dollars on compute instead of on climate tech or food security or healthcare or literally anything else?!
I easily see the rich betting a trillion dollars especially if it's not their money and they start employing government funds in the name of a fictitious AI arms race.
They smell blood in the water. Reducing everyone's wages to as close to minimum wage as possible.
Capital is already concentrated, aligned along monopolies and cartels, oligarchical control, and AI is the final key to total control to whatever degree they desire.
A lot of "the rich's" money is actually backed by the pension funds, 401k and similar investment vehicles.
That's the dirty secret in most of today's world. A lot of ultra-large companies would absolutely deserve getting broken up just for being way too powerful, the problem is any such attempts would send the stonk markets and, with them, people's pensions tanking.
But that is a short-term perspective. Long term, if we do nothing, we remain galley slaves.
Not really clear what you mean by this. But arguably all of the rich's money (and all money in general) is backed by labour/the ability to exchange that money for someone else's labour.
The amount of shares in many a large company that are held by passive or semi-passive investment vehicles.
It's not just high-net-worth individuals and nation-state entities (wealth funds) that pump money into YC, Meta, Apple, NVidia, Microsoft and god knows what else; the bulk of the ownership is held indirectly by the wide masses via one sort or another of pension scheme.
Elon Musk doesn't play with his own money on xAI, he plays with the money of his investors, and so do all the other actors in the AI bubble.
[0] https://finance.yahoo.com/news/wealthiest-10-americans-own-9...
My knowledge of the insides of Tesla's governance highlights this. The requirement for index funds to invest fixed amounts in companies allows CEOs to exert more board control and avoid "investor activism" over things like "roman salutes".
The uber rich elites, either directly or through "friendships". And that's my point.
Altman has sort of linked the fate of OpenAI to that of Nvidia, AMD, Oracle, Microsoft, etc. with these huge deals / partnerships. We've seen the impact of these deals on stock prices before even a penny has changed hands.
Tracks with his reputation for power play and politics.
I want Windows to play games; for computing I use Linux, but they keep foisting shite I don't want on me just to play games. AI and sodding OneDrive can piss off.
I've kept Windows around because it was less painful to game on than Linux, but Linux is better than ever and Windows is getting worse. At some point those lines are gonna cross, and for the first time in 30-odd years I'll not be running a single device with a Microsoft operating system on it.
And they deserve it.
Regarding Linux gaming, the biggest problem there right now is all the multiplayer games with kernel anti-cheat. But I suspect that it'll be resolved eventually by Valve pushing for SteamOS support (although I doubt it'll ever work on any random Linux distro).
It depends on the extent to which the promise was peddled and whether MSFT can be trusted with the cash balance - investors will reflect that in the stock price in future if there is a bubble bursting event. If that scenario pans out, Apple will be sitting there very pretty given it has not spent any real money on pursuing LLMs.
Recent estimates I've seen of uninvested private equity capital are ~$1.5 trillion and total private equity $5+ trillion, with several hundred billion in new private equity funds raised annually, so this simply seems incorrect even assuming either only current "dry powder" is considered, or only new funds raised over the next four years, much less both and/or the rest of private equity.
That's a mischaracterization: They need that order of investment to meet the demand they forecast.
It's unclear what other trajectories look like.
Additionally, I don't know who Ed Zitron is but he clearly doesn't follow how infrastructure projects are funded and how OpenAI is doing deals.
See for example the AMD deal last week where they seem to have at least partially used their ability to increase AMD's stock price to pay for future GPUs.
Mining companies do the kinds of "circular deals" that OpenAI is criticized for all the time - they will take equity in their supplier companies. It's easy to see similar arrangements for this $1T investment in the future.
"In the past, most companies have had processes geared towards office work. Covid-19 has forced these companies to re-gear their processes to handle external workers. Now that the companies have invested in these changed processes, they are finding it easier to outsource work to Brazil or India. Here in New York City, I am seeing an uptick in outsourcing. The work that remains in the USA will likely continue to be office-based because the work that can be done 100% remotely will likely go over seas."
He responded:
"Pee pee poo poo aaaaaaaaaaa peeeeee peeeeee poop poop poop."
I don't know if he was taking drugs or what. I find his persona on Twitter to be baffling.
In my experience he has a horrible response to criticism. He's right on the AI stuff, but he responds to both legitimate and illegitimate feedback without much thoughtfulness, usually non-sequitur redirect or ad hominem.
In his defense though, I expect 97% of feedback he gets is Sam Altman glazers, and he must be tired.
I'm actually more inclined to believe he's wrong if he gets so defensive about criticism. That tells me he's more focused on protecting his ego than actually uncovering the truth.
My point is, you can agree that OpenAI is unsustainable, but it's not clear to me that is a decided fact, rather than an open conjecture. And if someone is making that decision from a place of ego, I have greater reason to believe that they didn't reason themselves into that position.
Seems a little unreasonable to point to "they are still around" as a refutation of the claim that they aren't sustainable when, in fact, the moment the investment money faucet keeping them alive is turned off, they collapse, and very quickly.
What Zitron points out, correctly, is that there currently exists no narrative beyond wishful thinking which explains how that reversal will manifest.
For example, he was the lone voice saying that despite all the posturing and media manipulation by Altman, that OpenAI's for-profit transformation would not work out, and certainly not by EOY2025. He was also the lone voice saying that "productivity gains from AI" were not clearly attributable to such, and are likely make-believe. He was right on both.
Perhaps you have forgotten these claims, or the claims about OpenAI's revenue from "agents" this year, or that they were going to raise ChatGPT's price to $44 per month. Altman and the world have seemingly memory-holed these claims and moved on to even more fantastical ones.
He has never said that OpenAI would be bankrupt, his position (https://www.wheresyoured.at/to-serve-altman/, Jul 2024) is:
I am hypothesizing that for OpenAI to survive for longer than two years, it will have to (in no particular order):
- Successfully navigate a convoluted and onerous relationship with Microsoft, one that exists both as a lifeline and a direct source of competition.
- Raise more money than any startup has ever raised in history, and continue to do so at a pace totally unseen in the history of financing.
- Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.
- Have such a significant technological breakthrough that GPT is able to take on entirely unseen new use cases, ones that are not currently possible or hypothesized as possible by any artificial intelligence researchers.
- Have these use cases be ones that are capable of both creating new jobs and entirely automating existing ones in such a way that it will validate the massive capital expenditures and infrastructural investment necessary to continue.
I ultimately believe that OpenAI in its current form is untenable. There is no path to profitability, the burn rate is too high, and generative AI as a technology requires too much energy for the power grid to sustain it, and training these models is equally untenable, both as a result of ongoing legal issues (as a result of theft) and the amount of training data necessary to develop them.
He is right about this too. They are doing #2 on this list.
That puts him roughly on-par with everyone who isn't Gerganov or Karpathy.
For me 10 billion, 100 billion and 1 trillion are all very abstract numbers - until you show how unreal 1 trillion is.
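To make it concrete, a back-of-envelope sketch (only the $1tn / four-year figures come from the thread above; the rest is plain division):

```python
# Back-of-envelope: what "$1 trillion over four years" means per year and per day.
total_usd = 1_000_000_000_000  # the $1tn figure cited above
years = 4

per_year = total_usd / years   # $250 billion per year
per_day = per_year / 365       # ~$685 million per day
print(f"${per_year / 1e9:.0f}B per year, ~${per_day / 1e6:.0f}M per day")
```

That works out to roughly $685 million of required capital every single day, for four years straight.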
Attach your name to this publicly, and you're a clown. I don't know why the world started listening to clowns and taking them seriously, when their personas are crafted to be non-serious on purpose.
Like I said, clowns.
What’s the joke? “What do you call the person who graduated last in their class from med school? A doctor.”
Certainly the most skilled and advanced in the medical field will need significant schooling but there needs to be a major reform in healthcare training. One that produces more knowledgeable and skilled professionals and not a glut of questionably competent nurse practitioners.
Doesn't America alone already spend 2 or 3 trillion a year on healthcare?
There's a huge difference between "paying for healthcare" and "paying a healthcare provider" here in the United States. Oftentimes the latter has 2 or 3 additional zeroes attached.
In 2023-4, Health came #7 in total political donations, #8 is Lawyers & Lobbyists; the combined "Finance/Insur/RealEst" is #1; would be useful to see "Insurance" broken out by health insurers vs non-health (can anyone cite a more granular breakdown?). [https://www.opensecrets.org/elections-overview/sectors]
For whatever reasons, the consensus in the US after decades of talking comes down to single payer vs privatized insurance. Congress isn't going to implement single-payer, so the menu reduces to either we choose good or bad regulation of privatized insurance.
We don't have time for yet another decade of debate, since health insurance premiums (net, post-tax-credit) in the US are about to jump this November open enrollment by median 18% overall, or 114% for people on ACA due to the expiration of enhanced premium tax credits [0]. Expect that to feature prominently in the news cycle by Thanksgiving.
(Germany's multi-payer system (government + mandatory statutory contributory insurance + optional private insurance) would in theory be fine if the US Congress were ever incentivized to implement such a thing. But it very clearly isn't, and hasn't been since the 1950s - look at the lobbying money trails. Don't let the perfect be the enemy of the good. The ACA was the closest the US (briefly) came to mandatory statutory contributory insurance, but the federal mandate was abolished back in calendar 2019 by the "Tax Cuts and Jobs Act of 2017".)
[0]: https://www.kff.org/affordable-care-act/aca-marketplace-prem...
That is twice as much per capita as our "peer" nations (UK, France, Canada, Germany, etc) and we have poorer outcomes.
Our cost per service is 2-4x or more, and the larger reliance on specialists creates significant complexity and even more costs. So, we do spend 2x, but we get 1/3 to 1/4 of "care" per dollar. In other words, we get less actual care. And the care is biased to fixing things as opposed to preventing things. And it is also biased to those who are wealthier.
Some of the cost drivers:
- Administration is 25% of costs, far less in other countries: insurance company profits and complex administration, with confusing and overlapping methodologies that obfuscate costs and comparisons.
- Capital costs are 25% of costs, far less in other countries: multiple, private, and overlapping hospitals demand more capital, and private capital comes with expected returns.
- Doctor compensation is 2x to 4x more, nurses 2x. Specialists here get truly rich, which is not true in other countries.
So, quite a lot of the extra spend is not efficient, and goes to insurers, owners of hospitals, and doctors.
I also have personal experience. To get a simple ultrasound, you are talking about $450 for a primary care visit to get a referral for a $650 specialist to get a $1000 ultrasound ($800 scan plus $200 reading), to get a $650 follow-up visit with the specialist to discuss the results. That is almost $3,000 of actual out of pocket costs to me, with a good insurance plan ($2K per month for a couple), the "claimed" costs were significantly higher. MRI and CT are even higher. Similar for a broken ankle, which cost me over $4000 out of pocket.
I am, relatively speaking, well off compared to average, and was able to do this, but that hurt, and significantly disincentivizes me in the future.
Our health system is broken, and pumping more money into it only makes it worse.
Not that I think we should, but the "Cash is just paper" attitude makes no sense. If it's just paper, how is OpenAI training AI using just paper?
Yet another hidden benefit of a human workforce, that AI can't match.
This is obviously not AGI, and we're very far from AGI (as we can see by trying these LLMs on things like stories, analyzing text, or dealing with opposing arguments), but for programming and maths, performance is at a level where it's actually useful.
There's probably a good 2-3 years of runway left for substantial progress before things really fall off a cliff. There has to be some semblance of real GDP growth.
But people probably expect to get the next version for what they pay in subscriptions now, so I can't imagine much more revenue growth for the model companies.
At an enterprise level however, in the current workload I am dealing with, I can't get GPT-5 with high thinking to yield acceptable results; Gemini 2.5 Pro is crushing it in my tests.
Things are changing fast, and OpenAI seems to be the most dynamic player in terms of productization, but I'm still failing to see the moat.
Like Apple vs every other computer maker.
Their platform mostly just works. Their API playground and docs are like 100x better than whatever garbage Anthropic has.
I think their UX is way better, and I can have long AF conversations without lagging. I can even change models in the same conversation. Basic shit Anthropic can’t figure out (they can fleece their 20x max subscribers tho)
I think if they get the AI human interface right, they will have their iPhone moment and 10x.
The only moat they have is the fact that you still need a GPU of some kind to reliably run even a tiny LLM. But the gap between what absolutely needs a server farm and what can be run on a store-bought gaming computer is quickly closing. You can already run mixture-of-experts models on gaming rigs with a high degree of usability compared to just one year ago. And that tech continues to be pushed further and further. It's only a matter of time until we see ChatGPT levels of access running on a quad-core laptop, totally offline. And once that happens, all such a system would need is the correct tooling on top of the AI model "brain" to make it usable.
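For a sense of what "running locally" looks like today, here's a minimal sketch using llama-cpp-python with a quantized GGUF model; the model file and parameters are placeholders, not a recommendation:

```python
# Minimal local-inference sketch (pip install llama-cpp-python).
# The GGUF file below is a hypothetical placeholder; any quantized model,
# including mixture-of-experts models, loads the same way.
from llama_cpp import Llama

llm = Llama(
    model_path="./some-moe-model.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=-1,  # offload every layer to the GPU if VRAM allows
    n_ctx=8192,       # context window; size it to fit your hardware
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why does local inference matter?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

The point isn't this particular library; it's that the tooling gap between "needs a server farm" and "runs on a gaming rig" is mostly plumbing like this.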
And beyond that, what if you could have an AI model on an ASIC-style add-in card? Where's their moat then?
The token cache works on CUDA too. So yeah, the initial loadout sucks, but almost everything from then on is solid.
I mean you could say the same about a Macbook or iPhone/iPad, but for the actual people (not HN users lol) out there, they vastly prefer Apple to HP, Dell, etc. Due to their wallet some can't though.
There are literally thousands of other laptops that do the "same thing" (computer for doing shit).
Those who say otherwise are usually broke and know deep down that given proper purchasing power, they too would prefer to use a $3k Macbook Pro than some POS Dell.
Same with android.
Everyone knows it is cope based on PP, it is just in poor taste to actually call it like it is (idc)
Also Claude Opus 4.1 runs multidimensional circles around GPT-5 in my view. The only better use case for GPT-5 is when you need it to scrape the web for data
https://github.com/anthropics/claude-code/issues/8449
I hope they get destroyed in the AI race.
Makes one wonder if Google will eventually sweep this field.
So guess which app I prefer. That's the argument for Electron, and it is a good one.
Moving from ChatGPT to Claude I would lose a lot of this valuable history.
Why use a car when you can just walk?
In the EU/UK you might not have rights to the memories right now, but you've rights to the inputs that created those memories in the first place.
Wouldn't be too hard to export your chat history into a different AI automatically.
No moat.
I think ChatGPT might turn into just “chat” as the next evolution of the term.
https://www.reddit.com/r/Bard/comments/1mkj4zi/chatgpt_pro_g...
I question this. Each vendor's offering has its own peculiar prompt quirks, does it not? Theoretically, switching RDBMS vendors (say Oracle to Ingres) was also "an afternoon's work", but it never happened. The minutiae are sticky with these sorts of almost-but-not 'universal' interfaces.
The bigger problem is that there was never a way to move data between oracle->postgres in pure data form (i.e. point pgsql at your oracle folder and it "just works"). Migration is always a pain, and thus there is a substantial degree of stickiness, due to the cost of moving databases both in terms of risk and effort.
In contrast, vendors [1] are literally offering third-party LLMs (such as Claude) in addition to their own, and offering one-click switching. This means users can try, and if they desire, switch with little friction.
[1] https://blog.jetbrains.com/ai/2025/09/introducing-claude-age...
All one needs to do is say something like "tell me all of the personalization factors you have on me" and then just copy and paste that into the next LLM with "here's stuff you should know about how to personalize output for me".
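As a rough sketch of that hand-off (both SDKs are real, but the model IDs are placeholders, and the export step only works as well as the first model's willingness to summarize its own memory):

```python
# Sketch: export one assistant's "personalization factors", inject into another.
# Model IDs are illustrative placeholders; swap in whatever is current.
from openai import OpenAI
import anthropic

openai_client = OpenAI()               # reads OPENAI_API_KEY from the environment
claude_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

# Step 1: ask the first assistant what it "knows" about you.
export = openai_client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[{"role": "user",
               "content": "Tell me all of the personalization factors you have on me."}],
)
profile = export.choices[0].message.content

# Step 2: hand that profile to the next assistant as standing context.
reply = claude_client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder
    max_tokens=1024,
    system="Here's stuff you should know about how to personalize output for me:\n"
           + profile,
    messages=[{"role": "user", "content": "Hi! Pick up where my last assistant left off."}],
)
print(reply.content[0].text)
```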
True, but that's not really applicable here, since LLMs themselves are not stable, and are certainly not stable within a vendor's own product line. Imagine if every time Oracle shipped a new version it was significantly behaviorally inconsistent with the previous one. Upgrading within a vendor and switching vendors end up being the same task. So you quickly solidify on either
1) never upgrading, although with these being cloud services that's not necessarily feasible, and since LLMs are far from a local maximum in quality, that'd quickly leave your stack obsolete,
or
2) being forced to be robust, which makes it easy to migrate to other vendors
It turns out that they can use the same prompt system for all of them, with no changes, and still solve 5/6 IMO problems. I think this is possibly iffy, since people might have updated the models etc., but it's pretty obvious that this kind of thing is how OpenAI is doing its multi-stage thinking for maths internally.
Consequently if prompt systems are this transferable for these hard problems, why wouldn't both they and individual prompts, be highly transferable in general?
Changing the LLM backend in some IDE is as complicated as selecting an option in a dropdown for those who integrate such a feature. There are other scenarios where it might be a bit more complicated to transition, of course, but that's it.
The vendors have all standardised on OpenAI's API surface - you can use OpenAI's SDK with a number of providers - so switching is very easy. There are also quite a few services that offer this as a service.
The real test is whether a different LLM actually works - hence the need for evals to check.
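Concretely, "standardised on OpenAI's API surface" means a provider switch is often just a different base URL and model ID in the same SDK. A sketch, with the second endpoint and both model IDs as illustrative placeholders:

```python
# Same OpenAI SDK, two different backends: only base_url, api_key and the
# model ID change. (The second endpoint and model IDs are placeholders.)
from openai import OpenAI

client_a = OpenAI()  # default: api.openai.com, key from OPENAI_API_KEY
client_b = OpenAI(
    base_url="https://api.other-provider.example/v1",  # hypothetical provider
    api_key="OTHER_PROVIDER_KEY",
)

prompt = [{"role": "user", "content": "Same request, different backend."}]
for client, model in [(client_a, "gpt-4o"), (client_b, "their-model-id")]:
    resp = client.chat.completions.create(model=model, messages=prompt)
    print(model, "->", resp.choices[0].message.content[:80])
```

The plumbing is the cheap part; as the comment above says, the real switching cost is re-running your evals against the new model.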
That applies even when you switch models within a vendor, though.
The competitors have not come even close to Google's level of quality.
With LLMs, it's different. Gemini/Claude are as good, for the most part. And users don't care that much either - most use the standard free ChatGPT, which likely is worse than many competitors' paid models.
Google Search would be moatless if not for the AdMob purchase.
Meanwhile, Bing search is actually a platform, and is what then powers other "search engines" (DuckDuckGo, Kagi, etc...)
As Peter Thiel says: “competition is for losers”
OpenAI will either use customer data to enshittify to a level never seen before, or they will go insolvent.
OpenAI turned that research into a product before Google, which is a huge failure on Google's part, but that's orthogonal to the invention of what powers modern models.
Maybe some are too young to remember the great migrations from/to MySpace, MSN, ICQ, AIM, Skype, local alternatives like StudiVZ, ..., where people used to keep in contact with friends. Facebook was just the latest and largest platform where people kept in touch and expressed themselves in some way. People adding each other on Facebook before others to keep in touch hasn't been a thing for 5 years. It's Instagram, Discord, and WhatsApp nowadays depending on your social circle (two of which Meta wisely bought because they saw the writing on the wall).
If I open Facebook nowadays, then out of ~130 people I used to keep in touch with through that platform, pretty much nobody is still doing anything on there. The only sign of life will be some people showing as online because they've the facebook app installed to use direct messaging.
No, people easily migrate between these platforms. All it takes is put your new handles (discord ID/phone number/etc) as a sticky so people know where to find you. And especially children will always use whichever platform their parents don't.
Small caveat: This is a German perspective. I don't doubt there's some countries where Facebook is still doing well.
No? It's rare for these platforms to survive, the one that was closest to challenging Facebook was kneecapped by the US government.
The time between the founding of MySpace and Facebook was a little over a year. Instagram has been the largest social network for close to a decade now, and it's not like others haven't been trying. META is up 600% over the last 3 years. I really question your definition of the word "dying".
When you realize this, you realize that a lot of other supposedly valuable tech companies operate in the exact same way. Worrying that our parents' retirement depends heavily on their valuations!
Maybe you should short the stock to hedge your parents' retirement :)
In the last few interviews with him that I have listened to, he has said that what he wants is "your AI" that knows you, everywhere that you are. So his game is "switching costs" based on your own data. So he's making a device, etc. etc.
Switching costs are a terrific moat in many circumstances and require a 10x product (or whatever) to get you to cross over. Claude Code was easily a 5x product for me, but I do think GPT5 is doing a better job on just "remembering personal details", and it's compelling.
I do not think that apps inside chatgpt matters to me at all and I think it will go the way of all the other "super app" ambitions openai has.
Today I asked GPT5 to extract a transcript of all my messages in the conversation and it hallucinated messages from a previous conversation, maybe leaked through the memory system. It cannot tell the difference. Indiscriminate learning and use of memory system is a risk.
And most people actually don't care what CPU they have in their laptop (enthusiasts still do which i think continues to match the analogy), they care more about the OS (chatGPT app vs gemini etc).
Sorry, but you have to be beyond thick to believe any of this.
Can the world and tech survive fruitfully without AI? Yes. Can the world and tech survive without electricity and transistors? Not really. The modern world would come crashing down if transistors and electricity disappeared overnight. If AI disappeared overnight, the world might just be a better place.
The value is not in the LLM but in vertical integration and providing value. OpenAI has identified this and is doing its vertical integration in a hurry. If the revenue sustains, it will be because of that. For the consumer space, again, Nvidia is better positioned with their chips and SoCs, but OpenAI is not a sure thing yet. By that I don't mean they are going to fall apart; they will continue to make a large amount of money, but whether it's their world or not is still up in the air.
The irony being that LLMs are particularly good at writing the web frontend code, lowering the technical barrier to entry for competitors.
It only takes labs producing better and better models, and the race to the bottom on token costs.
The moat is the branding; for most people, AI means ChatGPT.
(You can say default in various browsers and a phone OS and that's probably the main component but it's not clear changing that default would let Bing win or etc.)
- Open-source LLM models are at most 12 months behind ChatGPT/Gemini;
- Gemini from Google is just as good, and also much cheaper, for both Google and the users, since they make their own TPUs;
- Coding: OpenAI has nothing like Sonnet 4.5.
They look like they invested billions to do research for competitors, which have already taken most of their lunch.
Now with the Sora 2 App, they are just burning more and more cash, so people watch those generated videos in Tiktok and Youtube.
I find it hilarious all the big talk. I hope I get proven wrong, but they seem to be getting wrecked by competitors.
1. OpenAI's corporate strategy is becoming a monopoly.
2. OpenAI is investing in infrastructure because they think they'll have lots of users in the future.
3. Making videos on Sora is fun, and people are gonna post more of these.
How does that substantiate "we live in OpenAI's world"? Am I missing something?
E.g. here's a forecast of 2021 to 2026 from 2021, over a year before ChatGPT was released. It hits a lot of the product beats we've come to see as we move into late 2025.
https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-...
(The author of this is one of the authors of AI 2027: https://ai-2027.com/)
Or e.g. AI agents (this is a doc from about six months before ChatGPT was released: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...)
That was 2017. And of course Google & UofT were working on it for many years before the paper was published.
Edit: mixed up my dates claiming DALL E came out before GPT 3
I think it's also worth pointing out that the polish on these products was not actually there on day one. I remember the first week or so after ChatGPT's initial launch being full of stories and screenshots of people fairly easily getting around some of the intended limitations with silly methods like asking it to write a play where the dialogue has the topic it refused to talk about directly or asking it to give examples about what types of things it's not allowed to say in response to certain questions. My point isn't that there wasn't a lot of technical knowledge that went into the initial launch, but that it's a bit of an oversimplification to view things at a binary where people didn't know how to do it before, but then they did.
Deep learning has now been around for a long time. Running these models is well understood.
obviously running them at scale for multiple users is more difficult.
The actual front ends are not complicated - as is evidenced by the number of open source equivalents.
I did think his GPT-5 commentary was good, insofar as picking up the nuance of why it's actually better than the immediate reactions I, at least, saw in the headlines.
Where I do agree with you is how Stratechery's getting a little oversaturated. I'm happy Ben Thompson is building a mini media empire, but I might have liked it more when it was just a simple newsletter that I got in my inbox, rather than pods, YouTube videos, and expanding to include other tech/news doyens. Maybe I'm just a tech media hipster lol.
They do look like trying to grab the market with tooling but if you can use their tools (oss) and switch the models then where is the moat?
Ah yes, the ChromeOS strategy. How'd that work out for Google?
Building a platform is good, a way to make quite a bit of money. It's worked really well for Google and Apple on phones (as Ben notes). But there's a reason it didn't happen for Google on PCs. Find it hard to believe it will for OpenAI. They don't (and can not) control the underlying hardware.
(We were joking about it just last week because my partner asked my eldest what was the Power Point he was working on and he said, "Whats Power Point?")
Many? Most? Possibly, but absolutely not every single one.
They’re the only AI lab with their own silicon.
Edit: they didn’t say “likely,” they just marveled at the talent + data + TPU + global data centers + not needing external investment.
If I recall correctly, their theory was that google could deliver tokens cheaper than anyone else.
Pre-AI :: Examples :: Timeline
---------------------------------------------------------------------
NHI    Non-human Intelligence :: e.g. dolphins, apes, crows, etc. :: millions of years
HI     Human Intelligence :: e.g. Einstein, Trump, Confucius, Homer :: thousands of years

Post-AI
---------------------------------------------------------------------
AI     Artificial Intelligence :: ChatGPT, Gemini, many others :: countable months
AGI    Artificial General Intelligence :: not there yet :: zero
AIApHI AI Assisted/Approved/Audited Human Intelligence :: see AI :: countable months
HIApAI Human Intelligence Assisted/Approved/Audited AI :: the future? :: zero
I have mentioned no individuals here to avoid legal action. My point on AI is ... wait and see. Chill.
There's so much of what, "AI" is becoming that just seems like a massive psy-op to breathe one last breath of life into what is the skeleton of the old Silicon Valley. Innovation is possible but if the future really is liberal authoritarianism/oligarchy there's no room in the contrived market for, "innovative products that greatly improve human life."
There's hope in: https://worrydream.com/
- Sneaking in how someone went from a Sora skeptic to a purported creator within a week.
- Calling the result the "future of creation".
- Titling the advertisement "It’s OpenAI’s World, We’re Just Living in It".
What they are doing here is pitching Sora to attention-deficit teenagers as yet another way to turn their favorite content creator's hair red. As if that didn't already exist.
OpenAI is a geopolitically important play besides being a tech startup so it gets pumped in funding and in PR, to show that we're still leading the world. But that premise is largely hallucinated.
A fair chunk of the tech who’s-who seem to find his thinking useful.
[0] https://www.vox.com/2017/10/16/16480782/substack-subscriptio...
There's nothing inherently wrong with comments referring to him by his first name, but I don't think I've ever seen a similar pattern with any other sources here, outside of maybe a few with much more universal name recognition. It's always struck me as a little odd, but not a big enough deal for me to go out of my way to comment about it before now.
And what I mean by that is: what evidence are either of us bringing that our claims are true?
For my part, I admit I have no evidence at all for my claim above, and it seems you're in the same boat.
That said, the website above lists:
>I am not paid by any company for any opinion I post on Stratechery or in any public forum, including podcasts and Twitter.
>I do not hold individual stocks in any company I write about. I do hold various 401k and IRA accounts that invest in a wide-ranging basket of stocks, over which I have no control.
>I occasionally agree to speaking engagements for both public and private events, but not for companies I cover on Stratechery. Compensation will vary based on the nature of the customer and event, as well as the topic. I do not do any consulting at this time.
So you tell me.
>I pay for all of my own travel and expenses when I attend company events.
For those who can read between the lines, here is my moment of realisation:
The way forward does not have to be all about the way forward. The narrative is key, and we need to change the narrative to be positive so we may move forward without all the chaos and trauma.
AI can contribute to such a way instead of make things worse. I will have some ideas to share. I hope you do, too. We have a chance now. The fog of war is over. And I have to say Trump did what Biden refused to do. I didn’t vote for either, but I am glad as hell he stopped the carnage and found a way to move toward the future, beyond the hate and carnage.
I will bring some ideas forward on how AI can help with peace.
It’s definitely a point of deep recognition.
What happened is huge. We need to stand behind peace and progress.
I’m convinced.
Peace to all those who have lost their lives for us to get to this moment, on both sides.
I read Stratechery. Ben's articles are what he makes for public consumption. This weekly summary thing is a new roundup for subscribers, and just happens to be public, and if you're not a subscriber you can't follow the links. If Ben could choose something to be #1 on Hacker News it would likely be a full article with this headline, rather than a weekly summary post for subscribers.
OpenAI has been at the top of the app store for years now. A lot of people are interested in it. That trivially explains the upvotes without a conspiracy.
Kudos to the headline writer on this one.