There weren't with WeWork.
The SpaceX/xAI IPO will be more interesting.
xAI isn't even a point of discussion.. it's just a scheme to rip off investors.
WeWork.. hard to take anyone seriously who ever invested in this bad boy.
The EV market was mighty small when Tesla started too.
Skate to where the puck is going.
Masayoshi Son may not be providing returns for his investors, but he is providing entertainment for the rest of the world.
As for OpenAI, I'm not sure if Altman is an idiot or a fraudster. The claims about reaching AGI/ASI through scaling, and investing on that basis, were always delusional at best and fraudulent at worst. Maybe he just hoped to divert enough money to engineers to make actual breakthroughs, or that the hardware would become a moat, but competitors have kept pace. And I fully agree that they are now mostly hanging on with an insanely bad cost structure.
[1] https://www.ft.com/content/90aa74a5-b39d-4131-a138-367726cb1...
Easy, they just have to sell their overpriced VRAM chips (which haven't been manufactured yet), from their GPUs (which haven't been bought yet), which are in their data centers (the ones they're planning to build "soon"). It really isn't rocket science.
Are we acting like this is a low probability outcome?
I like his podcast, Better Offline[0]. Some here might also like it, some would definitely hate it. He's not right about everything he says, but I agree with a lot of it. He has a newsletter for those who don't like podcasts.
I went with a bit of Roche, Novartis, ... So something that would at least cushion the fall with dividends and not being in the GenAI crossfire since they definitely use AI/ML (I got them through an ETF). Also almost all my assets are now either CHF or Euro denominated/hedged. I am also not comfortable with the dollar weakening and the next Fed head probably cutting rates again like Trump wishes
Okay
Nvidia might have an OK P/E right now, but the question is whether the industry can sustain buying over $50B of GPUs every quarter (or whether it even needs to).
Will everyone just accept negative ROI in the name of hype? Will scalers be able to meaningfully increase service prices without eroding customer interest?
These are all open questions that a simple P/E figure can't answer.
I rebalanced the same as I always have.
Also, there aren't a ton of great options that are safer.
Then again, I'm 20+ years from accessing it, so I figure I'm about 5 years out from moving more to S&P tracking and bonds. I am not a financial advisor.
This is the ONE thing you aren’t supposed to do as a passive investor. A play like this will almost always cause you to lose upside, and some people never get back in and miss out on almost a lifetime of growth.
THE MARKETS ARE NOT RATIONAL.
It isn't valid. It's being a contrarian, and not the outgrowth of some logical process.
https://www.yahoo.com/news/articles/openai-chief-sam-altman-...
https://www.nytimes.com/2023/05/16/technology/openai-altman-...
No disrespect, just sharing my thoughts. I see “Elmo Musk” and “Orange Man” etc., and I immediately think this is not worth reading (regardless of my opinion of those persons).
He 100% deserves his title for Loopt alone. I don't have any credibility to begin with; this is an anonymous wantrepreneur shitposting forum, not the US Congress. I know people lurking here love these sociopaths but come on...
I have the disks but only random old gaming PCs to put them in. I think I'm going to expand my Proxmox cluster and run Ceph so that I don't have to pay that 6x markup or whatever the fuck it is these days.
I'm tired, man. I'm tired of living in this world where AI is simultaneously an unstoppable eschatological juggernaut that's already making everything worse and at best is going to steal my livelihood and destroy my family's future, but also a hype driven shell game with full buy-in from world leaders and the moneyed elite who see a golden opportunity to extract unprecedented amounts of wealth for themselves before the West falls and they have to make other arrangements.
It’s all fun and games till it’s not. All this capital investment is going to start hitting earnings as massive depreciation and/or mark-to-market valuation adjustments, and if the bubble pops (or even just cools a bit) the math starts to look real ugly real quick.
Great promise: replace all your call centre staff, then your developers, with AI. It is cheaper, but only because the AI companies are not charging you what it really costs to do the work.
The fully allocated cost of one call to a human agent is $3-5. That pays for a lot of inference.
I calculated the full cost of an AI-handled call to be $0.05 a minute, and that includes charges for a 1-800 number and the various other AWS services it uses.
Nova does the text processing after a separate voice-to-text service (Lex).
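As a back-of-the-envelope check on those figures (the rates come from the comments above; the 6-minute call length is an assumption for illustration):

```python
# Figures from the thread: a human-handled call fully allocated at $3-5,
# an AI-handled call at ~$0.05 per minute all-in (1-800 number + AWS services).
HUMAN_COST_PER_CALL = (3.00, 5.00)  # USD, low/high estimate
AI_COST_PER_MINUTE = 0.05           # USD, all-in per the comment above

def ai_call_cost(minutes: float) -> float:
    """Cost of an AI-handled call of the given length."""
    return AI_COST_PER_MINUTE * minutes

# A hypothetical 6-minute call costs about $0.30, i.e. roughly
# 10x-17x cheaper than the low/high human estimates.
six_min_cost = ai_call_cost(6)
savings_low = HUMAN_COST_PER_CALL[0] / six_min_cost
savings_high = HUMAN_COST_PER_CALL[1] / six_min_cost
```

Even a long AI call stays well under a dollar, which is the "pays for a lot of inference" point made above.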
OpenAI gets $30B, buys chips from Nvidia for $30B.
How is that an investment?
It's why Amodei has spoken in favor of stricter export controls and Altman has pushed for regulation. They have no moat.
I'm thankful for the various open-weighted Chinese models out there. They've kept good pace with flagship models, and they're integral to avoiding a future where 1-2 companies own the future of knowledge labor. America's obsession with the shareholder in lieu of any other social consideration is ugly.
Everyone else has to build infrastructure. Google just had to build a single part, really, and already had the software footprint to shove it everywhere - and the advertising data to deliver features that folks actually wanted, but could also be monetized.
But when you think about it it's actually a bit more complex. Right now (eg) OpenAI buys GPUs from (eg) NVidia, who buys HBM from Samsung and fabs the card on TSMC.
Google instead designs the chip, with, I assume, significant assistance from Broadcom, at least on the manufacturing side; Broadcom then buys the HBM from the same supplier(s) and fabs the chip at TSMC.
So I'm not entirely sure the margin savings are that huge. I assume Broadcom charges a fair bit to manage the manufacturing process on Google's behalf. Almost certainly a lot less than Nvidia would charge in gross margin, but Google also has to pay for a lot of engineers to do the work that would otherwise be done inside Nvidia.
No doubt it is a saving overall - otherwise they wouldn't do it. But I wonder how dramatic it is.
Obviously Google has significant upside in the ability to customise their chips exactly how they want them, but NVidia (and to a lesser extent) AMD probably can source more customer workflows/issues from their broader set of clients.
I think "Google makes its own TPUs" makes a lot of people think the entire operation is in house, but in reality they're just doing more design work than the other players. There's still a lot of margin "leaking" through Broadcom, the memory suppliers, and TSMC, so I wonder how dramatic the saving really is.
They've obviously adapted the design, but optimising in hardware like that is a risk: if there is another model-architecture jump, a narrow, specialised set of hardware means you can't generalise enough.
It'll be true; everyone does see it coming (just like with rare-earth minerals). But market-infected Western society doesn't have the maturity to do anything about it. Businesses won't because they're expected to optimize for short-term financial returns; government won't because it's hobbled by biases against it (e.g. any failure becomes a political embarrassment, and there's a lot of pressure to stay out of areas where businesses operate and not interfere with them).
America needs a lot more strategic government control of the economy, to kick businesses out of their short-term shareholder-focused thinking. If it can't manage that, it will decline into irrelevance.
For now people identify LLMs and AI with the ChatGPT brand.
This seems like it might be the stickiest thing they can grab ahold of in the long term.
But I tend to agree that the ultimate winner is going to be Google. Maybe Microsoft too.
Unless you're totally dumb or a super genius, LLMs can easily provide that kind of monthly value to you. This is already true for most SOTA models, and will only become more true as they get smarter and as society reconfigures for smoother AI integration.
Right now we are in the "get them hooked" phase of the business cycle. It's working really damn well, arguably better than any other technology ever. People will pay, they're not worried about that.
Plans with unlimited talk/text and 5GB+ of data have been available for <$30 for over a decade now.
The AI labs are not worried.
In a world where cheap open-weight models and free-tier closed-source models are flooding the market, you need a very good reason to convince regular people to pay for particular models en masse in the B2C market.
"Regular People" know ChatGPT. They know Gemini (largely because google shoves it in their face). They don't know anything else (maybe Siri, because they don't know the difference, just that siri now sucks). I'm not sure if I would count <0.1% of tokens generated being "flooding the market".
Just like you don't give much thought to the breed of grass growing in your yard, they don't give much thought to the AI provider they are using. They pay, it does what they want, that's the end of it. These are general consumers, not chronically online tech nerds.
You need to install Linux and actively debug it. For AI, regular people can just switch around by opening a browser. There are many low- or zero-barrier choices. Did you know Windows 11 is mostly free for B2C customers now too? Nobody is paying for anything.
> "Regular People" know ChatGPT. They know Gemini (largely because google shoves it in their face). They don't know anything else (maybe Siri, because they don't know the difference, just that siri now sucks). I'm not sure if I would count <0.1% of tokens generated being "flooding the market".
You just proved my point. Yes they are good, but why would people pay for it? Google earns money through ads mostly.
> Just like you don't give much thought to the breed of grass growing in your yard, they don't give much thought to the AI provider they are using. They pay, it does what they want, that's the end of it. These are general consumers, not chronically online tech nerds.
That's exactly the point: most internet services are free. Nobody is paying for anything because they are ad-supported.
I really dislike Google, but it is painfully obvious they have won this. OpenAI and Anthropic bleed money. Google can bankroll Gemini indefinitely because they have a very lucrative ad business.
We can't even argue that bankrolling Gemini for them is a bad idea. With Gemini they can have yet another source of data to monetize users from. Technically Gemini can "cost" them money forever, and it would still pay for itself because with it they can know even more data about users to feed their ad business with. You tell LLMs things that they would never know otherwise.
Also, they mostly have the infrastructure already. While everyone else spends tons of money to build datacenters, they have those already. Hell, they even make money by renting compute to their AI competitors.
Barring some serious, unprecedented regulatory action against them (very unlikely), I don't see how they would lose here.
Unfortunately, I might add. I consider Google an insidiously evil corporation. The world would be much better without it.
I'm not using Google services much at all and I don't use Gemini but I'm sure it will serve the users well. I just don't want to be datamined by a company like Google. I don't mind my data improving my services but I don't want it to be used against me for advertising etc.
> This seems like it might be the stickiest thing they can grab ahold of in the long term.
For now, but do you still Xerox paper?
Then it's doomed. Which is also my opinion, I don't disagree at all with you.
The only reason it's expected now is because of a slow boil.
Google has been guilty of all of the same crimes, but it bothers me to see new firms pop up with the same rapacious strategies. I hope Anthropic and OpenAI suffer.
Google's best trick was skirting the antitrust ruling against them by making the judge think they'd "lose" AI. What a joke.
Meanwhile they're camping everyone's trademarks, turning them into lucrative bidding wars because they own 92% of the browser URL bars.
Try googling for Claude or ChatGPT. Those companies are shelling out hundreds of millions to their biggest competitor to defend their trademarks. If they stop, suddenly they lose 60% of their traffic. Seems unfair, right?
What I mean is that I am so bitter about OpenAI and Anthropic's social media manipulation and the effects of AI psychosis on the people around me that I would gladly accept a worse future and a less free society just to watch them suffer.
Anthropic and Dario Amodei are undoubtedly bigger scammers IMO.
Because recent open source models have reached my idea of "enough". I just want the bubble to burst, but I think the point of the bubble burst is that Anthropic and OpenAI couldn't survive whereas Google has chances of survival but even then we have open source models and the bubble has chances of reducing hardware costs.
OpenAI and Anthropic walked so that Google or Open source models could run but I wish competition and hope that maybe all these companies can survive but the token cost is gonna cost more, maybe that will tilt things more towards hardware.
I just want the bubble to burst, because prolonging it would have a much more severe impact than whatever improvements we might see in open-source models. And to be quite frank, we might be living through an over-stimulus of "Intelligence"; has the world improved?
Everything I imagined in AI has more or less been reached and exceeded, and I am not satisfied with the result. Are you guys?
I mean, now I can make scripts to automate this and that, but I feel like we lost something much more valuable in the process. I have made almost all of my projects with LLMs, and yet they are still empty. Hollow.
So to me, bursting the bubble is of the utmost importance now, because as long as it continues we are subsidizing the bubble itself, and we are the ones who will face the most impact. We already are facing it.
In hindsight, I think evolution plays a part in this. We humans are hard-coded not to step outside the tribe / the newest thing, so maybe collectively, as a civilization, we can get disenchanted first via crypto and now AI. But we can also think for ourselves, and civilization is built from us, in my naive view.
So the only thing we can do is think for ourselves and try to learn but it seems as if that's the very thing AI wants to offload.
Have a nice day.
I would be really curious to know what tools you've tried and are using where gemini feels better to use
Last night I was happily coding away with Codex after writing off Gemini CLI yet again due to weirdness in the CLI tooling.
I ran into a very tedious problem that all of the agents failed to diagnose and were confidently patching random things as solutions back and forth (Claude Code - Opus 4.6, GPT-5.3 Codex, Gemini 3 Pro CLI).
I took a step back, used a Python script to extract all of the relevant code, and popped open the browser to have Gemini-3-Pro set to Pro (highest) reasoning and GPT-5.2 Pro crunch on it.
They took a good while thinking.
But, they narrowed the problem down to a complex interaction between texture origins, polygon rotations, and a mirroring implementation that was causing issues for one single "player model" running through a scene and not every other model in the scene. You'd think the "spot the difference" would make the problem easier. It did not.
I then took Gemini's proposal and passed it to GPT-5.3-Codex to implement. It actually pushed back and said "I want to do some research because I think there's a better code solution to this". Wait a bit. It solved the problem in the most elegant and compatible way possible.
So, that's a long winded way to say that there _is_ a use for a very smart model that only works in the browser or via API tooling, so long as it has a large context and can think for ages.
In a lot of industries, you can't afford this anyway, since all code has to be carefully reviewed. A lot of models are great when you do isolated changes with 100-1000 lines.
Sometimes it's okay to ship a lot of code from LLMs, especially for the frontend. But, there are a lot of companies and tasks where backend bugs cost a lot, either in big customers or direct money. No model will allow you to go wild in this case.
Coding has been vastly improved in 3.0 and 3.1, but Google won't give us the full juice as Google usually does.
Google has the datasets, the expertise, and the motivation.
- yes
- yes (not quite as good as CC/Codex but you can swap the API instead of using gemini-cli)
- same stuff as them
- better than the others; Google got long (1M-token) context right before anyone else and doesn't charge two kidneys, an arm, and a leg like Anthropic
but claude and claude code are different things
Gemini 3.1 (and Gemini 3) are a lot smarter than Claude Opus 4.6
But...
Gemini 3 series are both mediocre at best in agentic coding.
Single shot question(s) about a code problem vs "build this feature autonomously".
Gemini's CLI harness is just not very good, and Gemini's approach to agentic coding leaves a lot to be desired. It doesn't perform the double-checking that Codex does, it's slower than Claude, and it runs off and does things without asking and without clearly explaining why.
>it's [Gemini] nowhere near claude opus
Could you be a bit more specific, because your sibling reply says "pretty close to opus performance" so it would help if you gave additional information about how you use it and how you feel the two compare. Thanks.
On top of every version of Gemini, you also get both Claude models and GPT-OSS 120B. If you're doing webdev, it'll even launch a (self-contained) Chrome to "see" the result of its changes.
I haven't played around with Codex, but it blows Claude Code's finicky terminal interface out of the water in my experience.
One thing I don’t get though, if superintelligence is really 5 years away, what’s going to be the point of a fixed-interest 100y bond.
openai and anthropic know already what will happen if they go public :)
Google's main revenue source (~75%) is advertising. They will absolutely try to shove ads into their AI offerings. They simply don't have to do it this quickly.
But, also, probably google.
Google/Apple/Nvidia - those with warchests that can treat this expenditure as R&D, write it off, and not be up to their eyeballs in debt - those are the most likely to win. It may still be a dark-horse previously unknown company but if it is that company will need to be a lot more disciplined about expenditures.
RAM shortage is probably a bubble indicator itself. That industry doesn’t believe enough in the long term demand to build out more capacity.
Plus producers will now feel free to expand production and dump even more onto the market. This is great if you needed that amount of supply, but it's terrible if you were just trying to deprive others.
I don't think this is accurate. Maybe it will change in the future, but it seems like the Chinese models aren't keeping up on actual training techniques; they're largely relying on distillation. Which means they'll always be catching up and never at the cutting edge. https://x.com/Altimor/status/2024166557107311057
You link to an assumption, and one that's seemingly highly motivated.
Have you used the Chinese models? IMO Kimi K2.5 beats everything but Opus 4.6 and Gemini 3.1... and it's not exactly inferior to the latter, it's just different. It's much better at most writing tasks, and its "Deep Research" mode is by a wide margin the best in the business. (OpenAI's has really gone downhill for some reason.)
(I work at OpenAI, but on the infra side of things not on models)
That's pretty cutting edge to me.
EDIT: It's not a swarm — it's closer to a voting system. All three models get the same prompt simultaneously via parallel API calls (OpenAI-compatible endpoints), and the system uses weighted consensus to pick a winner. Each model has a weight (e.g. step-3.5-flash=4, kimi-k2.5=3, glm-5=2) based on empirically observed reliability.
The flow looks like:
1. User query comes in
2. All 3 models (+ optionally a local model like qwen3-abliterated:8b) get called in parallel
3. Responses come back in ~2-5s typically
4. The system filters out refusals and empty responses
5. Weighted voting picks the winner — if models agree on tool use (e.g. "fetch this URL"), that action executes
6. For text responses, it can also synthesize across multiple candidates
The key insight is that cheap models in consensus are more reliable than a single expensive model. Any one of these models alone hallucinates or refuses more than the quorum does collectively. The refusal filtering is especially useful: if one model over-refuses, the others compensate.

Tooling: it's a single Python agent (~5200 lines) with protocol-based tool dispatch, 110+ operations covering filesystem, git, web fetching, code analysis, media processing, a RAG knowledge base, etc. The quorum sits in front of the LLM decision layer, so the agent autonomously picks tools and chains actions. Purpose is general: coding, research, data analysis, whatever.

I won't include it for length, but I just kicked off a prompt to get some info on the recent Trump tariff Supreme Court decision: it fetched stock data from Benzinga/Google Finance, then researched the SCOTUS tariff ruling across AP, CNN, Politico, The Hill, and CNBC, all orchestrated by the quorum picking which URLs to fetch and synthesizing the results, continuing until something like 45 URLs were fully processed. Output was longer than a typical single chatbot response, because you get all the non-determinism from what the models actually ended up doing in the long-running execution, and then it needs to get consensus, which means all of the responses get at least one or N additional passes across the other models to get to that consensus.
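The parallel-call-then-weighted-vote core described above could be sketched roughly like this (a hypothetical illustration: the model names and weights come from the comment, while the `call_model` stub, the refusal markers, and everything else are assumptions, not the actual agent's code):

```python
import concurrent.futures
from collections import Counter

# Weights from the comment, based on empirically observed reliability.
WEIGHTS = {"step-3.5-flash": 4, "kimi-k2.5": 3, "glm-5": 2}

# Crude refusal detection; a real system would be more thorough.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry")

def is_refusal(text: str) -> bool:
    """Filter out refusals and empty responses before voting."""
    t = text.strip().lower()
    return not t or t.startswith(REFUSAL_MARKERS)

def quorum(prompt: str, call_model) -> str:
    """Call all models in parallel, then pick a winner by weighted vote.

    call_model(name, prompt) -> str is a stand-in for an
    OpenAI-compatible API call to the named model.
    """
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(call_model, name, prompt) for name in WEIGHTS}
        answers = {name: f.result() for name, f in futures.items()}

    votes = Counter()
    for name, answer in answers.items():
        if not is_refusal(answer):
            # Models that agree on the same response/action pool their weight.
            votes[answer] += WEIGHTS[name]

    if not votes:
        raise RuntimeError("all models refused or returned nothing")
    return votes.most_common(1)[0][0]
```

Under this scheme, two lighter models agreeing (weight 3 + 2) can outvote the heaviest model (weight 4), and a refusal from any one model simply drops out of the tally, which matches the "others compensate" behavior described above.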
Cost-wise, these three models are all either free-tier or pennies per million tokens. The entire session above (dozens of quorum rounds, multiple web fetches) cost less than a single Opus prompt.

Artificial Analysis isn't perfect, but it is an independent third party that actually runs the benchmarks themselves, and they use a wide range of benchmarks. It is a better automated litmus test than any other that I've been able to find in years of watching the development of LLMs.
And the gap has been rapidly shrinking: https://www.youtube.com/watch?v=0NBILspM4c4&t=642s
As I said, I have been following this stuff closely for many years now. My opinion is not informed just by looking at a single chart, but by a lot of experience. The chart is less fishy than blanket statements about the closed models somehow being way better than the benchmarks show.
It’s much better than the previous open models but it’s not yet close.
LLMs are useful and these companies will continue to find ways to capture some of the value they are creating.
On the user side, memory and context, especially as continual learning is developed, is pretty valuable. I use Claude Code to help run a lot of parts of my business, and it has so much context about what I do and the different products I sell that it would be annoying to switch at this point. I just used it to help me close my books for the year, and the fact that it was looking at my QuickBooks transactions with an understanding of my business definitely saved me a lot of time explaining.
On the enterprise side, I think businesses are going to be hesitant to swap models in and out, especially when they're used for core product functionality. It's annoying to change deterministic software, and switching probabilistic models seems much more fraught.
I have yet to see an in-depth analysis that supports this claim.
Anthropic on the other hand is very capable, and given the success of Claude Code and Cowork, I think they will maintain their lead across knowledge work for a long time just by having the best data to keep improving their models and everything around them. It's also the hottest tech company rn, like Google was back in the day.
If I need to bet on two companies that will win the AI race in the west, it's Anthropic and Google. Google on the consumer side mostly and Anthropic in enterprise. OpenAI will probably IPO soon to shift the risk to the public.
Edit: one thing I didn’t think about is that Anthropic more or less runs at the pleasure of AWS. If Amazon sees Anthropic as a threat to AWS, then it could be lights out.
OpenAI I'm sorry to say are all over the place. They're good at what they do, but they try to do too much and need near ponzi style growth to sustain their business model.
Anthropic has actually cracked Agentic AI that is generally useful. No other company has done that.
Similarly, OpenAI has made some massive investments in AMD hardware, and have also ensured that they aren't tied to nvidia.
I think it's nvidia that has less of a moat than many imagine they do, given that they're a $4.5T company. While small software shops might define their entire solution via CUDA, to the large firms this is just one possible abstraction engine. So if an upstart just copy pastes a massive number of relatively simple tensor cores and earns their business, they can embrace it.
At first the answer was "I can't say anything that might hurt people" but with a little persuasion it went further.
The answer wasn't the current official line but way more nuanced than Wikipedia's article. More in the vein of "we don't know for sure", "different versions", "external propaganda", "some officials have lied and been arrested since".
In the end, when I asked whether I should trust the government or consult multiple sources, it strongly suggested using multiple sources to form an opinion.
= not as censored as I expected.
Enterprise customers will gladly pay 10x to 20x for American models. Of course this means American tech companies will start to fall behind, combined with our recent xenophobia.
Almost all the top AI researchers are either Chinese nationals or recent immigrants. With the way we've been treating immigrants lately (plenty of people with status have been detained, often for weeks), I can't imagine the world's best talent continuing to come here.
It's going to be an interesting decade y'all.