https://www.smartcompany.com.au/startupsmart/firmus-raises-3...
What's worse is that OpenAI and the other AI companies are all intertwined. The chipmakers are invested in the datacenter operators, who are invested in the software companies. When the bubble implodes - and it will implode - the good will go down with the bad, and that's what makes a financial crash a true crash.
https://www.nbcnews.com/business/economy/openai-nvidia-amd-d...
How much of it is dumb money, though? That's the real question, and it remains highly relevant post-dotcom bust.
It wouldn't surprise me if most of the $1tn never turns up and the bubble bursts before a tenth of it becomes real.
Performative actions to drive up valuations and try to attract more investors absolutely feel bubbly to me.
1. Discounting products that are not only currently operating at a loss but are priced well below the actual resources required to produce them.
Or maybe not enough money soon enough, and at this scale that could be more of a disaster than it had to be.
So far it's not looking much like a business boom at all compared to the investment boom, which is undeniably massive - and that's where a good amount of the remaining prosperity is emanating from.
If you were a finance person, wouldn't you figure there were much bigger bonuses in getting involved with the cash being invested in AI than with the profits actually being made from it right now?
The fair criticism of the infra $ is asking where the non-VC, non-bank-loan cash stream is, but there could be a lot of B2B deals; e.g. Meta, TikTok and other behemoths do tend to make plenty of money and pay their bills, and have an extreme thirst for more AI capacity.
Take Oracle for example (as a whole, not just OCI) - tons of customers who are paying for AI-enhanced enterprise products.
It's still early days; as the cost of creating software continues to approach zero, the rules will change in ways that are hard to predict. The effect this will have on other white-collar industries is even more challenging to reason about.
NVIDIA's stock may eventually get decimated (though the company itself will be fine; it has a relatively low employee count and insane margins), and the Coreweaves of the world are definitely leveraged plays on compute that may indeed end up being dot-com-style busts. But a key difference is that the driving forces at the very top - the Microsofts and Amazons of the world - have huge free cash flows, real compute demand growth beyond the AI space, and fortress balance sheets.
"continues " is inaccurate. The cost of creating software is nowhere near approaching zero
In Sam's dreams, perhaps.
There is a commonly held belief that there is a level of compute (vaguely referred to as AGI) that would be extremely valuable, and those companies may continue to rationally fund AI research as R&D - though if the VC and loan funding dries up, there will probably be serious fights with the accounting departments. It is a good point that companies with huge war chests seem poised to keep investing even if VC money dries up due to the lack of end-user profitability. It'll be an interesting shift, but probably not as disastrous as the dot-com bust was.
There's the Nvidia we know (primarily graphics cards), and then there's what looks more like an investment firm "Nvidia" these days. The stock has grown so much that they are trying to sustain that growth by investing everywhere.
Nvidia has invested in so many companies in the ecosystem and beyond.
If this means that compute is considerably cheaper for OpenAI, it's a win for them too. But that remains to be seen.
I can only imagine what could grow out of an oversupply of rack-space and electrical power generation, post-crash.
In per-capita terms, Tasmania has 4.5 sheep per person, Victoria 1.9, NSW 0.28, SA 5.2, WA 3.3, and QLD 0.4.
[1] https://www.ga.gov.au/scientific-topics/national-location-in... [2] https://www.abs.gov.au/statistics/people/population/national... [3] https://www.wool.com/market-intelligence/sheep-numbers-by-st...
* Population numbers are one head per person, so actual numbers may vary for Tasmania ;-)
It's not a bad idea to put data centers here, but we really need a few more links out to the world from here.
kind of like Mississippi but without the tourists part
i hear people online punch down on mississippi all the time, and often they don't know anything about the state except whatever metric they've heard about from a headline. the rest of america isn't very far behind, and if you think the state of mississippi isn't a product of america as a whole then you are extremely mistaken. without the industrialized north you have no plantation economy and without the civil war you have no "dead last (or in the bottom 5 states) in virtually every possible positive metric."
i grew up there, attended public school all the way through my BA, and then spent significant time there as a young adult. based on the stereotypical assumptions, it might be shocking to the big brains on hacker news that somebody from mississippi is part of this website's audience.
Agreed, people usually just say ‘lol MS is full of idiots, they’re bad at school’ instead of taking the time to understand why. It was more isolated than GA, LA, and AL; there was a higher ratio of slaves to freedmen in the antebellum period; and the Delta was undeveloped, so lots of impoverished people from across the south moved there to try to develop the land, to name a few reasons.
> i grew up there, attended public school all the way through my BA, and then spent significant time as a young adult there. based on the stereotypical assumptions, it might be shocking to somebody on hacker news that somebody on a similar enough intelligence level to be an audience of this website is from mississippi.
I can say that I don’t assume everyone from Mississippi is stupid, but the generalization about Mississippi that you related seems to be more common than it should be. I think a lot of it has to do with a lack of exposure to people from Mississippi or Mississippi itself.
Thanks for taking the time to respond, I appreciate the discussion.
https://www.clarionledger.com/story/news/2025/06/11/mississi...
get it? hahahaha
data centers are being built, but people are fighting them. new rezoning in taylor, MS is being fought by locals because they are trying to reclassify agricultural land for heavy industry so they can build an asphalt plant. federal government troops in nearby memphis.
just like anywhere else in america.
And how long has this kind of thing probably been going on?
There’s also plenty more there than sheep and tourists (and not really that many of those)
But the vast majority of Tasmania's power is hydroelectric. Hydro is a much more desirable renewable than solar because it is essentially its own built-in battery.
God knows how a datacentre would do down there...
All of the big players - Nvidia, OpenAI, Oracle, Microsoft - are in insane circular financing agreements that would make Enron executives blush.
However, Zitron seems to have forgotten that Google exists, or that it makes TPUs. He mentions Google only 10 times in the entire article, always in a minor way.
And Google is an advertising company. Mostly in search, and increasingly dependent on YouTube. Everything else is a net money loser, including Waymo, Gemini etc.
No business is going to run workloads on OCI outside of ones running Oracle software. They are a distant fourth in cloud. I’ve been working in cloud consulting for five years, including the first three directly at AWS (Professional Services). No one worried about having talking points for competing against Oracle.
Microsoft, Google and Amazon all have both internal products that can benefit from inference and cloud hosting businesses.
Google also has GCP, and unlike OpenAI, which is dependent on VC funding, and Oracle, which is borrowing money, Google throws off cash like crazy and self-funds its infrastructure, which is already better than everyone else’s.
Only when it comes to their TPUs, and sometimes that one thing may be just the difference that pushes them over the hump.
Per-token cost-wise, TPUs (and specialized processors in general) will beat GPUs every time. The efficiency difference between the two types is never to be ignored, and is likely why they can shotgun it everywhere.
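To make "per-token cost" concrete, here's a back-of-envelope sketch. The hourly rental prices and throughput numbers below are invented for illustration - they are not measured TPU or GPU figures - but they show how a modest edge in throughput per dollar turns into the per-token gap:

    # Back-of-envelope cost per million tokens. All numbers below are
    # hypothetical illustrations, not measured TPU/GPU figures.
    def cost_per_mtok(hourly_cost_usd: float, tokens_per_second: float) -> float:
        tokens_per_hour = tokens_per_second * 3600
        return hourly_cost_usd / tokens_per_hour * 1_000_000

    gpu = cost_per_mtok(hourly_cost_usd=4.00, tokens_per_second=1500)
    asic = cost_per_mtok(hourly_cost_usd=2.50, tokens_per_second=3000)
    print(f"GPU:  ${gpu:.3f}/Mtok")   # ~$0.741 per million tokens
    print(f"ASIC: ${asic:.3f}/Mtok")  # ~$0.231 per million tokens, ~3.2x cheaper

At a fixed quality level, whoever sits lower on that curve can afford to serve free users the others can't.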
> And Google is an advertising company. Mostly in search, and increasingly dependent on YouTube. Everything else is a net money loser, including Waymo, Gemini etc.
1) Each venture should be treated as a (relatively) isolated vertical slice
2) 9 out of 10 times, a venture just doesn't break even. That's just the nature of the business.
In almost all scenarios, a setup with this incentive structure will lead to massive adoption. It's too tempting, and with most jobs and political positions being short-term (<5 yrs), people optimize for their own time in the role, not for the longer term.
Boards will pursue stock buybacks (short term growth, long term may cause trouble if there's a downturn), banks will lend out subprime mortgages (hit your sales numbers in the short term, at the cost of long term risk), etc etc.
This situation is no different. There's money flowing in and there's less red tape since everyone is being pressured to allow it. It might work out in the long term, it might not, but it will 100% benefit those who push it in the short term. People will get promoted for driving a new data center, politicians can promote more jobs being added, everybody wins... for now.
The future economic aspect becomes irrelevant when the short term candy is sweet enough.
Each of the main providers could easily use 10x the compute tomorrow (albeit arguably inefficiently) by using more thinking for certain tasks.
Now - does that scale to the tens of GWs of deals OpenAI is doing? Probably not right now, but the bigger issue, as the article fairly points out, is the huge backlog in power availability worldwide.
Finally, AI adoption outside of software engineering is incredibly limited at work. That is going to change rapidly. Even the Excel agent Microsoft recently launched has the potential to produce hundred-fold increases in token consumption per user. I'm also skeptical of the AI sell-through rate as an indicator that it's unpopular for Microsoft; the later versions of M365 Copilot (or whatever it is called today) are wildly better than the original ones.
It all reminds me of Apple's goal of getting 1% of cell phone market share, which seemed laughably ambitious at one point - a total stretch goal. Now they are up to 20%, and smartphone penetration is probably close to 90% globally among those who have a phone.
One potential wild card for the whole market, though, is someone figuring out a very efficient ASIC for inference (maybe with 1.58-bit weights). GPUs are mostly overkill for inference, and I would not be surprised if 10-100x efficiency gains could be had on very specialised chips.
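For anyone wondering where the odd-looking 1.58 comes from: it's log2(3), i.e. ternary weights in {-1, 0, +1}, as in the BitNet b1.58 work. A quick sketch of the memory arithmetic, using a hypothetical 70B-parameter model:

    import math

    # 1.58 bits per weight = log2(3): each weight is one of {-1, 0, +1}
    bits_ternary = math.log2(3)  # ~1.585
    bits_fp16 = 16

    params = 70e9  # hypothetical 70B-parameter model
    fp16_gb = params * bits_fp16 / 8 / 1e9        # 140.0 GB
    ternary_gb = params * bits_ternary / 8 / 1e9  # ~13.9 GB
    print(fp16_gb / ternary_gb)  # ~10x less weight traffic per token

Since low-batch inference is mostly memory-bandwidth-bound, a ~10x smaller weight footprint translates fairly directly into per-token efficiency - and ternary weights turn matrix multiplies into additions and sign flips, which is exactly the kind of thing a dedicated ASIC does far more efficiently than a general-purpose GPU.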
customer value must eventually flow out of those datacenters in the opposite direction to the energy and capex that are flowing in
do people actually want all this AI? I see studio ghibli portraits, huge amounts of internet spam, workslop... where is the value?
That's true for everyone with regard to any resource.
The question is whether a 10x increase in resources results in a 10x or greater increase in profit.
If it doesn't, then it doesn't make sense to pay for the extra resources. For AI right now, the constraint is profit per resource unit, not the number of resource units.
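As a toy illustration (all numbers invented):

    # 10x the resources only pays off if profit per resource unit holds up.
    units, profit_per_unit = 1_000, 2.00
    profit_now = units * profit_per_unit  # $2,000
    profit_at_10x = (units * 10) * 0.15   # $1,500 if the margin collapses
    print(profit_at_10x > profit_now)     # False -> the 10x buildout loses money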
I find AI agents work very poorly within the Microsoft ecosystem. They can generate great HTML documents (maybe because it's an open format?), but for Word documents the formatting is so poor that I've had to turn it off and just do things manually.
"If we end up misspending a couple of hundred billion dollars, I think that is going to be very unfortunate, obviously. But I actually think the risk is higher on the other side . If you build too slowly and superintelligence is possible in 3 years, but you built it out assuming it is possible in 5 years, then you are out of position on the most important technology."
His assumption is that superintelligence is close; it's just a question of whether it takes 3 or 5 years!
This made me think Zuck sees it as a question of when rather than if - i.e. more a question of 3 vs 5 years than of possible vs impossible.
Pure hucksterism.
* questionable demand for AI products makes the DC investments risky (makes sense)
* DCs being built in “remote” areas with cheap land may become obsolete or be replaced by other DCs making use of said cheap land (questionable; an existing DC can be upgraded more cheaply than a new one can be built)
* financing for DCs used to be from Big Tech but is now spread out among private equity, sovereign wealth funds etc increasing the exposure of the economy to failure of these investments (again, questionable, unless they are being financed by bank loans)
The most salient concern seems to be a lack of demand. I don’t see why that would change in the future.
This is notably very different from the dot-com build out of dark fibre, where digging the holes cost the vast majority of the money, and the fibres and network equipment cost very little in comparison.
We spoke quite a bit about this project, the circular money and such, and the impression I got is that OCI is just grabbing money while the perpetuum mobile is still rolling.
I am looking forward to hearing how the trip went when I see them next.
Oracle Sinks on Report Its Cloud Margins Are Lower Than Expected
Jeran Wittenstein October 7, 2025 at 6:06 PM GMT+2
Oracle Corp. shares tumbled after a report that the software maker’s profit margin in its cloud computing business is lower than many on Wall Street have been estimating.
While Oracle generated roughly $900 million in revenue from the rental of servers powered by Nvidia Corp. chips during the three months ended in August, the company only managed about $125 million in gross profit, the Information reported, citing internal corporate documents.
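A quick sanity check on the margin implied by those quoted figures (assuming the Information's numbers are accurate):

    # Implied gross margin from the report above
    revenue, gross_profit = 900e6, 125e6
    print(f"{gross_profit / revenue:.1%}")  # 13.9%

Roughly 14% gross margin is far below what Wall Street typically expects from a software or cloud business.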
This is the engineering perspective, not the finance perspective. As an engineer holding an MBA, I've made the argument countless times in BigCo to move from cloud deployments to on-prem. When you're a startup, you often simply don't have the cash on hand to make the capital expenditure to build out datacenter capacity, especially with an uncertain (but hopefully high) expected rate of growth. When you're a BigCo, the script flips; you have plenty of cash on hand and you want to improve overall profitability, which is done by using capital expenditures to reduce operating expenditures, i.e. funding datacenter build-outs to reduce cloud bills.
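The break-even arithmetic behind that capex-versus-opex argument is simple. A minimal sketch with made-up numbers (real ones vary enormously by workload):

    # Hypothetical capex-vs-opex break-even; none of these are real prices.
    capex = 500_000          # upfront hardware and build-out
    onprem_monthly = 15_000  # power, space, amortized staff
    cloud_monthly = 40_000   # equivalent cloud bill

    months = capex / (cloud_monthly - onprem_monthly)
    print(months)  # 20.0 -> on-prem wins if the gear lasts longer than that

For a BigCo with predictable load and hardware that lasts 4-5 years, that math is hard to argue with; for a startup with uncertain growth, the cloud premium buys optionality.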
> experts
Companies can hire experts and can still outsource to colos if they prefer. This is a question of political will and risk analysis.
Actually, big companies prefer capex to opex, and cloud pushed them in the opposite direction of what they'd naturally prefer. But the other advantages you cite, plus hype, overruled the liability of switching from on-prem capex to opex.
I started with mainframes and I'm not going back.
We detached this comment from https://news.ycombinator.com/item?id=45510207 and marked it off topic.