1. TPUs are a serious competitor to Nvidia chips.
2. Chip makers with the best chips are valued at $1-3.5T.
3. Google's market cap is $2T.
4. It is correct for Google not to sell TPUs.
I have heard the whole "it's better to rent them" argument, but if they're actually good, selling them is almost as good a business as every other part of the company.
So, to help you understand how these can all be true: market cap is governed by something other than what a business is worth.
As an aside, here's a fun article that embarrasses wall street. [0]
I guess that means don't take investment advice from me ;) I've done OK buying indices though.
Plenty of companies have screwed up execution, and the market has correctly noticed and penalized them for that.
P.S. I did not have access to the internet in 2006, so I guess the skepticism was normal at the time.
Also, if they are so good, it's best not to level the playing field by sharing them with your competitors.
Also "chip makers with the best chips" == Nvidia, there aren't many others. And Alphabet does more than just produce TPUs.
Google is saving a ton of money by making TPUs, which will pay off in the future when AI is better monetized, but so far no one is directly making a massive profit from foundation models. It's a long term play.
Also, I'd argue Nvidia is massively overvalued.
Google, who make AI chips with barely-adequate software, is worth $2.0T.
AMD, who also make AI chips with barely-adequate software, is worth $0.2T.
Google made a few decisions with TPUs that might have made business sense at the time, but with hindsight haven't helped adoption. They closely bound TPUs to their 'TensorFlow 1' framework (which was kinda hard to use), then they released 'TensorFlow 2', which was incompatible enough that it was just as easy to switch to PyTorch, which has TPU support in theory but not in practice.
They also decided TPUs would be Google Cloud only. Might make sense, if they need water cooling or they have special power requirements. But it turns out the sort of big corporations that have multi-cloud setups and a workload where a 1.5x improvement in performance-per-dollar is worth pursuing aren't big open source contributors. And understandably, the academics and enthusiasts who are giving their time away for free aren't eager to pay Google for the privilege.
Perhaps Google's market cap already reflects the value of being a second-place AI chipmaker?
TPUs very much have software support, which is why SSI etc. use TPUs.
P.S. Google gives access to their TPUs for free at https://sites.research.google/trc/about/, which I've used for the past 6 months now.
Jax has a harsher learning curve than Pytorch in my experience. Perhaps it's worth it (yay FP!) but it doesn't help adoption.
> They don't really use pytorch from what I see on the outside from their research works
Of course not, there is no outside world at Google - if internal tooling exists for a problem, their culture effectively mandates using it before anything else, no matter the difference in quality. This basically explains the whole TF1/TF2 debacle, which understandably left a poor taste in people's mouths. In any case, while they don't use PyTorch, the rest of us very much do.
> P.S. Google gives access to their TPUs for free at https://sites.research.google/trc/about/, which I've used for the past 6 months now.
Right, and in order to use it effectively you basically have to use Jax. Most researchers don't have the advantage of free compute, so they are effectively trying to buy mindshare rather than winning on quality. This is fine, but it's worth repeating as it biases the discussion heavily - many proponents of Jax just so happen to be on TRC or to have been given credits for TPUs via some other mechanism.
Or rather, there would be if TPUs were that good in practice. From the other comments it sounds like TPUs are difficult to use for a lot of workloads, which probably leads to the real explanation: No one wants to use them as much as Google does, so selling them for a premium price as I mentioned above won’t get them many buyers.
You're conflating price with intrinsic value with market analysis. All different things.
If interested in further details:
1) TPUs are a serious competitor to Nvidia chips for Google's needs, but per the article they are not nearly as flexible as a GPU (dependence on precompiled workloads, need for high utilization of the PEs in the systolic array). Thus for broad ML market usage, they may not be competitive with Nvidia GPUs/racks/clusters.
2) Chip makers with the best chips are not valued at $1-3.5T; per other comments to the OC, only Nvidia and Broadcom are worth this much. These are not just "chip makers", they are (the best) "system makers", driving designs for the chips and interconnect required to go from a diced piece of silicon to a data center consuming MWs. This part is much harder; it is why Google (who design the TPU) still has to work with Broadcom to integrate their solution. Indeed every hyperscaler is designing chips and software for their needs, but every hyperscaler works with companies like Broadcom or Marvell to actually create a complete, competitive system. Side note: Marvell has deals with Amazon, Microsoft, and Meta to mostly design these systems, and they are worth "only" $66B. So, you can't just design chips to be valuable, you have to design systems. The complete systems have to be the best, wanted by everyone (Nvidia, Broadcom), in order to be worth trillions; otherwise you're in the billions (Marvell).
4) I see two problems with selling TPUs: customers and margins. If you want to sell someone a product, it needs to match their use; currently the use only matches Google's needs, so who are the customers? Maybe you want to capture hyperscalers / big AI labs, whose use case is likely similar to Google's. If so, margins would have to be thin, otherwise they would just work directly with Broadcom/Marvell (and they all do). If Google wants everyone currently on CUDA/Nvidia as a customer, then you massively change the purpose of the TPU and even of Google.
To wrap up: even if the TPU is good (and it is good for Google), it wouldn't be "almost as good a business as every other part of their company", because the value add isn't FROM Google in the form of a good chip design (TPU). Instead the value add is TO Google, in the form of specific compute that is cheap and fast, FROM relatively simple ASICs (the TPU chip) stitched together into massively complex systems (TPU superpods).
Sorry that got a bit long-winded, hope it's helpful!
https://www.tomshardware.com/tech-industry/artificial-intell...
"Nvidia to consume 77% of wafers used for AI processors in 2025: Report...AWS, AMD, and Google lose wafer share."
My take is that "sell access to TPUs on Google Cloud" is a nice side effect.
This is correct but misstated - it's not the caches themselves that cost energy but the MMUs that automatically load/fetch/store to cache on "page faults". TPUs don't have MMUs, and furthermore they are a push architecture (as opposed to pull).
For a company of the size of Google, the development costs for a custom TPU are quickly recovered.
Comparing a Google TPU with an FPGA is like comparing an injection-moulded part with a 3D-printed part.
Unfortunately, the difference in performance between FPGAs and ASICs has greatly increased in recent years, because FPGAs have remained stuck on relatively ancient CMOS manufacturing processes, which are much less efficient than the state-of-the-art CMOS manufacturing processes.
While common folk wisdom, this really isn't true. A surprising number of products ship with FPGAs inside, including ones designed to be "cheap". A great example of this is that Blackmagic, a brand known for being a "cheap" option in cinema/video gear, bases everything on Xilinx/AMD FPGAs (for some "software heavy" products they use the Xilinx/AMD Zynq line, which combines hard ARM cores with an FPGA). Pretty much every single machine vision camera on the market uses an FPGA for image processing as well. These aren't "one in every pocket" level products, but they are widely produced.
> Unfortunately, the difference in performance between FPGAs and ASICs has greatly increased in recent years, because FPGAs have remained stuck on relatively ancient CMOS manufacturing processes
This isn't true either. At the high end, FPGAs are made on whatever the best available process is. Particularly for the top-end models that combine programmable fabric with hard elements, it would be insane not to produce them on the best process available. The big hindrance with FPGAs is that, almost by definition, the cell structures needed to provide programmability are inherently more complex and less efficient than the dedicated circuits of an ASIC. That often means a big hit to maximum clock rate, with resulting consequences for any serial computation being performed.
While it is true that cheap and expensive FPGAs exist, an FPGA system to replace a TPU would not use a $0.50 or even $100 FPGA; it would use a Versal or UltraScale+ FPGA that costs thousands, compared to the (rough guess) $100/die you might spend for the largest chip on the most advanced process. Furthermore, the overhead of an FPGA means every single one may support a few million logic gates (maybe 2-5x that if you use hardened blocks), compared to billions of transistors on the largest chips in the most advanced node --> cost per chip to buy is much, much higher.
To the second point, AFAIK leading-edge Versal FPGAs are on 7nm - not ancient, but also not the cutting edge used for ASICs (N3).
While TSMC 7 nm is much better than what most FPGAs use, it is still ancient in comparison with what the current CPUs and GPUs use.
Moreover, I have never seen such FPGAs sold for less than thousands of $, i.e. they are much more expensive than GPUs of similar throughput.
Perhaps they are heavily discounted for big companies, but those are also the companies which could afford to design an ASIC with better energy efficiency.
I always prefer to implement an FPGA solution over any alternatives, but unfortunately much too often the FPGAs with high enough performance have also high enough prices to make them unaffordable.
The highest-performance FPGAs that are still reasonably priced are the AMD UltraScale+ families, made with a TSMC 16 nm process, which is still good in comparison with most FPGAs, but nevertheless it is a decade-old manufacturing process.
FPGAs kind of sit in this very niche middle ground. Yes, you can optimize your logic so that the FPGA does exactly the thing your use case needs, so your hardware maps more precisely to your use case than a generic TPU or GPU would. But what you gain in logic efficiency, you'll lose several times over in raw throughput to a generic TPU or GPU, at least for AI stuff, which is almost all matrix math.
Plus, getting that efficiency isn't easy; FPGAs have a higher learning curve and a way slower dev cycle than writing TPU or GPU apps, and take much longer to compile and test than CUDA code, especially when designs get dense and you have to start working around timing constraints and such. It's easy to get to a point where even a tiny change exceeds some timing constraint and you've got to rewrite a whole subsystem to get it to synthesize again.
Solving AX=B can be done with Newton's method to invert A, which boils down to matmuls.
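To make "boils down to matmuls" concrete, here's a minimal JAX sketch (my own illustration, not anything from the article) of the Newton-Schulz iteration for A^-1; apart from the starting guess, the loop is nothing but matrix multiplies:

    import jax.numpy as jnp

    def newton_schulz_inverse(A, iters=30):
        # Matmul-only approximation of A^-1.
        # Starting guess X0 = A^T / (||A||_1 * ||A||_inf) guarantees convergence.
        n = A.shape[0]
        X = A.T / (jnp.linalg.norm(A, 1) * jnp.linalg.norm(A, jnp.inf))
        I = jnp.eye(n, dtype=A.dtype)
        for _ in range(iters):
            X = X @ (2.0 * I - A @ X)   # quadratic convergence once close
        return X

    # Solve AX = B using the matmul-only approximate inverse
    A = jnp.array([[4.0, 1.0], [2.0, 3.0]])
    B = jnp.array([[1.0], [2.0]])
    print(newton_schulz_inverse(A) @ B)   # ~ jnp.linalg.solve(A, B)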
Matrix exponential is normally done with matmuls - the scale-down, Taylor/Padé, and square approach.
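A hedged sketch of that scale-down/Taylor/square idea (fixed scaling and a truncated Taylor series for brevity; a real implementation such as jax.scipy.linalg.expm picks the scaling and a Padé order from the norm of A):

    import jax.numpy as jnp

    def expm_scale_and_square(A, s=10, terms=12):
        As = A / (2.0 ** s)                  # scale down so the series converges quickly
        term = jnp.eye(A.shape[0], dtype=A.dtype)
        acc = term
        for k in range(1, terms + 1):        # truncated Taylor series for expm(A / 2^s)
            term = term @ As / k
            acc = acc + term
        for _ in range(s):                   # square s times: expm(A) = expm(A/2^s)^(2^s)
            acc = acc @ acc
        return acc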
Why do you need Cholesky? It's typically a means to an end, and when matmul is your primitive, you reach for it much less often.
Eigendecomposition is hard. If we limit ourselves to symmetric matrices, we could use a blocked Jacobi algorithm where we run a non-matmul Jacobi on the 128x128 off-diagonal blocks and then use the matmul unit to apply the rotations to the whole matrix - for large enough matrices, still bottlenecked on matmul.
SVD we can get from Polar decomposition, which has purely-matmul iterations, and symmetric eigendecomposition.
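Here's what that route looks like as a sketch (mine, with jnp.linalg.eigh standing in for the blocked-Jacobi symmetric eigensolver described above); the polar factor comes from a Newton-Schulz style iteration that is purely matmuls:

    import jax.numpy as jnp

    def polar_factor(A, iters=50):
        # Orthogonal polar factor U of A = U H via a matmul-only
        # Newton-Schulz iteration (assumes A square and full rank).
        # Frobenius-norm scaling puts all singular values in (0, 1],
        # inside the iteration's convergence region (0, sqrt(3)).
        X = A / jnp.linalg.norm(A)
        for _ in range(iters):
            X = 1.5 * X - 0.5 * (X @ X.T @ X)
        return X

    def svd_via_polar(A):
        U = polar_factor(A)
        H = U.T @ A                      # symmetric positive semidefinite factor
        w, V = jnp.linalg.eigh(H)        # stand-in for the symmetric eigensolver
        return U @ V, w, V               # A = (U V) diag(w) V^T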
One does have to watch out for numerical stability and precision very carefully when doing all these!
If so, wild. That seems like overkill.
[0]: https://henryhmko.github.io/posts/tpu/images/tpu_tray.png
This is not the only way though. TPUs are available to companies operating on GCP as an alternative to GPUs with a different price/performance point. That is another way to get hands-on experience with TPUs.
Also greedy sampling considered harmful: https://arxiv.org/abs/2506.09501
From the abstract:
"For instance, under bfloat16 precision with greedy decoding, a reasoning model like DeepSeek-R1-Distill-Qwen-7B can exhibit up to 9% variation in accuracy and 9,000 tokens difference in response length due to differences in GPU count, type, and evaluation batch size. We trace the root cause of this variability to the non-associative nature of floating-point arithmetic under limited numerical precision. This work presents the first systematic investigation into how numerical precision affects reproducibility in LLM inference. Through carefully controlled experiments across various hardware, software, and precision settings, we quantify when and how model outputs diverge. Our analysis reveals that floating-point precision—while critical for reproducibility—is often neglected in evaluation practices."
It’s nitpicking for sure, but it causes real challenges for reproducibility, especially during model training.
TPUs share a similar lineage to the Groq TPU accelerators (disclaimer: I work at Groq), which are actually fully deterministic: not only do you get deterministic output, you get it in a deterministic number of cycles.
There is a trade-off though: making the hardware deterministic means you give up HW-level scheduling and other sources of non-determinism. This makes the architecture highly dependent on a "sufficiently smart compiler". TPUs and processors like them are generally considered VLIW and are all similarly dependent on the compiler making all the smart scheduling decisions upfront to ensure good compute/IO overlap, eliminate pipeline bubbles, etc.
GPUs, on the other hand, have very sophisticated scheduling systems on the chips themselves, along with things like kernel swapping, that make them much more flexible, less dependent on the compiler, and generally easier to drive to fairly high utilisation without too much work.
TLDR: TPUs MAY have deterministic cycle guarantees. GPUs (of the current generation/architectures) cannot because they use non-deterministic scheduling and memory access patterns. Both still produce deterministic output for deterministic programs.
The opposite situation is with AMD, which is avoiding Google's mistakes.
My hope though is that AMD doesn’t start to compete with cloud service providers, e.g. by introducing their own cloud.