For anyone who doesn't follow AMD at all (good move; their consumer support for compute leaves scars), they appear to have a strategy of targeting the server market in hopes of scooping up the high-profit part of the GPGPU world. Hopefully that does well for them, but based on my years of regret as an AMD customer watching the AI revolution zoom by, I'd be hesitant about that translating into good compute experiences on consumer hardware. I assume the situation is much improved from what I was used to, but I don't trust them to see supporting small users as a priority.
Clearly there are workloads AMD wins at, and just going Nvidia by default for everything without considering AMD is suboptimal.
Consumers, open-source projects, and smaller companies unfortunately can't afford this, so they would be fully dependent on AMD and other providers to solve this implementation gap. So, ironically, smaller companies may prefer to use Nvidia just so they don't have to worry about odd GPU driver issues in their workloads.
When looking at market cap, there are three main pillars of valuation - revenue, profit margin, and net income. If all three are growing, you are an industry darling. If two are growing, you are still likely to be valued highly. If only one is growing, you are much riskier. If none are, it's a red flag.
As of the latest earnings report, AMD's revenue, profit margin, and net income are all increasing. For Intel, all three are decreasing. If analysts assume the trends hold, AMD can grow into its valuation, while Intel is heading towards being worth nothing unless it changes its business. Simply put, a business losing all three of revenue, profit margin, and net income is headed down the wrong path for investors, and will be punished in an outsized way when it comes to predicting its future value (i.e., market cap).
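A minimal sketch of that heuristic as code (the category labels and boolean inputs just illustrate the rule above; this is not a real valuation model):

    # Toy classifier for the three-pillar heuristic above.
    def pillar_outlook(revenue_growing: bool, margin_growing: bool,
                       net_income_growing: bool) -> str:
        growing = sum([revenue_growing, margin_growing, net_income_growing])
        return {
            3: "industry darling",
            2: "likely valued highly",
            1: "much riskier",
            0: "red flag",
        }[growing]

    print(pillar_outlook(True, True, True))     # AMD, per the latest report
    print(pillar_outlook(False, False, False))  # Intel, per the latest report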
For datacenter GPUs, they went from roughly $500M-750M for full-year 2023 (I can't find proper numbers) to $4.5B+ for full-year 2024. In GPUs, it's almost like they're entering a new market.
The current Instinct line of products is relatively new too; I found this article [1] on the MI100 launch from Nov 2020, which is basically the start of 2021.
To go from the MI100 in 2021 to $4.5B+ of MI300X + MI250X in 2024 is great. They are doing just fine.
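A quick sanity check on that growth rate, using the rough figures above (the 2023 range is the loose estimate from this comment, not an official number):

    # Year-over-year growth multiple, from the ranges quoted above.
    rev_2023_low, rev_2023_high = 0.5e9, 0.75e9  # rough full-year 2023 estimate
    rev_2024 = 4.5e9                             # full-year 2024 figure
    print(rev_2024 / rev_2023_high, rev_2024 / rev_2023_low)  # 6.0 9.0, i.e. ~6-9x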
On the MI355X, I can't find endnotes for the slides they show, so it is not clear if the 9.2PF of FP6 and FP4 is sparse or not (all the other numbers on that slide were non-sparse). If it isn't, they're exceeding GB200's sparse FP6/4 numbers with non-sparse flops (!). They both have the same memory bandwidth though. AMD is doing just fine.
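For context, Nvidia's "sparse" figures assume 2:4 structured sparsity, which is conventionally quoted at 2x the dense throughput. A small sketch of normalizing a quoted figure to a dense-equivalent basis so the comparison is apples-to-apples (the 2x factor is the standard convention; 9.2 PF is the slide number from above, and whether it's sparse is exactly the open question):

    # Nvidia "sparse" TFLOPS assume 2:4 structured sparsity, quoted at 2x dense.
    SPARSITY_FACTOR = 2.0

    def dense_equivalent_pflops(quoted_pflops: float, is_sparse: bool) -> float:
        # Normalize a quoted figure to dense flops for comparison.
        return quoted_pflops / SPARSITY_FACTOR if is_sparse else quoted_pflops

    print(dense_equivalent_pflops(9.2, is_sparse=False))  # 9.2 PF dense
    print(dense_equivalent_pflops(9.2, is_sparse=True))   # 4.6 PF dense-equivalent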
[1] https://www.servethehome.com/amd-radeon-instinct-mi100-32gb-...
[1] https://www.techpowerup.com/gpu-specs/radeon-instinct-mi25.c...
But what difference does it make? Nvidia also shipped the same _architecture_ for its datacenter and consumer cards for quite a few generations back then (e.g. Pascal), though typically not the same die. Whether they reuse the same architecture or not, they had a product they marketed as enterprise/datacenter cards. Buyers don't care if it's a rebranded consumer card or not as long as it works well - see the Nvidia L40S (uses AD102, the same die as the RTX 4090 [2]), which is very popular for inference.
Not to mention, with GCN, AMD made an explicit bet on unifying their architecture for compute & graphics. They bet on being able to supply both the consumer and datacenter markets with the same silicon by building graphics hardware that was quite compute-heavy (which is why AMD consumer cards held up well against their Nvidia counterparts in compute until around the Ampere generation).
Anyway, I don't understand what you want from me or what you're arguing about. They were trying to win the datacenter CPU market, not the GPU market. They did well at that. They've recently started trying to win the GPU market as well, because now they can afford to. They seem to be doing well now.
A quick thing to show the sheer scale of these figures: a petaflop is 10^15 operations per second, and if you sit a foot from your screen, light takes about a nanosecond to reach you. That means that between the light leaving your screen and it hitting your eyeballs, one of these things can do another million calculations.
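A back-of-the-envelope check of that claim (constants rounded; one foot is almost exactly one light-nanosecond):

    # Operations completed while light crosses ~1 foot.
    C = 2.998e8        # speed of light, m/s
    DISTANCE = 0.3048  # one foot, in meters
    FLOPS = 1e15       # 10^15 operations per second (one petaflop)

    travel_time = DISTANCE / C           # ~1.02e-9 s, about a nanosecond
    ops_in_transit = FLOPS * travel_time
    print(f"{travel_time:.2e} s, {ops_in_transit:.2e} ops")  # ~1e-09 s, ~1e+06 ops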
I know this isn't particularly constructive, but seeing this, I'm hit with waves of nostalgia for older performance figures.