It has been a bit of a nightmare, and I had to package 30+ deps plus their heavily customized LLVM, but I finally got the runtime to build this morning.
Things are looking bright for high-security workloads on AMD hardware, since they work fully in the open, however much of a mess it may be.
It truly is a nightmare to build the whole thing. I got past the custom LLVM fork and a dozen other packages, but eventually decided it had been too much of a time sink.
I’m using llama.cpp with its Vulkan support and it’s good enough for my uses. Vulkan is already there and just works; it’s probably on your host too, since so many other things rely on it anyway.
That said, I’d be curious to look at your build recipes. Maybe they can help power through the last bits of the Alpine port.
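For reference, the Vulkan path needs none of the ROCm stack at build time. Roughly, assuming a recent llama.cpp checkout with the Vulkan SDK installed (flag names have shifted between versions, and the model path is just a placeholder):

$ cmake -B build -DGGML_VULKAN=ON
$ cmake --build build --config Release -j
$ ./build/bin/llama-cli -m /models/your-model.gguf -ngl 99 -p "hello"

The only extra host dependency is the Vulkan loader/SDK, which as noted is probably already there for other reasons.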
You don't trust Nvidia because the drivers are closed source?
I think Nvidia's pledged to work on the open source drivers to bring them closer to the proprietary ones.
I'm hoping Intel can catch up; at 32GB of VRAM for around $1000 it's very accessible.
...I have a feeling you might not be at liberty to answer, but... Wat? What the hell kind of "I must apparently resist Reflections on Trusting Trust" workloads are you working on?
And what do you mean by "binaries only built using a single compiler"? Like, how would that even work? Compile the .o's with compiler-specific suffixes, then do a tortured linker invocation to mix the different .o's into a combined library/ELF? Are we talking about mixing two different C compilers? The same compiler from two different bootstraps? A regular/cross mix?
I'm sorry if I'm pushing for too much detail, but as someone who's actually bootstrapped compilers/userspaces from source, your use case intrigues me just by the phrasing.
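For what it's worth, the naive shape of what I'm imagining is something like this (purely hypothetical file names, just to show what I mean by mixing objects from two compilers):

$ gcc -c foo.c -o foo.gcc.o
$ clang -c bar.c -o bar.clang.o
$ gcc foo.gcc.o bar.clang.o -o mixed

i.e. objects from two different compilers linked into one binary, which I assume is exactly what a "single compiler only" policy forbids.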
Beyond the fact they're competing with the most valuable companies in the world for talent while being less than a decade past "Bet the company"-level financial distress?
I would love AMD to be competitive. The entire industry would be better off if NVIDIA was less dominant. But AMD did this to themselves. One hundred percent.
It's not ideal. CUDA for comparison still supports Turing (two years older than RDNA 2) and if you drop down one version to CUDA 12 it has some support for Maxwell (~2014).
I am critical of AMD for not fully supporting all GPUs based on RDNA1 and RDNA2. While more backwards compatibility is always better for the consumer, the RX 580 was a lightly updated RX 480, which came out in 2016. Yes, ROCm technically came out in 2016 as well, but I don't mind acknowledging that supporting the GCN architecture is a different beast from supporting the RDNA/CDNA generations that followed (Vega feels like it is off on an island of its own, and I don't even know what to say about it).
As cool as it would be to repurpose my RX 580, I am not at all surprised that GCN GPUs are not supported for new library versions in 2026.
I would be MUCH more annoyed if I had any RDNA1 GPU, or one of the poorly-supported RDNA2 GPUs.
The Vulkan backend of Ollama works fine for me, but it took a year or two for them to officially support it.
A few years ago I thought I had used the ROCm drivers/libraries with hashcat on an RX 580.
Now it’s obsolete?
"Anush's success is due to opting out of internal bureaucracy than anything else. most Claude use at AMD goes through internal infrastructure that can take hundreds of seconds per response due to throttling. Anush got us an exemption to use Anthropic directly. he is also exempt from normal policies on open source and so I can directly contribute to projects to add AMD support. He's an effective leader and has turned ROCm into a internal startup based in California. Definitely worth joining the team even if you've heard bad things about AMD as a whole."
This kind of bullshit is why I don't want to join AMD, even if this particular team is temporarily exempt from it.
It's crazy that this is a big deal.
I understand the need for some kind of governance around this but for it to require a special exemption just shows how far the AMD culture needs to shift.
I don't think this is true. CUDA is a huge advantage for Nvidia, but as far as I can tell it is more a set of R&D libraries than anything else, so all the Hot New Stuff keeps being Nvidia first and only (to start with), since the library ecosystem for the hotness doesn't exist elsewhere yet. Then eventually new libraries are created that are CUDA-independent, and AMD turns out to make pretty good graphics cards.
I wouldn't be surprised if ROCm withered on the vine and AMD still did fine.
Meanwhile nvidia just dropped CUDA/driver support for 1xxx series cards from their most recent drivers this year.
For me ROCm's mayfly lifetime is a dealbreaker.
Seems like they're making some effort in that direction at least. If you have specific concerns, maybe try hitting up Anush Elangovan on Twitter?
https://rocm.docs.amd.com/en/latest/compatibility/compatibil...
It is Nvidia that has the track record of closed drivers and of insisting on doing all software development itself, without community improvements, with the expected results.
The de facto GPU compute platform? With the best feature set?
Also pretty hard to beat a Strix Halo right now in TPS for the money and power consumption.
Even that aside, there exist plenty of people like me who demand high freedom and transparency and will pay double for it if we have to.
The market doesn't care about any of that. The consumer market doesn't care, and the commercial market definitely does not. The consumer market wants the most Fortnite frames per second per dollar. The commercial market cares about how much compute they can do per watt, per slot.
> there exist plenty of people like me who demand high freedom and transparency and will pay double for it if we have to.
The four percent share of the datacenter market and five percent of the desktop GPU market say (very strongly) otherwise.
I have a 100% AMD system in front of me so I'm hardly an NVIDIA fanboy, but you thinking you represent the market is pretty nuts.
I think local power efficient LLMs are going to make those datacenter numbers less relevant in the long run.
LLMs run great on it; it’s happily running gemma4 31b at the moment and I’m quite impressed. For the amount of VRAM you get it’s hard to beat, apart from the Intel cards maybe. But the driver support doesn’t seem to be that great there either.
Had some trouble running ComfyUI, but it’s not my main use case, so I have not spent a lot of time figuring that out yet.
May I ask what kind of tok/s you are getting with the R9700? I assume you got it fully in VRAM?
The model that is currently loaded full time for all workloads on this machine is Unsloth's Q3_K_M quant of Qwen 3.5 122b, which has 10b active parameters. With almost no context usage it will generate 59 tok/sec. At 10,000 input tokens it will prefill at about 1500 tok/sec and generate at 51 tok/sec. At 110,000 input tokens it will prefill at about 950 tok/sec and generate at 30 tok/sec.
Smaller MoE models with 3b active will push 70 tok/sec at 10,000 context. Dense models like Qwen 3.5 27b and Devstral Small 2 at 24b will only generate at around 13 - 15 tok/sec with 10,000 context.
This is all on llama.cpp with the Vulkan backend. I didn't get too far in testing or using anything that requires ROCm, because there is an outstanding ROCm bug where the GPU clock stays at 100% (drawing around 60 watts) even when the model is not processing anything. The issue is now closed, but multiple commenters indicate it is still a problem. Using the Vulkan backend, my per-card idle draw is between 1 and 2 watts with the display outputs shut down and no kernel frame buffer.
$ uname -r
6.8.0-107-generic
$ ollama --version
ollama version is 0.20.2
$ ollama run "gemma4:31b" --verbose "write fizzbuzz in python."
[...]
total duration: 45.141599637s
load duration: 143.633498ms
prompt eval count: 21 token(s)
prompt eval duration: 48.047609ms
prompt eval rate: 437.07 tokens/s
eval count: 1057 token(s)
eval duration: 44.676612241s
eval rate: 23.66 tokens/s

Edit: I misread the "2x r9700" as "2 rx9700", which differs from the topic of this comment (about RDNA4 consumer SKUs). I'll keep my comment up, but anyone looking to get Radeon PRO cards can (should?) disregard.
I really wish the AMD and Intel boards would get replaced by competent people. They could do it in a very short time. Both have integrated GPUs with main memory. AMD and Intel have (or at least used to have) serious know-how in data buses and interconnects, respectively. But I don't see any of that happening.
ROCm? It can't even support decent Attention. It lacks a lot of features and NVIDIA is adding more each year. Soon they will reach escape velocity and nobody will catch them for a decade. smh
It's pretty insane how overpriced NVIDIA hardware is.
Running games on my loaded M4 Max is worse than on my 3090 despite the over-four-year generational gap.
Like, Pacific Drive will reach maybe 30fps at less than 1080p whereas the 3090 will run it better even in 4K.
That could just be CrossOver's issue with Unreal Engine games, but "just play different games" is not a solution I like.
Intel? Agreed. But AMD is making money hand over fist with enterprise AI stuff.
Right now, any effort that AMD or NVIDIA expend on the consumer sector is a waste of money that they could be spending making 10x more at the enterprise level on AI.
Of all the existing examples, it really looks like the most interesting one.
I.e., what I'm surprised about is the lack of backing for it from someone like AMD. It doesn't have to immediately replace ROCm, but AMD would benefit from it advancing and replacing the likes of CUDA.
We've started a company around Rust on the GPU btw (https://www.vectorware.com/), both CUDA and Vulkan (and ROCm eventually I guess?).
Note that most platform developers in the GPU space are C++ folks (lots of LLVM!) and there isn't as much demand from customers for Rust on the GPU vs something like Python or Typescript. So Rust naturally gets less attention and is lower on the list...for now.
> Note: This project is still heavily in development and is at an early stage.
> Compiling and running simple shaders works, and a significant portion of the core library also compiles.
> However, many things aren't implemented yet. That means that while being technically usable, this project is not yet production-ready.
Also, projects like Rust GPU are built on top of projects like CUDA and ROCm; they aren't alternatives, they are abstractions on top.
What I meant more is the language for writing GPU programs themselves, not necessarily the machinery right below it. Vulkan is good to advance for that.
I.e., CUDA and ROCm focus on a C++ dialect as the GPU language. Rust GPU does that with Rust, and it relies on Vulkan without tying itself to any specific GPU type.
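To make that concrete: with Rust GPU, the kernel itself is ordinary-looking Rust that gets compiled to SPIR-V and dispatched through Vulkan, so it isn't tied to one vendor's stack. A rough sketch, loosely modeled on the rust-gpu compute examples (attribute and crate details may differ from the current release):

#![no_std]
use spirv_std::spirv;
use spirv_std::glam::UVec3;

// Doubles every element of a storage buffer, one invocation per element.
#[spirv(compute(threads(64)))]
pub fn double(
    #[spirv(global_invocation_id)] id: UVec3,
    #[spirv(storage_buffer, descriptor_set = 0, binding = 0)] data: &mut [f32],
) {
    let i = id.x as usize;
    if i < data.len() {
        data[i] *= 2.0;
    }
}

The same source can run on any Vulkan-capable GPU, which is the vendor-neutrality point above.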
You could argue the same about CPU architectures, no? Yet compilers solve this pretty well most of the time.
They just don't care enough to compete.