Try this on any game console or DRM-protected device and you are DMCAed before you know it.
https://www.usenix.org/system/files/conference/usenixsecurit...
https://informatik.rub.de/veroeffentlichungenbkp/syssec/vero...
Even without the supposed goodwill of AMD seeing things a different way, being a) affiliated with a university and b) outside the USA may have changed some of the equation.
What is your theory of harm? Who is harmed and how? Why should the law protect them by restricting the freedom of others?
AMD *sold* these CPUs to the customers potentially running this tool on their hardware. Does that make you think AMD should be entitled to restrict what the public is allowed to know about their products, or does with them, post-sale?
Also, if AMD is still in control, shouldn't they be liable too? Should users get to sue AMD if an AMD CPU gets compromised by malware, e.g. the next side-channel attack?
I might start to feel some sympathy for AMD and Intel if they voluntarily paid all their customers for the effective post-sale performance downgrades inflicted by the mitigations required to make their CPUs fit for purpose.
If it's more about morals: publishing vulnerability research tooling is business as usual for white-hat vulnerability researchers, whether they work at bigcorps or not, and it has a long history. It seems surprising to see this kind of "not cool" comment on this site.
Certain Japanese video-game companies would take issue with that interpretation of the facts. Of course, there is the arbitrary distinction between 'access' and 'copy' control mechanisms, something arguably made irrelevant by the further integration of general personal-computing concepts into certain video-game systems.
‘In the courts, you will be deemed either innocent or guilty, according to your wealth or poverty.’
Same with apps, aka everything is opensource if you know RE ;-)
https://github.com/github/dmca/blob/master/2025/02/2025-02-2...
So it probably needs to be installed at every system boot.
Perhaps someone more knowledgeable can correct my guesses?
Would it be useful to have a public list of all example keys that could be accidentally used, so that CI/CD could test all publicly released firmware and microcode updates against it?
If there was a public test suite, Linux fwupd and Windows Update could use it for binary screening before new firmware updates are accepted for distribution to endpoints.
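For the cheap version of that check, a sketch like the one below could flag release artifacts that contain the raw bytes of a known example key. Everything here is illustrative: the key list has a single entry (the AES-128 sample key from NIST SP 800-38B, reportedly the key at issue in this research), and a real screener would parse the update container and attempt verification under each listed key rather than just byte-matching.

    # Toy screener: flag firmware images embedding known example keys.
    import sys

    KNOWN_EXAMPLE_KEYS = {
        # AES-128 sample key from NIST SP 800-38B (illustrative entry).
        "nist-sp800-38b-sample": bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c"),
    }

    def scan_firmware(path: str) -> list[str]:
        blob = open(path, "rb").read()
        return [name for name, key in KNOWN_EXAMPLE_KEYS.items() if key in blob]

    if __name__ == "__main__":
        for hit in scan_firmware(sys.argv[1]):
            print(f"REJECT: known example key found: {hit}")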
Using CMAC as both the RSA hashing function and the secure boot key verification function is almost the bigger WTF from AMD, though. That’s arguably more of a design failure from the start than something to be caught.
Plus, it would only help with that one issue, not with the millions of other ways things can go wrong. Vendors publishing their security architecture so others can convince themselves that it is in fact secure would be better; that is how TLS and WPA get enough eyeballs.
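To make the "CMAC is not a hash" point concrete: once the CMAC key is known, a second message with the same tag is trivial to construct, so any signature over the tag transfers to the forgery. A minimal sketch with the Python cryptography package; the key and 16-byte blocks are arbitrary stand-ins, and nothing here mirrors the real update format:

    # With a known key, forge a two-block message whose AES-CMAC equals
    # that of a one-block original. Both messages are full 16-byte
    # blocks, so CMAC's final-block subkey cancels out of the algebra.
    from cryptography.hazmat.primitives.cmac import CMAC
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")  # public example key

    def aes_cmac(msg: bytes) -> bytes:
        c = CMAC(algorithms.AES(KEY))
        c.update(msg)
        return c.finalize()

    def aes_encrypt_block(block: bytes) -> bytes:
        enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    original = b"genuine update!!"             # one full block
    b1 = b"evil payload blk"                   # arbitrary first block
    # Pick b2 so the chaining value re-enters the original's state:
    b2 = bytes(x ^ y for x, y in zip(aes_encrypt_block(b1), original))

    assert aes_cmac(original) == aes_cmac(b1 + b2)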
Is it using different keys but the same scheme (and could it possibly be broken via side channels, as noted in the article)? Or perhaps AMD noticed something and changed up the microcode? Some clarification on that part would be nice.
[1] https://github.com/google/security-research/security/advisor...
It's interesting to think about the sorts of things we could do if we had low level control over our hardware. Unfortunately things seem consistently headed in the opposite direction.
CPU vendors typically gain performance when adding new instructions because they add new, fancy uops. For example, x86 has AES instructions, which lead to uops that (I imagine) exercise some hardware AES block. Vendors are not simply implementing AES in pure ucode, as that wouldn't really gain any performance advantage over doing AES directly in software.
CPUs have no non-volatile memory -- microcode fully resets when the power is cycled. So, in a sensible world, the impact of this bug would be limited to people temporarily compromising systems on which they already had CPL0 (kernel) access. This would break (possibly very severely and maybe even unpatchably) SEV, and maybe it would break TPM-based security if it persisted across a soft reboot, but would not do much else of consequence.
But we do not live in a sensible world. The entire UEFI and Secure Boot ecosystem is a complete dumpster fire in which the CPU, via mechanisms that are so baroque that they should have been disposed of in, well, the baroque era, enforces its own firmware security instead of delegating to an independent coprocessor. So the actual impact is that getting CPL0 access to an unpatched system [0] will allow a complete compromise of the system flash, which will almost certainly allow a permanent, irreversible compromise of that system, including persistent installation of malicious microcode that will pretend to be patched. Maybe a really nice Verified Boot (or whatever AMD calls its version) implementation would make this harder. Maybe not.
(Okay, it's not irreversible if someone physically rewrites the flash using external hardware. Good luck.)
[0] For this purpose, "unpatched" means running un-fixed microcode at the time at which CPL0 access is gained.
That depends on how we define "independent" - AMD's firmware validation is carried out by the Platform Security Processor, an on-die ARM core that boots its firmware before the x86 cores come up. I don't know whether the microcode region of the firmware is included in the region verified by their Platform Secure Boot - skipping it on the basis that the CPU's going to verify it before loading it anyway seems like an "obvious" optimisation, but there's room to implement this in the way you want.
But raw write access to the flash depends on you being in SMM, and I don't know to what extent microcode can patch what SMM transitions look like. I wouldn't bet against it (and honestly would be kind of surprised if this was somehow protected), but I don't think what Google's worked out here gives us a solid answer yet.
As for my guesses about the rest:
As far as I know (and I am not privy to any non-public info here), the Intel ucode patch process sure seems like it can reprogram things other than the ucode patch SRAM. There seem to be some indications that AMD’s is different.
I would bet real money, at fairly strong odds, that this ucode compromise gives the ability to run effectively arbitrary code in SMM, without even a whole lot of difficulty other than reverse engineering enough of the CPU to understand what the uops do and which patch slots do what. I would also bet, at somewhat less aggressive odds, that ucode patches can do things that even SMM can’t, e.g. writing to locked MSRs and even issuing special extra-privileged operations like the “Debug Read” and “Debug Write” operations that Intel CPUs support in the “Red Unlock” state.
Look at tests/stop.sh and check the different segments (ls:, ms:, etc.; you can also address them like 0:[..], 1, 2, 3, ... 15:[...]). One of those is probably flash. If you know what that looks like, try to dump it first with a load, then check which segment and which address it is at, and then write back to it.
I think that, at least on Intel, the “microcode” package includes all kinds of stuff beyond just the actual CPU microcode, and I think it’s all signed together. If AMD is like this, then an unpatched CPU can be made to load all kinds of goodies.
Also, at least on Intel (and I think also on AMD), most of the SPI flash security mechanism is controlled by SMM code. So any ranges that the CPU can write, unless locked by a mechanism outside the control of whatever this bug compromises, can be written. This seems pretty likely to include the entire SPI chip, which includes parts controlling code that will run early after the next power cycle, which can compromise the system again.
https://github.com/amd/AMD-ASPFW/blob/3ca6650dd35d878b3fcbe5...
However, maybe, there is a way. Back when we were researching microcode, we found a talk [1] that ran multiple ISAs in parallel on the same processor using microcode. We never figured out how this worked; our best guess is either swapping microcode in from RAM as needed or branching to an emulator in x86 code. If this was a K10 CPU, which might already have been a bit old at the time of the talk, then there is no way you could fit an ARM interpreter into the update: you had, IIRC, 32 triads of 3 operations each, i.e. fewer than a hundred micro-ops. Maybe, just maybe, you could fit a bytecode interpreter that then executes the actual ISA emulator. However, you would need to hook every instruction, or at least trap on each instruction fetch and hook the appropriate handling routine, and both sound very complicated.
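For what it's worth, the dispatch trick itself is simple; the hard part is the size budget and the hooking. A toy illustration in Python (nothing to do with real microcode, all names made up): keep only a fetch/dispatch loop in the scarce patch space and push all the real logic out into a handler table in ordinary memory.

    # Toy threaded interpreter: the loop is tiny; the handler table,
    # which carries all the real logic, lives elsewhere.
    def run(program, handlers):
        pc, acc = 0, 0
        while pc < len(program):
            op, arg = program[pc]
            pc, acc = handlers[op](pc, acc, arg)  # handler returns next state
        return acc

    handlers = {
        "LOAD": lambda pc, acc, arg: (pc + 1, arg),
        "ADD":  lambda pc, acc, arg: (pc + 1, acc + arg),
        "JNZ":  lambda pc, acc, arg: (arg if acc != 0 else pc + 1, acc),
        "HALT": lambda pc, acc, arg: (1 << 30, acc),  # jump past the end
    }

    # Count down from 3; ends with acc == 0.
    program = [("LOAD", 3), ("ADD", -1), ("JNZ", 1), ("HALT", 0)]
    assert run(program, handlers) == 0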
If your infinite resources include manufacturing new silicon with the proper fast path and microcode decoder, then yes, but note that x86 and ARM have different memory models. Also at that point you just have a very expensive, very inefficient ARM processor.
[1] https://troopers.de/events/troopers16/655_the_chimaera_proce...
"The first question everyone has about microcode updates is something like "So can I execute ARM64 code natively on my Athlon?" It's a fun idea, but now we know that a microcode patch doesn't work like that -- so the answer is no, sorry!"
> We plan to provide additional details in the upcoming months on how we reverse engineered the microcode update process, which led to us identifying the validation algorithms
How does that work? Did someone figure out AMD's private keys?
> Here's the thing - the big vendors encrypt and sign their updates so that you cannot run your own microcode. A big discovery recently means that the authentication scheme is a lot weaker than intended, and you can now effectively "jailbreak" your CPU!
But there are no further details. I'd love to know the specifics too!
I guess this one just slipped through the cracks?
They didn't need a keyed hash at all; they needed a collision-resistant hash.
SHA256 would have eliminated this vuln, and it has a hardcoded "key" built into it.
Using a secret key for CMAC would not have been more secure, it would have just meant sophisticated hardware extraction of the key was required before this attack could be mounted.
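In code, the contrast looks roughly like this (a hedged sketch; the actual verification flow isn't public in full, so this only abstracts the two design choices):

    import hashlib
    from cryptography.hazmat.primitives.cmac import CMAC
    from cryptography.hazmat.primitives.ciphers import algorithms

    KEY = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")  # known/extractable

    def digest_cmac(blob: bytes) -> bytes:
        # Keyed MAC used as a "hash": anyone holding KEY can build a
        # second blob with the same output, so an RSA signature over
        # this value also covers the attacker's blob.
        c = CMAC(algorithms.AES(KEY))
        c.update(blob)
        return c.finalize()

    def digest_sha256(blob: bytes) -> bytes:
        # Collision-resistant and keyless: extracting secrets from
        # the hardware buys an attacker nothing here.
        return hashlib.sha256(blob).digest()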