Also one of my favourite kernel patch messages: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
As you might've guessed, it lacked November, but no one noticed for 4+ months, and I've since left the company. It spawned a local meme, #nolognovember, and even made it out to the public (it was in Russia: https://pikabu.ru/story/no_log_november_10441606)
That hardware real-time clocks keep time as date and time drives me batty. And no one does the right thing, which is just a 64-bit counter counting 32 kHz ticks. Then use canned, tested code to convert that to butt-scratching monkey time.
Story: my old boss designed an STD Bus RTC card in 1978 or something. It kept time as YY:MM:DD HH:MM:SS plus 1/60 sec, was battery backed, and had shadow registers that latched the time. A couple of years later he redesigned it as a 32-bit seconds counter with a 32 kHz sub-seconds counter, plus a 48-bit offset register. What was a whole card was now a couple of 4000-series ICs on the processor card. He wrote 400 bytes of Z80 assembly to convert that to date and time. He said it was tricky to get right, but once done it was done.
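For what it's worth, here's a minimal sketch of that "raw counter plus canned conversion" approach in C, not the original Z80 code; the counter values, the offset register stand-in, and the assumption of a Unix-style epoch are all made up for illustration:

```c
/* Sketch only: convert a raw RTC counter (32-bit seconds + 32.768 kHz
 * sub-seconds + offset register) to calendar date/time with canned,
 * tested library code. Values and epoch are hypothetical. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    uint32_t seconds    = 1700000000u; /* raw 32-bit seconds counter         */
    uint16_t subseconds = 16384;       /* 32768 ticks per second -> 0.5 s    */
    int64_t  offset     = 0;           /* stand-in for the 48-bit offset reg */

    time_t t = (time_t)((int64_t)seconds + offset);
    struct tm *tm = gmtime(&t);        /* the canned, tested conversion      */

    printf("%04d-%02d-%02d %02d:%02d:%02d + %u/32768 s\n",
           tm->tm_year + 1900, tm->tm_mon + 1, tm->tm_mday,
           tm->tm_hour, tm->tm_min, tm->tm_sec, (unsigned)subseconds);
    return 0;
}
```

The conversion math only has to exist once, in tested library code, instead of in every piece of hardware.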
On a QWERTY keyboard, the O key is also next to the I key. It's also possible someone accidentally fat-fingered "GenuineIontel", noticed something was off, moved their cursor between the "o" and "n", and accidentally hit Delete instead of Backspace.
Maybe an unlikely set of circumstances, but I imagine a random hardware-level bit flip is unlikely here, since it would probably have caused other problems if something more important had been flipped.
That is not the workaround in the documentation that was just linked.
> Workarounds:
> The solution to this problem is to put two instructions that do not require write back data after the mul instruction.
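If I'm reading that right, the workaround a compiler (or programmer) would apply looks roughly like this hypothetical RISC-V GCC inline-asm sketch: pad each mul with two instructions that write nothing back, nops being the simplest choice. The function name is made up.

```c
/* Hypothetical sketch of the documented workaround: follow each mul
 * with two instructions that don't write back a result (plain nops). */
static inline long mul_with_workaround(long a, long b) {
    long result;
    __asm__ volatile (
        "mul %0, %1, %2\n\t"
        "nop\n\t"
        "nop"
        : "=r"(result)
        : "r"(a), "r"(b));
    return result;
}
```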
This seems reasonable for your compiler vendor to implement without getting rid of multiplication altogether.

"GenuineIotel" is definitely odd, but difficult to research more about; I suspect these CPUs might actually end up being collector's items sometime in the future.
Inserting no-op instructions after the mul instructions prevents the issue.
The early 386s were extremely buggy and needed the same workaround: https://devblogs.microsoft.com/oldnewthing/20110112-00/?p=11...
> For example, there was one bug that manifested itself in incorrect instruction decoding if a conditional branch instruction had just the right sequence of taken/not-taken history, and the branch instruction was followed immediately by a selector load, and one of the first two instructions at the destination of the branch was itself a jump, call, or return.
Even if you write up a comprehensive test plan for the branch predictor, and for selector loads, and so on, it might easily not include that particular corner case. And pre-silicon testing is expensive and slow, which also limits how much of it you can do.
Though the bugs we were looking to catch there were definitely not of the multiple-interacting-subsystems type, but more of the "corner cases in input data values in floating point instructions" variety.
I disagree. Misspelling a name in the CPUID is kind of easy to do, somewhat awkward to test (in a non-tautological way), and pretty easy to work around.
Having `mul ...; lw ...;` fail shows that they've done very little testing of the chip. Any basic randomised pipeline testing would hit that trivial case.
Essentially all CPUs are pipelined today. In-order pipelined CPU execution semantics are not particularly hard to test. Even some open source testing systems could detect this bug, e.g. TestRig or RISCV-DV.
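To give a sense of how little it would take, here's a toy sketch of randomised instruction-sequence generation, emphatically not TestRig or RISCV-DV, with all register choices invented: it just emits random mul-then-load pairs, which is already enough to stumble onto a mul/writeback hazard.

```c
/* Toy randomised test generator (illustration only). A real generator
 * would also randomise operand values and check results against a
 * reference model; here x10 is reserved as a scratch-buffer base. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    srand(1); /* fixed seed so the generated sequence is reproducible */
    for (int i = 0; i < 8; i++) {
        int rd  = 5 + rand() % 5;  /* mul destination, x5..x9  */
        int rs1 = 5 + rand() % 5;  /* mul sources, x5..x9      */
        int rs2 = 5 + rand() % 5;
        printf("    mul x%d, x%d, x%d\n", rd, rs1, rs2);
        printf("    lw  x%d, 0(x10)\n", 5 + rand() % 5);
    }
    return 0;
}
```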
Microcode errata rewrites to GPRs, compiling low-level "mul" and "output", RISC-V CPU to system architecture.