Not sure how this works for larger data structures, but my first thought was that this should be implemented in microcode or as a dedicated instruction.
Most computation is not that jitter-sensitive, and perception doesn't really operate on the nano- to microsecond scale, but this could be a cool gadget for things like dtrace or interrupt handlers.
* Video [2]
1. https://x.com/lauriewired/status/2041566601426956391 (https://xcancel.com/lauriewired/status/2041566601426956391)
From a narrative standpoint, I agree it makes more sense to focus on a duplicated lookup table and fastest-wins. From an engineering standpoint, though, framing it in terms of channel-de-correlated reads opens up more possibilities. For example, if you need to evaluate multiple parallel ML models to get a result, then by intentionally partitioning your models by channel you could ensure that a model does reads on only fast data or only slow data. ML models might not be that interesting here, though, since they are good candidates for being resident in L3.
But otherwise, nice work tying all the concepts together. You might want to get some better model trains though.
But practically speaking, in a real application, isn't any performance benefit going to be lost to the reduced cache hit rate caused by the larger working set? Or are the reads of all but one of the replicas uncached?
Apologies if I am missing something.
Additionally, you are going to be memory-starving every other thread/process because you are hogging all the memory channels, and making an already bad L3 cache situation worse.
Outside of extremely niche realtime use cases (which would generally fit in L3 cache) I can’t see how this would improve overall throughput, once you take into account other processes running on the same box.
Do you have an example use case?
The one that comes to mind is HPC, where you avoid over-allocating the physical cores. If the process has the whole node to itself for a brief period, inefficient memory access might have a bigger impact than memory starvation.
IBM also has RAIM, their RAID-like memory for mainframes, which might be able to do something similar. This feels like software-implemented RAID-1.
OT: Tail Slayer. Not Tails Layer. My brain took longer to parse that than I’d have wanted.
Also, having sacrificed my own mental health to watch the disgustingly self-promoting hour-long video that announces this small git commit, I can confidently say that "Graviton doesn't have any performance counters" is one of the wrongest things I've heard in a long time.
Overall, I give it an F.
Anyway, if you want to hide memory-refresh latency, IBM zEnterprise is your platform. It completely hides refresh latency by steering loads to the non-refreshing bank, and it only costs half the space, not up to 92% of your space like this technique.
No, you did not.
So I give your comment an F on all fronts.
The clflush is there because the technique targets data that will miss the cache anyway. If your working set fits in L1, you don’t need this.
Also, AWS Graviton instances absolutely do not expose per-channel memory-controller PMU counters. That's why you have to use timing-based channel discovery.
The IBM z-system is neat! But my technique works on commodity hardware in userspace, and you can sacrifice only half the space if you accept 2-way instead of 8+-way hedging. It's entirely up to you how many channel copies to use.
Your reply was quite rude, but I hope this is informative.
Being competent requires being knowledgeable AND getting things done. You might be knowledgeable, but you need to learn how to work with other people.