2,000 bytes over a commodity network in 5 us is 400 MB/s (3.2 Gbps). Far faster than commodity residential speeds.
1,000,000 bytes from memory in 741 ns (a ridiculous number of significant figures) is 1.35 TB/s. Modern DDR5 at 4800 MT/s is ~80 GB/s in a dual-channel configuration. Only a full stack of HBM3e (the memory used on $10,000 GPUs to saturate thousands of processing units) can achieve that.
1,000,000 bytes from SSD in 12.245 us (a ridiculous number of significant figures) is ~82 GB/s. That is main-memory speed from an SSD. Actual NVMe SSDs are closer to 1-3 GB/s. Even PCIe 5.0 x16 has insufficient bandwidth to support that, and most drives only use x2 or x4 links. For an x4 link to support it, you would need a hypothetical PCIe 8.0 (expected to be finalized in 2028, with first implementations likely no earlier than 2030). And that is ignoring the fact that the SSD itself is nowhere near being able to handle that level of bandwidth.
Disk seek in 1.649384 ms (a ridiculous number of significant figures) means we now have 36,000 RPM HDDs. Normal HDDs spin at 5,400 or 7,200 RPM, resulting in ~10 ms average seek times.
Read 1,000,000 bytes from HDD in 358.968 us (a ridiculous number of significant figures) is ~3 GB/s. That is NVMe SSD speed from a SATA HDD. Actual HDDs are closer to 300 MB/s.
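The throughput arithmetic above is easy to reproduce. A quick sanity-check script, using the byte counts and latencies quoted in the page under discussion (decimal GB, 1 GB = 10^9 bytes):

```python
# Sanity-check the throughput figures quoted in this thread.
# Sizes and latencies are the page's own numbers, not measurements.

def throughput_gbps(nbytes: float, seconds: float) -> float:
    """Implied throughput in GB/s (decimal gigabytes)."""
    return nbytes / seconds / 1e9

# 2,000 bytes over the network in 5 us -> 0.4 GB/s, i.e. 3.2 Gbps
net = throughput_gbps(2_000, 5e-6)

# 1,000,000 bytes from memory in 741 ns -> ~1,350 GB/s (~1.35 TB/s)
mem = throughput_gbps(1_000_000, 741e-9)

# 1,000,000 bytes from SSD in 12.245 us -> ~81.7 GB/s
ssd = throughput_gbps(1_000_000, 12.245e-6)

# 1,000,000 bytes from HDD in 358.968 us -> ~2.8 GB/s
hdd = throughput_gbps(1_000_000, 358.968e-6)

print(f"net: {net:.2f} GB/s ({net * 8:.1f} Gbps)")
print(f"mem: {mem:.0f} GB/s")
print(f"ssd: {ssd:.1f} GB/s")
print(f"hdd: {hdd:.2f} GB/s")
```

Every one of these implied rates lands one to two orders of magnitude above the real hardware it claims to describe.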
Every single one of these numbers is wildly incorrect with no basis in reality. This page is unequivocal anti-knowledge.
[1] https://news.ycombinator.com/item?id=47196505
[2] https://brenocon.com/dean_perf.html
[3] https://www.cs.cornell.edu/projects/ladis2009/talks/dean-key...
https://github.com/donnemartin/system-design-primer?tab=read...
http://ithare.com/infographics-operation-costs-in-cpu-clock-...
Shouldn't this be 5µs?
That said, all those numbers feel a bit off by 1.5-2 orders of magnitude -- that disk read speed translates to about 3 GB/s which is well outside the range of what HDDs can achieve.
https://brenocon.com/dean_perf.html indicates the original set of numbers were more like 10us, 250us, and 30ms.
And it links to https://github.com/colin-scott/interactive_latencies which seems like it extrapolates progress from 14 years ago:
// NIC bandwidth doubles every 2 years
// [source: http://ampcamp.berkeley.edu/wp-content/uploads/2012/06/Ion-stoica-amp-camp-21012-warehouse-scale-computing-intro-final.pdf]
// TODO: should really be a step function
// 1Gb/s = 125MB/s = 125*10^6 B/s in 2003
which means that in 2026 we'll have seen 11 doublings since gigabit speeds in 2003, so we'll all have > terabit speeds available to us.

That’s PCIe 3.0 x4 or PCIe 4.0 x2, which a decent commodity M.2 NVMe SSD can use and can possibly saturate, at least for reads.
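The doubling extrapolation is easy to check directly. A minimal sketch of the assumption the linked snippet encodes (the function name and defaults here are illustrative, not the snippet's actual code):

```python
# Extrapolate NIC bandwidth per the interactive_latencies assumption:
# 1 Gb/s in 2003, doubling every 2 years.

def nic_gbps(year: int, base_gbps: float = 1.0,
             base_year: int = 2003, doubling_years: int = 2) -> float:
    """Bandwidth implied by naive doubling from the 2003 baseline."""
    doublings = (year - base_year) // doubling_years
    return base_gbps * 2 ** doublings

print(nic_gbps(2026))  # 11 doublings -> 2048 Gb/s, i.e. > 2 Tb/s
```

Which is exactly why the snippet's own TODO ("should really be a step function") matters: the exponential assumption sails past reality.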
> which means that in 2026 we'll have seen 11 doublings since gigabit speeds in 2003, so we'll all have > terabit speeds available to us.
We’re not that far off. 100GbE hardware is not especially expensive these days. Little “AI” boxes with 400-800 Gbps of connectivity are a thing.
That being said, all the connections over 100Gbps are currently multi-lane AFAIK, and the heroic efforts and multiplexing needed to exceed 100Gbps at any distance are a bit in excess of the very simple technology that got us to 100Mbps “fast Ethernet”.
Given that there's a separate item for sequential disk reads vs SSD reads, I think it's pretty clear that particular item meant hard drives specifically. Agreed that modern SSDs should be able to pull that off.
> That being said, all the connections over 100Gbps are currently multi-lane AFAIK, and the heroic efforts and multiplexing needed to exceed 100Gbps at any distance are a bit in excess of the very simple technology that got us to 100Mbps “fast Ethernet”.
Yeah. Terabit networking is not here yet, and it's certainly not "commodity network"-grade. We can LACP a bunch of 100G optics together, but we're probably 5-10 years out for 800G ethernet to become widely adopted and for 1600G to even be developed.
So I guess it's a typo but it makes me doubt the other numbers.
> Productivity soars when a computer and its users interact at a pace (<400ms) that ensures that neither has to wait on the other.