Few distributed filesystems/object stores seem to use Raft (or consensus at all) for replicating data, because it's unnecessary overhead. Chain replication is one popular alternative: consensus is still used to manage membership, but the data path itself stays outside of consensus.
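
To make that split concrete, here's a rough Rust sketch of the idea, with made-up names and types (not minikv's actual API): the ordered chain comes from a consensus-backed membership service, while writes just get forwarded head to tail with no per-write quorum round.

    // Hypothetical sketch of chain replication's control/data split.
    // Names and types are illustrative only; this is not minikv's API.

    #[derive(Clone, Debug)]
    struct ChainConfig {
        // Ordered replica list, published by a consensus-backed membership service.
        replicas: Vec<String>,
        // Bumped on every reconfiguration so stale chains can reject writes.
        epoch: u64,
    }

    // Models the write path only: client -> head -> ... -> tail, no quorum round.
    fn write_path(cfg: &ChainConfig, key: &str, value: &[u8]) -> Result<(), String> {
        let head = cfg.replicas.first().ok_or("empty chain")?;
        let tail = cfg.replicas.last().ok_or("empty chain")?;

        // 1. Client sends the write to the head.
        println!("send {key} ({} bytes, epoch {}) to head {head}", value.len(), cfg.epoch);

        // 2. Each replica applies the write locally, then forwards to its successor.
        for pair in cfg.replicas.windows(2) {
            println!("{} applies and forwards to {}", pair[0], pair[1]);
        }

        // 3. The tail acks; by construction the write is on every replica at that point.
        println!("tail {tail} acks {key}");
        Ok(())
    }

    fn main() {
        let cfg = ChainConfig {
            replicas: vec!["node-a:7000".into(), "node-b:7000".into(), "node-c:7000".into()],
            epoch: 42,
        };
        write_path(&cfg, "user:1", b"hello").unwrap();
        // Reads go to the tail only, so they always see fully replicated data.
    }

The nice property is that the expensive part (consensus) only runs on membership changes, not on every write.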
What is the memory consumption under significant load? That seems as important as throughput & latency.
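
If the author wants to add that to the benchmarks, even something crude like sampling RSS between batches of writes would be informative. A minimal Linux-only Rust sketch (hypothetical, with an in-process map standing in for the actual store):

    // Hypothetical sketch: sample resident memory (RSS) while driving load.
    // Linux-only (parses /proc/self/status); a real run would hit the minikv
    // server instead of the in-process HashMap stand-in used here.
    use std::collections::HashMap;
    use std::fs;

    fn rss_kib() -> Option<u64> {
        let status = fs::read_to_string("/proc/self/status").ok()?;
        status
            .lines()
            .find(|l| l.starts_with("VmRSS:"))
            .and_then(|l| l.split_whitespace().nth(1))
            .and_then(|v| v.parse().ok())
    }

    fn main() {
        let mut store: HashMap<String, Vec<u8>> = HashMap::new();

        for batch in 0..10 {
            for i in 0..100_000 {
                // 256-byte values as a stand-in workload.
                store.insert(format!("key-{batch}-{i}"), vec![0u8; 256]);
            }
            println!(
                "after {} inserts: RSS = {} KiB",
                (batch + 1) * 100_000,
                rss_kib().unwrap_or(0)
            );
        }
    }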
Why do people always lie about this? Especially in this case, where they uploaded the entire log:
Date: Sat Dec 6 16:08:04 2025 +0100
Add hashing utilities and consistent hash ring
Date: Sat Dec 6 16:07:24 2025 +0100
Create mod.rs for common utilities in minikv
Date: Sat Dec 6 16:07:03 2025 +0100
Add configuration structures for minikv components
Date: Sat Dec 6 16:06:26 2025 +0100
Add error types and conversion methods for minikv
Date: Sat Dec 6 16:05:45 2025 +0100
Add main module for minikv key-value store
And this goes on until the project is complete (which probably took 2-3h total if you sum all the sessions). I doubt they learned anything at all, other than that LLMs can solo-complete simple projects. The comments in the previous submission are also obviously AI-generated. No wonder it was flagged.
>Built in public as a learning-by-doing project
So either the entire project was already written and is being uploaded one file at a time (the first modification since the earliest commit mentioned is a README update: https://github.com/whispem/minikv/commit/6fa48be1187f596dde8..., clearly AI-generated, and the AI used clearly has codebase/architecture knowledge), in which case this claim is false, or they're implementing a new component every 30 seconds.
https://github.com/whispem/minikv/commit/6e01d29365f345283ec...