Edit: for me this post appears on the front page of HN. OP, this is mission success - add this project to your résumé and stop spamming.
I've considered building this exact thing before (I think I've even talked about it on HN), but the reason I never did was that I was intuitively sure the actual overhead of the SQL layer is negligible for simple k/v queries.
Where does the +104% on random deletes (for example) actually come from?
The SQL layer overhead alone is probably small, you're right. The bigger gain comes from a cached read cursor. SQLite opens and closes a cursor on every operation; SNKV keeps one persistent cursor per column family sitting open on the B-tree. On random deletes that means seek + delete on an already-warm cursor, versus initialize cursor + seek + delete + close on every call.
For deletes there's also statement overhead in SQLite: even with a cached prepared statement, every call still pays for bind + step + reset, and that's work SNKV just doesn't do.
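For concreteness, here's a rough way to isolate just the re-prepare part of that cost using Python's stdlib `sqlite3` bindings rather than the C API (the `cached_statements=0` connection option disables the module's statement cache, forcing a fresh prepare on every call; the table shape and sizes are made up for illustration):

```python
import random
import sqlite3
import time

def timed_random_deletes(cached_statements: int, n: int = 5_000) -> float:
    """Insert n rows into a k/v-shaped table, then time n random
    single-row DELETEs.

    With cached_statements=0 every execute() re-prepares the SQL;
    with the default cache the compiled statement is reused, leaving
    only the per-call bind + step + reset work.
    """
    con = sqlite3.connect(":memory:", cached_statements=cached_statements)
    con.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v BLOB)")
    con.executemany("INSERT INTO kv VALUES (?, ?)",
                    ((i, b"x" * 64) for i in range(n)))
    keys = list(range(n))
    random.shuffle(keys)
    t0 = time.perf_counter()
    for k in keys:
        # each call: (maybe prepare +) bind + step + reset
        con.execute("DELETE FROM kv WHERE k = ?", (k,))
    elapsed = time.perf_counter() - t0
    assert con.execute("SELECT COUNT(*) FROM kv").fetchone()[0] == 0
    con.close()
    return elapsed

reprepare = timed_random_deletes(cached_statements=0)
reuse = timed_random_deletes(cached_statements=128)
print(f"re-prepare each call: {reprepare:.3f}s, reuse statement: {reuse:.3f}s")
```

Caveat: this only separates out the prepare step. The cursor open/close inside the VDBE happens in both runs, so it can't show the persistent-cursor gain SNKV is claiming - only the C benchmarks can.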
I'd genuinely like someone else to run the numbers. Benchmark source is in the repo if you want to poke at it — tests/test_benchmark.c on the SNKV side and https://github.com/hash-anu/sqllite-benchmark-kv for SQLite. If your results differ I want to know.
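This isn't the repo's harness, but for anyone who wants a quick sanity check before reading the C benchmarks, a minimal random-access throughput sketch against stock SQLite via Python's stdlib bindings (numbers include Python call overhead, so treat them as relative, not comparable to the C results; sizes and pragmas are arbitrary choices):

```python
import random
import sqlite3
import time

def ops_per_sec(n: int = 20_000) -> dict:
    """Rough random-access throughput for a k/v-shaped SQLite table."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v BLOB)")
    keys = list(range(n))
    results = {}

    # random-order inserts
    random.shuffle(keys)
    t0 = time.perf_counter()
    for k in keys:
        con.execute("INSERT INTO kv VALUES (?, ?)", (k, b"x" * 64))
    results["insert"] = n / (time.perf_counter() - t0)

    # random point reads
    random.shuffle(keys)
    t0 = time.perf_counter()
    for k in keys:
        con.execute("SELECT v FROM kv WHERE k = ?", (k,)).fetchone()
    results["read"] = n / (time.perf_counter() - t0)

    # random deletes
    random.shuffle(keys)
    t0 = time.perf_counter()
    for k in keys:
        con.execute("DELETE FROM kv WHERE k = ?", (k,))
    results["delete"] = n / (time.perf_counter() - t0)

    con.close()
    return results

for op, rate in ops_per_sec().items():
    print(f"{op}: {rate:,.0f} ops/s")
```

If someone runs the real harnesses and sees numbers that disagree with the README, post them.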
All random accesses are slower.
That credulous HN readers will upvote this is alarming.
Trusting something like data storage to vibe-coded nonsense is incredibly irresponsible. Promoting it is even more so. I'm just surprised you can't tell, too.