5 points by kacy 5 hours ago | 1 comment
  • kacy 5 hours ago
    Hi folks,

    I'm excited (and a little anxious) to share Ember: a low-latency, memory-efficient distributed cache written in Rust. I designed it as a drop-in replacement for Redis, with some modern twists that I've been iterating on and that have long been on my wishlist.

    * https://github.com/kacy/ember

    * https://emberdb.com

    Here's what makes Ember different IMO:

    * Native Protobuf storage (no string serialization overhead; values can be encoded/decoded directly in the DB)

    * Full gRPC API alongside RESP3 (think bi-directional streaming support for pub/sub)

    * Mature distributed systems foundations (Raft consensus, gossip failure detection, automatic failover, live resharding)

    * Built-in vector storage with a much smaller footprint

    * Single static binary, zero runtime deps, and no garbage collector pauses

    * It's also a true drop-in replacement. No need for ember-cli or any of the client libraries; your existing Redis code should "just work" without too many issues.
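    To make the drop-in claim concrete: RESP compatibility is a wire-protocol property, so any client that frames commands as RESP arrays of bulk strings can talk to a compatible server unchanged. Here's a minimal sketch of that framing (illustrative only, not Ember's actual code):

    ```rust
    /// Encode a command as a RESP array of bulk strings, the framing every
    /// Redis client emits on the wire, e.g.
    /// ["GET", "k"] -> "*2\r\n$3\r\nGET\r\n$1\r\nk\r\n".
    fn encode_command(args: &[&str]) -> String {
        // "*<n>" announces an array of n elements.
        let mut out = format!("*{}\r\n", args.len());
        for arg in args {
            // "$<len>" announces a bulk string, followed by the payload.
            out.push_str(&format!("${}\r\n{}\r\n", arg.len(), arg));
        }
        out
    }

    fn main() {
        println!("{:?}", encode_command(&["SET", "greeting", "hello"]));
    }
    ```

    Because existing client libraries already emit exactly this framing, a protocol-compatible server needs no Ember-specific library on the client side.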

    This whole thing started purely as a learning project. For the last 15 years or so, I've worked with Redis, distributed systems, and all the fun and painful parts of building reliable infrastructure. But I had never actually sat down and /implemented/ those concepts from scratch myself. I wanted to really understand them. Not just read papers or use someone else's code.

    So I started small. `cargo new test-db` and a single main.rs file. "How does RESP3 parsing work?" and "Can I do a basic SET and GET command?" It was really messy, but it slowly grew. At some point, I realized this had turned into something very solid and performant. The benchmarks early on were very convincing, but that's because it was architecturally very simple in the beginning. Now there's full Raft for replication, shared-nothing sharding by core, gossip membership, and live slot migration, and it still punches above its weight. All of the benchmark stuff is secondary though. I'm just proud that it works reliably and that I learned so much along the way.
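    For anyone curious what that "How does RESP3 parsing work?" starting point looks like, here is a toy command parser in the spirit of that first main.rs (a sketch written for illustration, not Ember's actual parser; a real one reads incrementally from buffered socket data and handles every RESP3 type):

    ```rust
    /// Parse a RESP array of bulk strings, the shape in which clients send
    /// commands, e.g. "*2\r\n$3\r\nGET\r\n$3\r\nfoo\r\n" -> Some(["GET", "foo"]).
    fn parse_command(input: &str) -> Option<Vec<String>> {
        let mut lines = input.split("\r\n");
        // "*<n>" header: number of array elements.
        let count: usize = lines.next()?.strip_prefix('*')?.parse().ok()?;
        let mut args = Vec::with_capacity(count);
        for _ in 0..count {
            // "$<len>" header, then the bulk-string payload on the next line.
            let len: usize = lines.next()?.strip_prefix('$')?.parse().ok()?;
            let arg = lines.next()?;
            if arg.len() != len {
                return None; // declared length must match the payload
            }
            args.push(arg.to_string());
        }
        Some(args)
    }

    fn main() {
        let cmd = parse_command("*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n");
        println!("{:?}", cmd); // Some(["SET", "foo", "bar"])
    }
    ```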

    A huge thank you to the Redis community and especially antirez (Salvatore Sanfilippo). I don't know if you'll see this, but you've been such an inspiration. Redis changed how the industry thinks about caching, and its elegance and simplicity have influenced me since long before this project. Hundreds of thousands of developers have built incredible things on it.

    One side comment I want to mention: AI has been an absolute game-changer in getting this project where it is today. Models like Opus 4.5 and Gemini 3 helped me reason through tricky Raft edge cases, profile expensive calls, debug obscure concurrency bugs, generate > 1200 tests, and explain very hard concepts in plain English. They let me learn faster, iterate quicker, and ship something I'm proud of in a fraction of the time it would otherwise have taken. I'm grateful for how these tools are democratizing deep systems work like this.

    It still feels immature in some respects, but the core is stable. All of my benchmarks are published along with the GCP VM configuration so other folks can replicate them. Like any project, there will be bugs, but I'll be here for a while hacking on it to make it better. Please take a look and give it a shot with your workloads. Thanks for any feedback!