One thing I keep seeing in practice is that “memory” problems are often less about storage and more about structure + retrieval strategy.
Vector search helps sometimes, but for a lot of agent workflows we’ve had better results from explicit context organization (files, metadata, rules) than from semantic similarity alone.
Curious how you’re thinking about memory updates over time — append-only vs rewriting summaries?
In Mneme, updates are intentionally asymmetric:

– Facts are append-only and explicitly curated (they’re meant to be boring and stable).
– Task state is rewritten as work progresses.
– Context is disposable and aggressively compacted or dropped.
The idea is that only a small subset of information deserves long-term durability; everything else should be easy to overwrite or forget.
This reduces the need for heavy retrieval logic in the first place, since the model is usually operating over a much smaller, more explicit working set.
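The asymmetry described above can be sketched in a few lines. This is a minimal illustration, not Mneme's actual API; all class and method names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    # Hypothetical three-tier store mirroring the split described above.
    facts: list[str] = field(default_factory=list)   # append-only, curated
    task_state: dict = field(default_factory=dict)   # rewritten in place
    context: list[str] = field(default_factory=list) # disposable scratch

    def add_fact(self, fact: str) -> None:
        # Facts only grow; nothing here ever mutates or deletes an entry.
        if fact not in self.facts:
            self.facts.append(fact)

    def update_task(self, **state) -> None:
        # Task state is overwritten wholesale as work progresses.
        self.task_state = dict(state)

    def note(self, item: str) -> None:
        self.context.append(item)

    def compact(self, keep_last: int = 3) -> None:
        # Ephemeral context is aggressively truncated, never preserved.
        self.context = self.context[-keep_last:]
```

The point is that each tier has exactly one legal mutation: append, overwrite, or drop. That keeps the working set small without any retrieval machinery.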
Instead of retrieval or embeddings, it treats memory as an explicit, structured artifact and separates:

– stable facts
– task state
– ephemeral context
The goal is to make memory boring, inspectable, and durable across sessions.
Happy to answer questions, or to hear why this is a bad idea.