The tradeoff we found: graph-based memory is more queryable but adds architectural complexity that breaks when the runtime crashes or the agent needs to be inspected by a human. Flat files are readable, git-diffable, and survive catastrophic failures better.
The loop you describe (reconstruct > reason > decide > execute > record) matches almost exactly what we landed on. The part that's still unsolved for us is "update memory" - specifically, who decides what's worth keeping long-term vs discarding. Right now it's the agent's judgment call, which works until it doesn't.
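For concreteness, here's a minimal sketch of that loop with the unresolved part isolated into a single `worth_keeping` predicate. All names here are illustrative, not anyone's actual implementation - the point is that the retention policy is one swappable function, and today it's just a judgment call:

```python
# Sketch of the reconstruct > reason > decide > execute > record loop.
# `worth_keeping` is the unsolved piece: the retention judgment call.

def worth_keeping(entry: dict) -> bool:
    # Placeholder policy: keep only entries flagged as decisions or errors.
    return entry.get("kind") in {"decision", "error"}

def agent_step(memory: list[dict], observe, reason, execute):
    context = memory[-50:]                      # reconstruct working context
    plan = reason(context, observe())           # reason + decide
    result = execute(plan)                      # execute
    entry = {"kind": plan.get("kind", "note"),
             "plan": plan, "result": result}
    memory.append(entry)                        # record
    # update memory: prune what the policy says isn't worth keeping,
    # but never drop the entry we just recorded
    memory[:] = [e for e in memory if worth_keeping(e) or e is entry]
    return result
```

Swapping `worth_keeping` for a learned or scored policy is where most of the open design space lives.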
Curious what your Memory Fusion Engine does differently at the curation layer - is it content-based similarity, recency weighting, or something else?
That’s honestly where it stopped being retrieval and started feeling more like actual memory. So far it hasn’t left me hanging in any work scenarios, even in chaotic or multi-process situations, because it’s constantly updating continuity and direction as things change.
Then it kind of just naturally ended up handling decay / consolidation once the history gets long. At one point I had several thousand revolving memories, but I haven't found any weaknesses yet.
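For anyone following along: a common pattern for the decay/consolidation step (not necessarily what's described above - this is a generic sketch with made-up names) is to keep recent entries verbatim and collapse everything older into a summary record:

```python
def consolidate(memories: list[str], keep_recent: int = 100,
                summarize=None) -> list[str]:
    """Keep the newest `keep_recent` entries verbatim; collapse the
    rest into one summary entry. `summarize` would normally be an
    LLM call; here it's a stub placeholder."""
    if len(memories) <= keep_recent:
        return memories
    old, recent = memories[:-keep_recent], memories[-keep_recent:]
    summarize = summarize or (
        lambda chunk: f"[summary of {len(chunk)} older entries]")
    return [summarize(old)] + recent
```

Run periodically, this bounds the history at `keep_recent + 1` entries per pass, which is one way a system could absorb "several thousand revolving memories" without degrading.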