Real numbers from my machine: direct node lookup in 19us, prefix queries over 10k nodes in 28-80us with no embedding model. That is 280x faster than a local vector DB at the same 10k-node scale. Full agent context rebuilt from a cold start in under 1ms. ACID-durable via a WAL, tested across 60 crash scenarios with zero data loss. Validated on a Jetson Orin Nano at 192ns hot reads.
The core idea is that most agent memory is structured, not fuzzy: user preferences, learned facts, task stores, conversation history. You already know what you're looking for. Prefix-semantic naming replaces vector similarity entirely for these workloads. No embedding model, no GPU, no cloud call.
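To make the idea concrete, here is a minimal sketch of what prefix-semantic retrieval looks like. This is a hypothetical toy (`PrefixStore`, slash-delimited keys, and the sorted-list-plus-bisect layout are all my assumptions, not Synrix's actual implementation): keys carry meaning in their path, so a query is an ordered range scan rather than a similarity search.

```python
import bisect

class PrefixStore:
    """Toy prefix-keyed memory. Keys encode meaning in their path
    (e.g. "user/prefs/theme"), so retrieval is a range scan over
    sorted keys. No embeddings, no GPU, no network."""

    def __init__(self):
        self._keys = []   # sorted list of keys
        self._vals = {}   # key -> value

    def put(self, key, value):
        if key not in self._vals:
            bisect.insort(self._keys, key)
        self._vals[key] = value

    def get(self, key):
        # Direct lookup: one hash probe.
        return self._vals.get(key)

    def scan(self, prefix):
        """Yield (key, value) for every key starting with prefix."""
        i = bisect.bisect_left(self._keys, prefix)
        while i < len(self._keys) and self._keys[i].startswith(prefix):
            yield self._keys[i], self._vals[self._keys[i]]
            i += 1

store = PrefixStore()
store.put("user/prefs/theme", "dark")
store.put("user/prefs/units", "metric")
store.put("robot/env/door/3", "sticks")
print(dict(store.scan("user/prefs/")))
# prints {'user/prefs/theme': 'dark', 'user/prefs/units': 'metric'}
```

The point of the sketch is the contrast: "what does the user prefer?" becomes `scan("user/prefs/")`, a deterministic range query, instead of embedding the question and hoping nearest-neighbor search returns the right facts.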
The robotics use case is what I find most interesting. A robot learns its environment during operation. Which door sticks, which patient has a latex allergy, which corridor is slippery. Power cuts out. Robot reboots cold. Every memory restores in milliseconds via WAL recovery. No internet required. Works in a Faraday cage, underground, on a factory floor.
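The cold-restart property comes from write-ahead logging. A minimal sketch of the pattern (file name, record format, and functions are my own illustration, not Synrix's internals): every write is appended and fsynced before it is acknowledged, and on reboot the store is rebuilt by replaying the log, skipping a torn final record from a mid-write crash.

```python
import json
import os

WAL_PATH = "memory.wal"  # hypothetical log file name

def wal_append(path, key, value):
    """Append one record as a JSON line, then flush and fsync.
    Once this returns, the record survives a power cut."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"k": key, "v": value}) + "\n")
        f.flush()
        os.fsync(f.fileno())

def wal_replay(path):
    """Cold-start recovery: rebuild the in-memory store from the log.
    A partial trailing line (crash mid-write) is simply dropped."""
    store = {}
    if not os.path.exists(path):
        return store
    with open(path, encoding="utf-8") as f:
        for line in f:
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                break  # torn final record from a crash; stop here
            store[rec["k"]] = rec["v"]
    return store

wal_append(WAL_PATH, "robot/env/door/3", "sticks")
wal_append(WAL_PATH, "patient/44/allergy", "latex")
print(wal_replay(WAL_PATH))
```

Real implementations add checksums, compaction, and batched fsyncs, but the recovery story is the same: no network, no external service, just replaying a local file.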
It is not a vector DB replacement. For fuzzy similarity search over unstructured documents, Qdrant and Chroma are the right tools. Synrix is the memory layer for structured agent workloads where you control the naming.
Curious whether anyone has hit the structured-vs-fuzzy memory problem in production, and how you solved it.