AI memory seems to be predominantly a tension between compression and lookup speed.
These large vectors are keys for lookups, a language of expression, and a means of compression. Learning new things is always easier when you can map them back to something you already know. There's this page about forest fire simulations in a scientific computing book I read back in college more than a decade ago. I remember it viscerally because I've solved 100 different problems with it as the seed. I can barely remember anything else in the book. I don't remember it because I read it over and over; I remember it because it was useful and kept being useful.
If some new technique or idea is 90% similar to something I already know, I'll learn it easily. If it's 60%, I need to churn it around, put in a lot of learning effort. If it's 0%, it's noise from this angle.
> 4. Establishes meaningful links based on similarities
> 5. Enables dynamic memory evolution and updates
Wondering how much compression occurs in #5.
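For point 4, here's a minimal sketch of what similarity-based linking could look like; the note dict shape, the cosine helper, and the 0.8 threshold are all assumptions on my part, not the paper's implementation. Compression would presumably come in at point 5, if heavily overlapping notes get merged rather than just linked.

    # Hypothetical sketch of point 4 ("links based on similarities");
    # not the paper's implementation. Notes are dicts with an embedding
    # and a set of link ids.
    import numpy as np

    SIM_THRESHOLD = 0.8  # made-up cutoff

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def link_similar(notes):
        """notes: list of {'id': str, 'embedding': np.ndarray, 'links': set}."""
        for i, a in enumerate(notes):
            for b in notes[i + 1:]:
                if cosine(a["embedding"], b["embedding"]) >= SIM_THRESHOLD:
                    a["links"].add(b["id"])
                    b["links"].add(a["id"])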
Further still, it would be neat to see a hybrid system where humans and agents collaborate on building and maintaining a knowledge base.
I guess the biggest risk is that two related notes don't end up getting connected, so the agent gets stuck in a local optimum. And once a certain total number of notes has been reached, it becomes practically impossible to make all the connections, because there are just too many possible pairs?
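Rough numbers on why exhaustive linking stops being feasible: candidate pairs grow quadratically with the number of notes.

    # Candidate pairs grow as n*(n-1)/2, so exhaustive pairwise linking
    # blows up quickly.
    def candidate_pairs(n):
        return n * (n - 1) // 2

    for n in (100, 1_000, 10_000, 100_000):
        print(f"{n:>7,} notes -> {candidate_pairs(n):>13,} possible pairs")
    # 100,000 notes already means ~5 billion pairs, which is why systems
    # usually fall back to approximate nearest-neighbour search instead.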
Also curious if there might be some improvements if you don't rely on semantic similarity and instead do pairwise "how related are these memories and in what way" LLM checks, like https://www.superagent.sh/blog/reag-reasoning-augmented-gene....
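Something like this toy version of the pairwise check; the prompt wording and the llm() callable are placeholders I made up, not the ReAG API.

    # Toy version of the pairwise "how related are these memories" idea;
    # the prompt and llm() are placeholders, not the ReAG implementation.
    import itertools

    PROMPT = (
        "Memory A: {a}\nMemory B: {b}\n"
        "Are these two memories related? Answer yes/no, then name the relationship."
    )

    def related_pairs(memories, llm):
        """memories: list of strings; llm: any callable mapping prompt -> reply."""
        links = []
        for a, b in itertools.combinations(memories, 2):
            reply = llm(PROMPT.format(a=a, b=b))
            if reply.strip().lower().startswith("yes"):
                links.append((a, b, reply))
        return links

The obvious cost is O(n^2) LLM calls, which ties right back to the connection-explosion worry above.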
In other words, if this AgenticMemory can give structure to unstructured conversations, and if that structure makes conversational feedback more useful for the model to learn from, then can we use it to continually refine the model so it gets better at our particular use case?