Quick summary of the findings:

- ChatGPT pre-computes lightweight summaries of roughly your 15 most recent conversations and injects them into every prompt. No vector search, no RAG. Simpler than I expected. (A sketch of this pattern follows the list.)
- Claude takes an on-demand approach: it has search tools it can invoke to query your past conversations, but it only fires them when it judges past context to be relevant. More flexible, less consistent.
- OpenClaw stores memory as plain Markdown on your local machine and retrieves it with hybrid search (semantic + BM25, weighted 70/30). Fully transparent, but single-platform. (See the scoring sketch below.)
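For the ChatGPT side, here's a minimal sketch of what a pre-compute-and-inject pattern could look like. Only the "roughly 15 most recent summaries prepended to every prompt" part comes from the findings above; the `Conversation` structure, `build_system_prompt`, and the summary format are hypothetical names I'm using for illustration, not OpenAI's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    title: str
    summary: str  # lightweight, generated ahead of time (e.g. after the chat ends)

def build_system_prompt(base_prompt: str, recent: list[Conversation], limit: int = 15) -> str:
    """Prepend pre-computed summaries of the most recent conversations.

    There is no retrieval step at query time: the same summary block is
    injected into every request, costing only a handful of prompt tokens.
    """
    bullets = "\n".join(f"- {c.title}: {c.summary}" for c in recent[:limit])
    return f"{base_prompt}\n\nRecent conversation summaries:\n{bullets}"

# Usage: summaries are computed offline, so each request just does string concat.
prompt = build_system_prompt(
    "You are a helpful assistant.",
    [Conversation("Trip planning", "User is planning a week in Lisbon in May.")],
)
print(prompt)
```

The appeal of this design is that every request sees the same memory block with zero retrieval latency; the trade-off is that anything older than the summary window is simply invisible to the model.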
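The one number the OpenClaw writeup pins down is the 70/30 semantic-to-lexical split, so here's a minimal sketch of how that score fusion might work over local Markdown files. The `rank-bm25` library, the placeholder `embed()` function, and the min-max normalization are my assumptions for the sake of a runnable example, not OpenClaw's actual code.

```python
import numpy as np
from rank_bm25 import BM25Okapi  # pip install rank-bm25

SEMANTIC_WEIGHT = 0.7  # the 70/30 split described above
LEXICAL_WEIGHT = 0.3

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hashes character trigrams into a fixed-size,
    L2-normalized vector so the sketch runs without a model download.
    Swap in a real embedding model in practice."""
    vec = np.zeros(256)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def hybrid_search(query: str, docs: list[str], top_k: int = 5) -> list[tuple[str, float]]:
    # Lexical scores: BM25 over whitespace-tokenized Markdown snippets.
    bm25 = BM25Okapi([d.lower().split() for d in docs])
    lexical = bm25.get_scores(query.lower().split())
    # Min-max normalize so lexical scores share the [0, 1] range of cosine scores.
    span = lexical.max() - lexical.min()
    lexical = (lexical - lexical.min()) / span if span else np.zeros_like(lexical)

    # Semantic scores: cosine similarity between query and document embeddings.
    q = embed(query)
    semantic = np.array([float(np.dot(q, embed(d))) for d in docs])

    # Weighted fusion, then rank.
    combined = SEMANTIC_WEIGHT * semantic + LEXICAL_WEIGHT * lexical
    ranked = sorted(zip(docs, combined), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

# Usage over a couple of Markdown memory snippets:
memories = [
    "# 2024-05-02\nDiscussed BM25 parameters for the search index.",
    "# 2024-05-03\nUser prefers dark mode and short answers.",
]
print(hybrid_search("search ranking settings", memories, top_k=1))
```

The interesting design choice is the normalization: BM25 scores are unbounded while cosine similarity lives in a fixed range, so some rescaling (min-max here, possibly something else in the real system) is needed before a fixed 70/30 blend means anything.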
Full disclosure: I'm building in this space (Maximem Vity — a private, secure vault for cross-LLM memory). The comparison stands on its own, but that context motivated the research.
Happy to discuss the architectural differences or answer questions.