1 point by STARGA 4 hours ago | 1 comment
  • STARGA 4 hours ago
    I built mind-mem because every agent memory system I evaluated (Mem0, Letta, Memobase) requires a vector DB, embedding API, or 50+ pip dependencies. For local-first agent workflows, that's a non-starter.

    What it does:

    - Hybrid retrieval: BM25 (Porter stemming, query expansion, field boosts) + vector search (bge-large-en-v1.5, sqlite-vec) + Reciprocal Rank Fusion
    - Adaptive block metadata (A-MEM): blocks learn from retrieval patterns; frequently accessed blocks grow keyword sets and get importance boosts
    - Intent-aware routing: "when did we deploy v2?" gets temporal weighting, "how does auth work?" gets graph traversal; 9 intent types with an adaptive feedback loop
    - Cross-encoder reranking (config-gated, optional)
    - MCP server with 19 tools; works with Claude, Cursor, VS Code, and any MCP client
    - 17 compiled scoring kernels for hot-path scoring
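    To make the fusion step concrete: Reciprocal Rank Fusion combines the BM25 and vector rankings using only each document's rank position, so the two scores never need to share a scale. A minimal sketch (the function name, signatures, and k=60 default are illustrative, not mind-mem's actual internals):

```python
def rrf_fuse(bm25_ranked, vector_ranked, k=60):
    """Merge two ranked lists of doc ids (best first) via Reciprocal Rank Fusion.

    Each list contributes 1 / (k + rank) per document; documents that rank
    well in both lists accumulate the highest fused score.
    """
    scores = {}
    for ranked in (bm25_ranked, vector_ranked):
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Return doc ids sorted by fused score, best first.
    return sorted(scores, key=scores.get, reverse=True)

# A doc ranked 2nd by BM25 but 1st by vector search beats one ranked 1st/3rd:
print(rrf_fuse(["a", "b", "c"], ["b", "c", "a"]))  # → ['b', 'a', 'c']
```

    The appeal of RRF is that it needs no score normalization and no tuning beyond k, which damps the advantage of a single top rank.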

    Zero external deps for the core path. Python stdlib + SQLite. Vector search available via fastembed (CPU/ONNX, no GPU required).
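    For a sense of what the stdlib-only core path looks like, here is a toy Okapi BM25 scorer in pure Python (my own sketch of the textbook formula, not mind-mem's compiled kernels, and without the Porter stemming or field boosts mentioned above):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each doc (a list of tokens) against query_terms with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N  # average document length
    df = Counter()                         # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            # Standard BM25 idf with +1 smoothing to keep it non-negative.
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            # Term frequency saturation (k1) and length normalization (b).
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [["auth", "token", "auth"], ["deploy", "log"]]
print(bm25_scores(["auth"], docs))  # first doc scores > 0, second scores 0
```

    Everything above is stdlib, which is the point: the ranking half of hybrid retrieval needs no external packages at all.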

    Benchmark: 67.3% on LoCoMo (1,986 questions), about 98% of Mem0's 68.5% score, with zero infrastructure.

    2,027 tests. v1.8.1. PyPI: pip install mind-mem.

    Happy to answer questions about the BM25+vector fusion, the A-MEM pattern, or why hybrid retrieval beats pure embeddings on structured agent data.