1 point by sazo 13 hours ago | 1 comment
  • sazo 13 hours ago
    Every LLM session starts from zero. You re-explain your stack,
    your context, your decisions — every single time.

    This fixes that. Wraps Claude, GPT, or any local Ollama model
    with persistent memory that never forgets and gets smarter about
    what you need over time.

    Under the hood: SQLite episode log + LoRA reranker that learns
    your retrieval patterns + EWC weight consolidation that protects
    memories you keep coming back to. BYOK, runs local, your data
    never leaves your machine.
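    To make the episode-log idea concrete, here is a minimal sketch of
    what a persistent SQLite memory could look like. All names here
    (the `episodes` table, `log_episode`, `recall`) are illustrative
    assumptions, not Bubble's actual schema or API, and the substring
    recall stands in for whatever reranked retrieval the real system does.

```python
import sqlite3
import time

# Hypothetical episode log; a real deployment would use an on-disk file.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS episodes (
        id      INTEGER PRIMARY KEY,
        ts      REAL NOT NULL,    -- unix timestamp of the exchange
        role    TEXT NOT NULL,    -- 'user' or 'assistant'
        content TEXT NOT NULL     -- the message text
    )""")

def log_episode(role, content):
    """Append one exchange to the persistent log."""
    conn.execute(
        "INSERT INTO episodes (ts, role, content) VALUES (?, ?, ?)",
        (time.time(), role, content))
    conn.commit()

def recall(keyword, limit=5):
    """Naive substring recall, newest first; a reranker would refine this."""
    rows = conn.execute(
        "SELECT content FROM episodes WHERE content LIKE ? "
        "ORDER BY ts DESC LIMIT ?", ("%" + keyword + "%", limit))
    return [r[0] for r in rows]

log_episode("user", "We deploy on Fly.io with Postgres 16")
print(recall("Postgres"))  # → ['We deploy on Fly.io with Postgres 16']
```

    Because the log lives in a local SQLite file, every new session can
    query it before the first prompt is sent, which is what removes the
    re-explain-your-stack step.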

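    The EWC (elastic weight consolidation) part can be sketched as a
    quadratic penalty that anchors parameters with high Fisher
    information near their consolidated values, so weights backing
    frequently-revisited memories resist drift when new ones are
    learned. The function and toy numbers below are illustrative
    assumptions, not Bubble's code.

```python
def ewc_penalty(theta, theta_star, fisher, lam=100.0):
    """Standard EWC term: (lam/2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current parameters
    theta_star -- consolidated parameters from earlier learning
    fisher     -- per-parameter Fisher information (importance)
    """
    return 0.5 * lam * sum(
        f * (t - ts) ** 2 for t, ts, f in zip(theta, theta_star, fisher))

# Toy example: the first weight backs a memory used often (high Fisher),
# so moving it away from its consolidated value is penalized heavily.
theta      = [0.9, 0.1]
theta_star = [1.0, 0.0]
fisher     = [10.0, 0.1]
print(ewc_penalty(theta, theta_star, fisher))  # ≈ 5.05
```

    Adding this term to the training loss is what lets the reranker keep
    adapting without overwriting memories you keep coming back to.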
    This is not a RAG wrapper.

    This is Bubble.