1 point by dp-web4 3 hours ago | 1 comment
  • dp-web4 3 hours ago
    I built this because every Claude Code memory plugin I looked at (claude-mem, Total Recall, ContextForge) solves the stateless-session problem the same way: capture everything, search later. That's noisy and token-expensive.

      engram flips it: every tool use is scored on 5 salience dimensions (Surprise, Novelty, Arousal, Reward, Conflict) at capture time. Routine operations evict from a circular buffer; errors, breakthroughs, and novel work persist. Scoring is heuristic TypeScript, <10ms, no LLM calls.
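      To make the capture-time idea concrete, here is a minimal sketch of SNARC-style scoring. The five dimension names come from the post; the weights, the simple mean, and the `THRESHOLD` cutoff are illustrative assumptions, not engram's actual heuristics.

      ```typescript
      // Hypothetical SNARC salience scoring at capture time.
      interface SnarcScores {
        surprise: number; // e.g. tool returned an error or unexpected output
        novelty: number;  // e.g. first time this file/tool combination is seen
        arousal: number;  // e.g. size or impact of the change
        reward: number;   // e.g. a failing test started passing
        conflict: number; // e.g. output contradicts an earlier observation
      }

      function salience(s: SnarcScores): number {
        // Simple mean of the five dimensions; real heuristics would weight them.
        return (s.surprise + s.novelty + s.arousal + s.reward + s.conflict) / 5;
      }

      // Illustrative cutoff: below it, the event evicts from the circular buffer.
      const THRESHOLD = 0.3;

      const routine: SnarcScores = { surprise: 0, novelty: 0.1, arousal: 0.1, reward: 0, conflict: 0 };
      const errorFix: SnarcScores = { surprise: 0.9, novelty: 0.6, arousal: 0.7, reward: 0.8, conflict: 0.2 };

      console.log(salience(routine) < THRESHOLD);   // routine op: evicted
      console.log(salience(errorFix) >= THRESHOLD); // error→fix: persisted
      ```

      The point of a pure-arithmetic scorer like this is that it runs inline in the hook with no model call, which is where the <10ms budget comes from.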
    
      The part I'm most interested in feedback on is the dream cycle. At session end (and mid-session on compaction), engram runs consolidation passes that extract recurring tool sequences, error→fix chains, and concept clusters. An optional "deep dream" sends observations to Claude and asks "what patterns are worth remembering?", extracting semantic insights rather than just mechanical sequences. Memories decay over time (0.05 confidence/day) and are pruned below 0.1 confidence. A memory system that only accumulates is a distortion engine.
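      The decay-and-prune mechanic can be sketched with the stated rates (0.05 confidence/day, prune below 0.1); the `Memory` shape, field names, and example contents are hypothetical.

      ```typescript
      // Illustrative decay-and-prune pass over stored memories.
      interface Memory {
        content: string;
        confidence: number;
        daysSinceReinforced: number; // days since the memory was last reinforced
      }

      const DECAY_PER_DAY = 0.05; // stated decay rate
      const PRUNE_BELOW = 0.1;    // stated prune threshold

      function decayAndPrune(memories: Memory[]): Memory[] {
        return memories
          .map(m => ({ ...m, confidence: m.confidence - DECAY_PER_DAY * m.daysSinceReinforced }))
          .filter(m => m.confidence >= PRUNE_BELOW);
      }

      const fresh: Memory = { content: "error→fix: ECONNRESET → retry with backoff", confidence: 0.9, daysSinceReinforced: 2 };
      const stale: Memory = { content: "ran ls in /tmp", confidence: 0.3, daysSinceReinforced: 10 };

      console.log(decayAndPrune([fresh, stale]).length); // → 1: only the reinforced memory survives
      ```

      Linear decay with a hard prune floor is the simplest way to get the "forgetting" behavior; the key property is that unreinforced memories eventually leave the store entirely.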
    
      This is a spinoff from SAGE (https://github.com/dp-web4/SAGE), a cognition kernel for edge AI that uses the same SNARC salience scoring in its consciousness loop. The SNARC concept comes from Richard Aragon's Transformer Sidecar research.
    
      Technical details: 5 Claude Code hooks (SessionStart, UserPromptSubmit, PostToolUse, PostCompact, Stop), 4 MCP tools, per-directory SQLite isolation, epistemic labeling (observations tagged "observed" vs patterns tagged "inferred"). MIT license, zero external calls, all local.
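      The epistemic labeling might look roughly like this; the record shape and source values are assumptions for illustration, only the "observed"/"inferred" distinction comes from the post.

      ```typescript
      // Hypothetical record shape distinguishing raw captures from consolidated patterns.
      type EpistemicLabel = "observed" | "inferred";

      interface MemoryRecord {
        label: EpistemicLabel;
        content: string;
        source: string; // which hook or pass produced it (illustrative field)
      }

      const observation: MemoryRecord = {
        label: "observed", // raw tool-use capture, directly witnessed
        content: "npm test failed: 3 assertions in auth.spec.ts",
        source: "PostToolUse",
      };

      const pattern: MemoryRecord = {
        label: "inferred", // produced by a consolidation/dream pass, not directly witnessed
        content: "auth tests tend to break after session-middleware changes",
        source: "dream-cycle",
      };

      console.log(observation.label, pattern.label);
      ```

      Keeping the label on every record lets a consumer weight direct observations above inferred patterns when the two disagree.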
    
      Been running it across a 6-machine fleet for a few days. Happy to answer questions about the salience scoring approach or the dream cycle architecture.