1 point by joeljoseph_ 3 hours ago | 1 comment
  • joeljoseph_ 3 hours ago
    [Experimental Project]

    I use both OpenAI Codex and Claude Code daily on the same codebase. The biggest pain — they don't share memory. Claude fixes a bug, Codex repeats it next session. Every session starts from zero.

    So I built Open Timeline Engine — a local-first engine that captures your decisions, mines patterns, and gives any MCP agent shared memory.

    It also supports gated autonomy.

    • pavel_man 2 hours ago
      Interesting approach. How do you handle long-term drift or incorrect pattern capture?

      If the system mines behavioral patterns from decisions, I imagine there’s a risk of reinforcing mistakes over time. Do you have a mechanism for pruning, versioning, or validating learned memory before it propagates across agents?

      • joeljoseph_ an hour ago
        Yeah, that's a real concern. A few things I do:

        Patterns aren't blindly trusted — they carry confidence scores and need real evidence before they go active. Low-confidence stuff never drives autonomous execution.
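        Roughly, the activation check works like this. A minimal sketch, assuming a simple pattern record with a confidence score and an evidence count; the field names and thresholds here are illustrative, not the engine's actual schema:

```python
from dataclasses import dataclass

# Assumed thresholds for illustration only.
MIN_CONFIDENCE = 0.7
MIN_EVIDENCE = 3

@dataclass
class Pattern:
    rule: str
    confidence: float = 0.0  # grows as the pattern is confirmed
    evidence_count: int = 0  # how many sessions supplied evidence

    def is_active(self) -> bool:
        # Low-confidence patterns never drive autonomous execution.
        return (self.confidence >= MIN_CONFIDENCE
                and self.evidence_count >= MIN_EVIDENCE)

p = Pattern(rule="prefer async DB calls", confidence=0.9, evidence_count=5)
print(p.is_active())  # True: enough confidence and evidence to go active
```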

        If something wrong gets learned, you can deprecate it, reject it, or hard-delete it. There's also explicit "avoid" rules you can set.
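        The manual overrides amount to a filter at retrieval time. A sketch under assumed status values and an assumed avoid-list shape (not the real storage format):

```python
# Hypothetical status values for learned patterns.
ACTIVE, DEPRECATED, REJECTED = "active", "deprecated", "rejected"

patterns = [
    {"rule": "use ORM for queries", "status": ACTIVE},
    {"rule": "retry on 500 forever", "status": DEPRECATED},  # learned wrong, demoted
]
avoid_rules = {"retry on 500 forever"}  # explicit user-set "avoid" rules

def usable(p: dict) -> bool:
    # Deprecated/rejected patterns and anything on the avoid list
    # are filtered out before they can reach an agent.
    return p["status"] == ACTIVE and p["rule"] not in avoid_rules

print([p["rule"] for p in patterns if usable(p)])  # ['use ORM for queries']
```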

        Old patterns naturally fade — retrieval is recency-weighted, so stuff that isn't reinforced drops in rank over time. There's also a lifecycle cleanup that prunes stale records.
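        One way to picture the recency weighting: an exponential decay on the last time a pattern was reinforced. The half-life constant below is made up for illustration:

```python
import time

HALF_LIFE_DAYS = 30.0  # assumed decay half-life

def recency_weight(last_reinforced_ts: float, now: float) -> float:
    # Weight halves every HALF_LIFE_DAYS without reinforcement.
    age_days = (now - last_reinforced_ts) / 86400.0
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def score(base_relevance: float, last_reinforced_ts: float, now: float) -> float:
    return base_relevance * recency_weight(last_reinforced_ts, now)

now = time.time()
fresh = score(0.8, now - 1 * 86400, now)    # reinforced yesterday
stale = score(0.8, now - 120 * 86400, now)  # untouched for 4 months
print(fresh > stale)  # True: unreinforced patterns drop in rank
```

        Reinforcing a pattern just resets its timestamp, so actively used knowledge keeps outranking stale records until cleanup prunes them.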

        And even with all that, safety gates still apply. The system won't act on weak evidence — below a confidence threshold it just asks you instead of guessing.
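        The gate itself is simple in shape; a sketch with an assumed threshold and return values:

```python
AUTONOMY_THRESHOLD = 0.8  # assumed cutoff for autonomous action

def gate(action: str, confidence: float) -> str:
    if confidence >= AUTONOMY_THRESHOLD:
        return f"execute: {action}"
    # Below the threshold, ask the user instead of guessing.
    return f"ask user before: {action}"

print(gate("apply lint autofix", 0.95))   # high confidence: runs autonomously
print(gate("rewrite auth module", 0.4))   # weak evidence: asks first
```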

        So drift is real, but it's bounded by decay + pruning + manual overrides. If you keep making the same mistake, though, yeah, it'll learn that too. That's the honest tradeoff.