3 points by wkyleg 5 hours ago | 4 comments
  • AndyNemmity 4 hours ago
    I don't think there are reasonable metrics.

    I have a custom learning system. We are all trying things; that's where AI development is right now.

    None of us know the best solution; we are all exploring different paths. I don't find memory and persistent long-term context to be an issue for me, but I am using a fully custom AI Claude Code setup, so perhaps I have sorted it out for myself. Unsure.

    Can you give a specific example? Like, talk through your workflow so I can understand it better?

  • kageroumado 4 hours ago
    For my personal use case, I use something that's known as “lossless context management.” I made a custom harness implementation that uses it. In short, it has a database with every message ever exchanged, and the model can access any of those messages using a simple search. On top of that, every exchange is summarized and stored separately as a level zero summary. Level zero summaries are then periodically summarized together into level one summaries that leave only the most important parts (lessons, knowledge).
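    A minimal sketch of the store described above, assuming hypothetical names and a stubbed-out summarizer (the real harness would call a model for summaries and use a proper database and search index):

```python
# Sketch of a message store with level-0 / level-1 summary rollups.
# All names and the rollup threshold are hypothetical illustrations.
from dataclasses import dataclass, field

ROLLUP_SIZE = 4  # hypothetical: how many level-0 summaries merge into one level-1


@dataclass
class ContextStore:
    messages: list = field(default_factory=list)  # every message ever exchanged
    level0: list = field(default_factory=list)    # one summary per exchange
    level1: list = field(default_factory=list)    # periodic rollups of level-0

    def record_exchange(self, user_msg: str, assistant_msg: str, summarize) -> None:
        """Store the raw exchange, summarize it, and roll up when due."""
        self.messages.extend([user_msg, assistant_msg])
        self.level0.append(summarize([user_msg, assistant_msg]))
        # Periodically compress the oldest level-0 summaries into a level-1 summary,
        # keeping only the most recent level-0 summaries in expanded form.
        if len(self.level0) >= 2 * ROLLUP_SIZE:
            oldest, self.level0 = self.level0[:ROLLUP_SIZE], self.level0[ROLLUP_SIZE:]
            self.level1.append(summarize(oldest))

    def search(self, term: str) -> list:
        """Simple substring lookup over the full message history."""
        return [m for m in self.messages if term.lower() in m.lower()]
```

    In a real implementation the `summarize` callable would be a model prompt asking for "only the most important parts (lessons, knowledge)", and `search` would be the tool the model invokes to recover any original message.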

    The full context then looks something like: [intro prompt] + [old exchanges' lvl 1 summaries] + [larger system prompt] + [more recent exchanges' lvl 0 summaries] + [temporal context] + [recent messages with tool results stripped] + [recent messages including tool results]
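    As a sketch, assembling those layers is just concatenation in a fixed order (layer names here are placeholders, not the commenter's actual code):

```python
# Sketch of assembling the layered prompt, oldest-to-newest as described.
def build_context(intro, lvl1_summaries, system_prompt, lvl0_summaries,
                  temporal, mid_msgs_no_tools, recent_msgs_with_tools):
    """Concatenate the context layers in the order given in the comment."""
    parts = (
        [intro]
        + lvl1_summaries          # oldest history, most compressed
        + [system_prompt]
        + lvl0_summaries          # more recent exchanges, lightly compressed
        + [temporal]              # e.g. current date/time
        + mid_msgs_no_tools       # recent messages, tool results stripped
        + recent_msgs_with_tools  # newest messages, tool results intact
    )
    return "\n\n".join(parts)
```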

    Tool results are progressively stripped because they are generally useful for only a few turns. This allows us to keep everything we've ever done in the context, and the model can easily look up more information by expanding each node. It's a single perpetual session that never compacts during active work.
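    One way to sketch the progressive stripping, assuming a hypothetical message shape and turn-based cutoff (the actual harness may key this differently):

```python
# Sketch: drop tool results from messages older than a turn cutoff,
# keeping the message text itself. The TTL and message shape are hypothetical.
TOOL_RESULT_TTL = 5  # turns after which a tool result is considered stale


def strip_old_tool_results(messages, current_turn, ttl=TOOL_RESULT_TTL):
    """Return a copy of messages with tool results removed from old turns."""
    out = []
    for msg in messages:
        if current_turn - msg["turn"] > ttl and "tool_result" in msg:
            # Copy without the tool_result key; leave the original untouched.
            msg = {k: v for k, v in msg.items() if k != "tool_result"}
        out.append(msg)
    return out
```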

    I find it outperforms every other solution I have tried for my use case (a personal assistant).
