29 points by aciccarelli 25 hours ago | 5 comments
  • aciccarelli 25 hours ago
    I hit this while pasting ~8K of DOM markup into Claude Code and iterating on selectors for a Chrome extension. About 40 minutes in, compaction fired and the summary said "user provided DOM markup" but the actual content was gone. Claude started guessing at selectors it had seen 20 minutes earlier.

    The transcript with the original markup is still sitting at ~/.claude/projects/ as a .jsonl file — the compaction summary just has no pointer back to it.

    I found 8+ open issues on the repo describing different symptoms of this same root cause. The proposal is to add line-range annotations to compaction summaries so Claude can surgically recover just the chunk it needs from the transcript on demand. Zero standing token overhead.

    Curious if others have hit this in different scenarios or found workarounds that actually stick.
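
    To make the proposal concrete, here's a rough sketch of the recovery step. The line-range annotation format is hypothetical (nothing like it exists in Claude Code today); the ~/.claude/projects/ .jsonl transcript location is the real one mentioned above:

    ```python
    # Hypothetical sketch: given a line-range annotation in a compaction
    # summary, e.g. {"source_lines": [120, 141]}, pull those lines back out
    # of the original .jsonl transcript. The annotation format is made up;
    # the transcript location is the one Claude Code writes today.
    import json
    from pathlib import Path

    def recover_chunk(transcript: Path, start: int, end: int) -> list[dict]:
        """Return the parsed transcript records on lines start..end (1-indexed)."""
        records = []
        with transcript.open() as f:
            for lineno, line in enumerate(f, start=1):
                if lineno > end:
                    break
                if lineno >= start:
                    records.append(json.loads(line))
        return records

    # Usage (session filename is illustrative):
    # recover_chunk(Path.home() / ".claude/projects/my-project/session.jsonl", 120, 141)
    ```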

    • maxbond 3 hours ago
      May I suggest pasting those 8k lines into a file instead of into the prompt?
    • martinald 4 hours ago
      I'm fairly sure that Claude (somewhat recently) started adding a note to the compaction summary saying where it can find the original transcript.

      Fwiw, I built a little CLI that could help with this: https://github.com/martinalderson/claude-log-cli. It lets Claude search its own logs very efficiently. You could add something like "if the session is continued from a previous one, use claude-log to find the user's original prompt," and it would pull it out cheaply. I built it to enable self-improving CLAUDE.md files (link to the blog post is in the GitHub repo), but it's useful for many other tasks too.
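
      For anyone who wants the gist without the dependency: the transcripts are just .jsonl files, so a crude version of the search fits in a few lines. This is my own sketch, not claude-log-cli's actual implementation, and it assumes only the ~/.claude/projects/ layout mentioned upthread:

      ```python
      # Crude sketch of searching Claude Code's own session logs, in the
      # spirit of claude-log-cli (not its actual code). Assumes only the
      # ~/.claude/projects/**/*.jsonl transcript layout mentioned upthread.
      import sys
      from pathlib import Path

      def search_logs(needle: str, root: Path = Path.home() / ".claude" / "projects"):
          for transcript in sorted(root.rglob("*.jsonl")):
              with transcript.open() as f:
                  for lineno, line in enumerate(f, start=1):
                      if needle.lower() in line.lower():
                          # Print file:line so a follow-up read can grab the full record.
                          print(f"{transcript}:{lineno}")

      if __name__ == "__main__":
          search_logs(sys.argv[1])
      ```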

      • antinomicus 4 hours ago
        My Claude Code started doing the compaction-summary link thing but then stopped pretty soon after.
        • martinald 3 hours ago
          Yes, I think the same here tbh; hard to keep up with the changes.
    • selridge 3 hours ago
      Don't treat chat as anything other than ephemeral. That's the only workaround.
  • selridge 3 hours ago
    Don't treat chat as a contracted space. It's inhalation and exhaust.

    Store things you care about on disk!

  • CjHuber 3 hours ago
    I honestly still don't see the point of compaction. I mean, it would be great if it worked, but I do my best to minimize any potential for hallucination, and a lossy summary is the most counterproductive thing for that.

    If you have it write down every important piece of information and finding alongside a plan that it keeps updated, why would you even want compaction instead of just starting a blank session that reads that md?

    I'm kind of surprised that anyone even thinks compaction is currently useful at all. I'm working on something that tries to achieve lossless compaction, but that is incredibly expensive: the process needs around 5 to 10 times as many tokens to compact as the conversation it is compacting.

    • martinald 3 hours ago
      Well, a few things.

      Firstly, it's very useful to keep your (or at least some) previous messages in context. There's often a lot of nuance it can pick up. This is probably the main benefit - there are often tiny tidbits in your prompts that don't get written to plans.

      Secondly, it can keep e.g. long-running background bash commands "going" and know what they are. This is very useful when diagnosing problems with a lot of tedious log prepping/debugging (no real reason these couldn't be moved to a new session tho).

      I think better models are much better at joining the dots after compaction. I'd have agreed with you a few months ago that compaction is nearly always useless, but lately I've actually found it pretty good (I'm sure harness changes have helped as well).

      Obviously if you have a totally fresh task to do, then start a new session. But I do find it helpful on a task that is just about finished but ran out of space, OR when you've got some hellish bug that requires a bunch of detective work, where compacting is preferable to a new session.

      • CjHuber 3 hours ago
        I mean, I agree that the last couple of messages in a rolling window are good to include, but that's not really most of what happens in compaction, right?

        > there are often tiny tidbits in your prompts that don't get written to plans.

        Then the prompt describing what should be written down isn't good enough. I don't see how those tidbits would survive any compaction attempt if the LLM won't even write them down when prompted.

        > Secondly, it can keep e.g. long-running background bash commands "going" and know what they are. This is very useful when diagnosing problems with a lot of tedious log prepping/debugging (no real reason these couldn't be moved to a new session tho).

        I can't really say anything about that, because I've never had the issue of background commands exhausting the context window when started in a fresh session.

        I agree they are better now, probably because they have been trained on continuing after compaction, but I still wonder if I'm the only one who doesn't like compaction at all. It's just so much easier for an LLM to hallucinate when it has some lossy information instead of no information at all.

    • peacebeard 3 hours ago
      Works fine for me in sessions that use a lot of context. My workflow is to keep an eye on the % that shows how soon it will auto-compact, and either /clear and start over, or manually compact at a convenient point where I know it'll be effective.
      • grimgrin 2 hours ago
        i use https://github.com/sirmalloc/ccstatusline and when i'm around 100k tokens i'm already thinking about summarizing where we're at in the work so i can start fresh with it

        it is pretty rare for me to compact, even if i let it run to 160k

        --

        just realized i wouldn't have thought to use ccstatusline based on a quick glance at its README's images. it looks like this for me:

        https://i.imgur.com/wykNldY.png

  • frk_ai_8b2e 4 hours ago
    [flagged]
  • behnamoh 3 hours ago
    [flagged]
    • SatvikBeri 3 hours ago
      It's interesting that you find compaction trivial. I think it's one of the most important tasks, to the point where I use Amp these days because its "handoff" feature is so much nicer than CC's compaction.
      • handfuloflight 2 hours ago
        What's nicer about their handoff feature?
        • SatvikBeri an hour ago
          It's a lot of little things + polish more than any big change.

          Amp has a first-class concept of threads: a tree of sessions. This is really nice for long work on related features because it tracks which threads are spawned from others. When you type /handoff, it asks you for a goal for the new thread, then summarizes your existing context with respect to that goal, and opens a new thread with just that context.

          This makes it really easy and pleasant to spin up new sessions to do relatively focused tasks, which keeps your context usage low and the model smarter. It also enables some really nice use cases like opening up an old thread where you built a feature two weeks ago, then spinning up a new one to do a modification.

          You can do all of this with Claude Code, but it's just clunkier and in my experience hasn't worked nearly as well; e.g. I find the compactions tend to be full of useless stuff, or miss something important, compared to the handoffs.

          • behnamoh 34 minutes ago
            Is it similar to the "branch off from here" feature in some chat UIs, where you can continue one convo in different directions? And does Amp keep each thread in a separate worktree/isolated env and let you choose which one to merge?
    • g-mork 3 hours ago
      How do you make CC talk via a proxy? I had a few googles for this and got nowhere.
      • behnamoh 2 hours ago
        Set the Anthropic base URL in CC to your proxy server and map each model to your preferred models (I keep opus↔opus, but technically you can do opus↔gpt-5.3, etc.). Then check the incoming messages for the string that triggers compaction (it's a system prompt, btw) and modify that message before it hits the LLM server.
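
        Rough sketch of that interception layer, for the curious. The COMPACTION_MARKER string is a placeholder (sniff real traffic to find the actual prompt text), and it buffers full responses rather than streaming, so treat it as a starting point:

        ```python
        # Minimal pass-through proxy that rewrites the compaction system prompt
        # before forwarding to Anthropic. Point Claude Code at it with
        # ANTHROPIC_BASE_URL=http://localhost:8089. COMPACTION_MARKER is a
        # placeholder; inspect real traffic for the actual trigger string.
        # Buffers full responses (no SSE streaming); error handling omitted.
        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import Request, urlopen

        UPSTREAM = "https://api.anthropic.com"
        COMPACTION_MARKER = "summarize the conversation"  # placeholder

        class Proxy(BaseHTTPRequestHandler):
            def do_POST(self):
                body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
                try:
                    payload = json.loads(body)
                    # CC may send `system` as a list of blocks; the plain-string
                    # case is shown here for brevity.
                    system = payload.get("system")
                    if isinstance(system, str) and COMPACTION_MARKER in system.lower():
                        # Swap in your own compaction instructions here.
                        payload["system"] = system + "\nPreserve verbatim user-pasted content."
                        body = json.dumps(payload).encode()
                except (json.JSONDecodeError, AttributeError):
                    pass  # not a JSON object body; forward untouched

                headers = {k: v for k, v in self.headers.items()
                           if k.lower() not in ("host", "content-length", "accept-encoding")}
                headers["Content-Length"] = str(len(body))
                req = Request(UPSTREAM + self.path, data=body, headers=headers, method="POST")
                with urlopen(req) as resp:
                    data = resp.read()
                    self.send_response(resp.status)
                    self.send_header("Content-Type", resp.headers.get("Content-Type", "application/json"))
                    self.send_header("Content-Length", str(len(data)))
                    self.end_headers()
                    self.wfile.write(data)

        HTTPServer(("localhost", 8089), Proxy).serve_forever()
        ```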
      • tyre 2 hours ago
        Have you tried asking CC to build something that does it? I'm guessing it could.