54 points by jenic_ 4 days ago | 6 comments
  • marginalia_nu3 days ago
    This is methodologically flawed, as bytes only weakly correlate with tokens.

    Unless you're sending identical requests, you can't expect the same token count for any given number of bytes, nor that a slightly longer (but different) message will produce more tokens than a slightly shorter one, or vice versa.
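
    A toy illustration of the point (the real BPE tokenizer Anthropic uses is not public, so a naive whitespace "tokenizer" stands in here): two payloads with identical byte counts can tokenize to very different token counts.

    ```python
    # Illustrative only: a naive whitespace tokenizer stands in for a real
    # BPE tokenizer. Equal byte counts, very different token counts.
    def naive_token_count(text: str) -> int:
        return len(text.split())

    a = "a b c d e f g h"   # 15 bytes, many short tokens
    b = "abcdefghijklmno"   # 15 bytes, one long token

    assert len(a.encode()) == len(b.encode()) == 15
    print(naive_token_count(a), naive_token_count(b))  # 8 vs 1
    ```

    A real subword tokenizer shows the same effect in milder form: token counts depend on vocabulary hits, not byte length, so bytes are only a rough proxy.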

    • Bolwin3 days ago
      > The numbers came from the same project and the same prompt across versions.

      I'm pretty sure the tester checked. If the request format is the same (it is, since it uses Anthropic's stable public API) and the prompt/messages are the same, then bytes will correlate pretty well with tokens.

      • marginalia_nu2 days ago
        The prompt may be the same, but the project context will surely have changed. The user prompt itself is unlikely to be ~200KB.
  • a_c3 days ago
    I had the same suspicion, so I made this to examine where my tokens went.

    Claude Code caches a big chunk of context (all messages of the current session). While a lot of data goes over the network, in ccaudit itself 98% of the context comes from the cache.

    Granted, to see the actual system prompt used by Claude, one can only inspect the network requests. Otherwise the best guess is the token usage in the first exchange with Claude.

    https://github.com/kmcheung12/ccaudit
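
    A hedged sketch of how that 98% figure can be derived: Anthropic's Messages API reports usage fields including `cache_read_input_tokens` and `cache_creation_input_tokens` alongside `input_tokens`, so the cached share of input is straightforward arithmetic over a response's usage object (the sample numbers below are made up for illustration).

    ```python
    # Sketch: fraction of input tokens served from the prompt cache,
    # computed from the usage fields of an Anthropic Messages API response.
    def cache_share(usage: dict) -> float:
        cached = usage.get("cache_read_input_tokens", 0)
        total = (cached
                 + usage.get("cache_creation_input_tokens", 0)
                 + usage.get("input_tokens", 0))
        return cached / total if total else 0.0

    # Hypothetical usage numbers, roughly matching the 98% observation above.
    usage = {"input_tokens": 1200,
             "cache_creation_input_tokens": 800,
             "cache_read_input_tokens": 98000}
    print(f"{cache_share(usage):.0%}")  # 98%
    ```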

  • tencentshill3 days ago
    On the free plan, I hit the limit instantly by uploading one 45 KB PDF and a single prompt. Even for a free plan, I expect a bit more than that. Oh well, local models can be pushed to do what I need.
  • simianwords3 days ago
    I don’t buy it. The same problem was reported in Claude.ai at the same time, which points to the same underlying root cause.
  • F7F7F73 days ago
    What is the system prompt for $1000 Alex (RIP)?