18 points by meetpateltech 3 hours ago | 4 comments
  • convenwis 37 minutes ago
    Is there a writeup anywhere on what this means for effective context? I think many of us have found that even when the context window was 100k tokens, the actual usable window was smaller than that: as you got closer to 100k, performance degraded substantially. I'm assuming that is still true, but what does the curve look like?
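
    A minimal sketch of the kind of needle-in-a-haystack probe that would produce that curve, assuming the standard Anthropic Python SDK; the model id is a placeholder:

      # Plant a known fact ("needle") at varying depths in filler text and check
      # whether the model retrieves it; recall vs. depth (and vs. total length)
      # is the degradation curve.
      import anthropic

      client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

      NEEDLE = "The magic number is 40317."
      FILLER = "The sky was clear and the market stayed quiet. " * 8000  # roughly 80k tokens

      def recalled(depth: float) -> bool:
          cut = int(len(FILLER) * depth)
          haystack = FILLER[:cut] + NEEDLE + FILLER[cut:]
          resp = client.messages.create(
              model="claude-opus-4-5",  # placeholder model id
              max_tokens=20,
              messages=[{
                  "role": "user",
                  "content": haystack + "\n\nWhat is the magic number? Digits only.",
              }],
          )
          return "40317" in resp.content[0].text

      for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
          print(f"needle at {depth:.0%} depth: {'recalled' if recalled(depth) else 'missed'}")
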
    • minimaxir 34 minutes ago
      The benchmark charts provided are the writeup. Everything else is just anecdata.
  • minimaxir 40 minutes ago
    Claude Code 2.1.75 no longer delineates between base Opus and 1M Opus: it's the same model. Oddly, I'm on Pro, where the change is supposedly only for Max+, but I'm still seeing this to be the case.

    The removal of the extra pricing beyond 200k tokens may be Anthropic's salvo in the agent wars against GPT 5.4's 1M window and its long-context surcharge.

  • dimitri-vs an hour ago
    The big change here is:

    > Standard pricing now applies across the full 1M window for both models, with no long-context premium. Media limits expand to 600 images or PDF pages.

    For Claude Code users this is huge - assuming coherence remains strong past 200k tokens.
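
    If opting in from the API works the way it did for Sonnet 4's long-context beta, the call would look roughly like the sketch below; the beta flag shown is the one documented for Sonnet 4, the model id is a placeholder, and whether Opus still requires any flag after this change is an assumption:

      import anthropic

      client = anthropic.Anthropic()

      response = client.beta.messages.create(
          model="claude-opus-4-5",          # placeholder model id
          max_tokens=1024,
          betas=["context-1m-2025-08-07"],  # long-context beta flag used for Sonnet 4
          messages=[{"role": "user", "content": "Summarize this repo dump: ..."}],
      )
      print(response.content[0].text)
      print(response.usage.input_tokens)    # confirm the request actually went past 200k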
