75 points by meetpateltech 5 hours ago | 9 comments
  • devinprater 4 hours ago
    I'm glad I have ChatGPT to turn that image with benchmarks into an accessible table, lol. I like Claude Code, but their accessibility in anything other than the accidental CLI accessibility is frustrating. Try it: load a screen reader like VoiceOver for Mac (since I know most programmers use Macs) and go to claude.ai. In the "Write your prompt to Claude" box, type something like "What will the weather be like tomorrow?" and press Enter/Return. Then close your eyes for a good 30 seconds, and within those 30 seconds, tell me how you'd know the model has replied. Then try the same thing with ChatGPT. I would /love/ to be proven wrong.
    • edding360 3 hours ago
      Thanks for sharing! Just tried it for the first time... Anthropic should really do better.
  • dchuk 4 hours ago
    Curious if the 1M context window will be available by default in Claude Code. If so, that's a pretty big deal: "Sonnet 4.6’s 1M token context window is enough to hold entire codebases, lengthy contracts, or dozens of research papers in a single request. More importantly, Sonnet 4.6 reasons effectively across all that context."
    • pkaye 4 hours ago
      Above a 200k-token context they charge a premium. I think it's $10 per million input tokens.
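      If the premium works the way described, the input cost of a long request can be sketched like this. Only the $10/M-above-200k figure comes from the comment above; the base rate and the exact tier structure are assumptions for illustration, not Anthropic's confirmed pricing.

```python
# Hypothetical long-context input pricing: a base rate for the first
# 200k tokens, plus a premium rate for every input token beyond that.
# BASE_RATE is an assumed figure; PREMIUM_RATE is the $10/M quoted above.

BASE_RATE = 3.00        # $ per million input tokens (assumed)
PREMIUM_RATE = 10.00    # $ per million input tokens above the threshold
PREMIUM_THRESHOLD = 200_000

def input_cost(n_tokens: int) -> float:
    """Dollar cost of n_tokens of input under this two-tier scheme."""
    premium_tokens = max(0, n_tokens - PREMIUM_THRESHOLD)
    base_tokens = n_tokens - premium_tokens
    return (base_tokens * BASE_RATE + premium_tokens * PREMIUM_RATE) / 1_000_000

# A full 1M-token request: 200k tokens at the base rate, 800k at the premium.
print(round(input_cost(1_000_000), 2))  # 8.6
```

      Under these assumed numbers, filling the whole 1M window costs several times what the same request would cost at the base rate alone, which is why the premium matters for "hold the entire codebase in context" workflows.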
      • _ink_ 4 hours ago
        Interesting. Is it because they can, or is it really more expensive for them to process a bigger context?
        • pkaye 4 hours ago
          I've read that compute costs for LLMs scale as O(n^2) with context window size. But I think it's also a combination of limited compute availability, users' preference for Anthropic models, and Anthropic planning an IPO.
        • cube2222 4 hours ago
          Attention is, at its core, quadratic with respect to context length, so I'd believe that to be the case, yeah.
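          As a back-of-the-envelope illustration of that quadratic scaling: self-attention compares every token with every other token, so the score matrix for n tokens has n × n entries. The function below is a toy FLOP estimate, not a real cost model, and the head dimension is an arbitrary illustrative constant.

```python
# Why self-attention is quadratic in context length: computing the
# n x n score matrix Q @ K^T takes roughly n * n * d multiply-adds
# for n tokens and head dimension d, and the matrix itself holds
# n * n entries -- both grow as O(n^2).

def attention_score_flops(n_tokens: int, head_dim: int = 128) -> int:
    """Rough multiply-add count for one head's attention score matrix."""
    return n_tokens * n_tokens * head_dim

# Doubling the context from 200k to 400k tokens quadruples the work:
base = attention_score_flops(200_000)
doubled = attention_score_flops(400_000)
print(doubled / base)  # 4.0
```

          That 4x-work-for-2x-context relationship is consistent with charging a premium on tokens past a threshold, though in practice serving cost also depends on KV-cache memory, batching, and attention optimizations.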
  • a_void_sky 5 hours ago
    Opus 4.6 but cheaper
  • deanc 4 hours ago
    I really don't get these companies posting disingenuous benchmarks. Every time, they pick and choose who to compare against. Not comparing to the latest gpt-5.3-codex is absurd when it's been out for a couple of weeks now. Who are they trying to kid?
    • falloon 4 hours ago
      If you were writing a promotional post for your new model, would you include benchmarks of a competitor that's spanking you across the board? This is marketing.
    • AdamConwayIE 4 hours ago
      There aren't really any of the typical benchmark suites targeting Codex 5.3 because it's still not in the API.

      SWE-bench, for example, creates a predictions file and evaluates the results in its harness. Without Codex 5.3 in the API, it can't.

    • tomlis 2 hours ago
      gpt-5.3-codex isn't available via the API yet. Pretty sure they were only testing models accessible via the API.
    • rvz 4 hours ago
      > Who are they trying to kid?

      People who do not know how reproducible research works.

      Any benchmark presented by an AI lab must be reliably reproducible by someone independent of the lab presenting the results.

      Otherwise, not only is it biased, but the numbers could simply be made up for marketing purposes.

  • rishabhaiover 4 hours ago
    I'm not seeing it in Claude Code yet.
  • mudkipdev 4 hours ago
    What happened to Sonnet 5?
    • meetpateltech 4 hours ago
      They're probably saving 5 for a bigger leap.
    • hxugufjfjf 4 hours ago
      Those hours that with gentle work did frame
      The lovely gaze where every eye doth dwell,
      Will play the tyrants to the very same
      And that unfair which fairly doth excel:
  • cube2222 4 hours ago
    So, tl;dr, it seems like it's:

    - a reasonable improvement over Sonnet 4.5, esp. with agentic tool use

    - generally worse than Opus 4.6

    Probably not worth it for coding, but a win for anybody building agentic AI assistants of any sort with Sonnet.

    • Handy-Man 4 hours ago
      It's similar to or better than Opus 4.5 per the benchmarks, while being 2x-3x cheaper. Definitely worth it over Opus 4.6 if cost per token is the concern.

      As a reminder, Opus 4.5 was SOTA 2-3 weeks ago.

      • adastra22 4 hours ago
        Yes, but Opus 4.6 is a massive step up. Some applications don't need that power, though.
  • rvz 4 hours ago
    Anthropic is again running scared of the open-weight models, which are rapidly catching up to them. Neither Sonnet nor Opus is going to help with that at all.

    It has already happened with the music-gen models. It's only a matter of time before the open-weight models overtake Anthropic.

    Expect them to dial up the scaremongering until they IPO. The Claude family of models is their only AI product keeping them alive.

    • throwup238 4 hours ago
      What are the latest open music models?
    • catigula 4 hours ago
      Chinese companies distilling frontier models is certainly a crisis, but it isn't one that implies said Chinese companies are anywhere in the 'race'.
      • bigyabai 4 hours ago
        The "race" matters less than making money. If those Chinese models perform well in price/performance, AGI might as well pound sand.