3 points by cadabrabra 8 hours ago | 2 comments
  • minimaxir 8 hours ago
    > Anthropic is claiming that Claude 5 Sonnet will cost about half as much as their current SOTA models. Therefore, expect about half the performance.

    That's not how LLM quality works.

    • cadabrabra 8 hours ago
      Maybe not in theory, but definitely in practice, as we’ve seen with GPT-5. These companies are lighting money on fire. If they reduce the cost, expect a proportional decrease in quality. All of the GPT-5 anecdotes confirm this. When the data and anecdotes disagree, the anecdotes are usually right, and the data is usually bullshit.
      • minimaxir 8 hours ago
        GPT-5's issues were due to router shenanigans which Claude models do not do.
        • cadabrabra 8 hours ago
          No dude, the latest versions of the models it routes to are markedly poorer in performance than their predecessors.

          I’m coining a law: model performance tracks cost directly, so whenever a company claims to have reduced inference costs, customers immediately notice a corresponding decline in model performance.

  • bigyabai 8 hours ago
    > It’s an illusion, folk. You’re being played.

    How are they "being played" if Claude 5 isn't even out yet

    • cadabrabra 8 hours ago
      It’s already obvious that it will be a scam. Higher benchmark scores and lower cost are two signs that customers are about to get scammed. We saw it with GPT-5.
      • Redster 8 hours ago
        Respectfully,

        Claude 3 Opus: $15.00 (Input) / $75.00 (Output) per 1M tokens

        Claude 4 Opus: $15.00 (Input) / $75.00 (Output) per 1M tokens

        Claude 4.1 Opus: $15.00 (Input) / $75.00 (Output) per 1M tokens

        Claude 4.5 Opus: $5.00 (Input) / $25.00 (Output) per 1M tokens

        • cadabrabra 8 hours ago
          This actually proves my point: if you read the anecdotes, you’ll notice a marked decline in performance. The version number goes up but the actual performance goes down. The benchmarks can tell any story you want them to.
      • bigyabai 8 hours ago
        Is it? It might turn out to be a scam, but for something to be "obvious" it has to be released first.

        There are plenty of ways to reduce inference cost for a high-intelligence model. Sparse architectures like mixture-of-experts, for example, can increase the total parameter count while reducing inference cost and latency, because only a fraction of the weights are active for any given token.
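        The sparsity tradeoff can be sketched in a few lines. This is a toy illustration (all names and sizes invented here, not any real model's architecture): a mixture-of-experts layer routes each input to only `top_k` of `n_experts`, so the compute per token scales with the *active* parameters rather than the total.

        ```python
        import numpy as np

        # Toy mixture-of-experts layer: many experts exist, but each input
        # only touches top_k of them, so per-token compute is a fraction of
        # the total parameter count.
        rng = np.random.default_rng(0)
        d, n_experts, top_k = 8, 16, 2
        experts = rng.standard_normal((n_experts, d, d))  # total params: 16 * 8 * 8 = 1024
        gate = rng.standard_normal((d, n_experts))        # router weights

        def moe_forward(x):
            scores = x @ gate                        # one score per expert
            active = np.argsort(scores)[-top_k:]     # keep only the top_k experts
            weights = np.exp(scores[active])
            weights /= weights.sum()                 # softmax over the active experts
            return sum(w * (x @ experts[i]) for w, i in zip(weights, active))

        total_params = experts.size        # 1024
        active_params = top_k * d * d      # 128: only 1/8 of the weights do work per token
        ```

        A model built this way can post a bigger headline parameter count while charging less per token, which is one mundane (non-scam) way benchmarks and prices can move in opposite directions.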

        • cadabrabra 8 hours ago
          I get what you’re saying, but I still think it will be a scam. Bookmark this thread and let’s continue the conversation after it’s released.
          • bigyabai 8 hours ago
            I think you are driven more by an emotional interest than a technical one here. You've written several posts like this, and many of them make astronomically unlikely predictions.
            • cadabrabra 8 hours ago
              OK, but didn’t Karpathy make it clear that we live in the vibe era? I’m inclined to trust vibes more than technical jargon, and boy are the vibes off with what’s been happening!

              Let’s see what happens :)