9 points by cyp0633 10 hours ago | 4 comments
  • ggm 10 hours ago
    Breakthrough is marketing. Come back with some peer review and in the meantime I'm internally translating this as an incremental improvement like most things these last 40 years or more.

    The tables of scores strongly speak to increments.

    [Edit: it's what the original article says. Not the OP's fault]

    • cyp0633 10 hours ago
      This is my direct translation from the subtitle of the Chinese article. Apologies if there's any inaccuracy.
      • ggm 10 hours ago
        I should have said it's the original article's fault, not yours.
  • ne0phyt3 2 hours ago
    Is it the LLM's weights or the training data that's important and confidential?
  • cyp0633 10 hours ago
    No translation yet
  • SilverElfin 10 hours ago
    Some people have claimed that LLMs that aren't from the big foundational model providers (OpenAI, Anthropic, Gemini) are basically gaming benchmarks to get great results. Does anyone know if that's actually true? I don't understand this entire post, but from the tables of benchmark scores it seems like this model performs well across a large variety of tasks. It feels to me like the diversity of benchmarks means it's not just something built to game a single benchmark, right?
    • viraptor 5 hours ago
      Why not just check on your real tasks? I'm quite happy with the k2.5 and glm5 performance in practice. Whether they also gamed the benchmarks is not as relevant.