152 points by kmdupree 5 hours ago | 34 comments
  • ofirpress 29 minutes ago
    I'm a co-creator of SWE-bench:

    1. SWE-bench Verified is now saturated at 93.9% (congrats Anthropic), but anyone who hasn't reached that number yet still has more room for growth.

    2. SWE-bench Multilingual and SWE-bench Multimodal (which we'll open source in the next month) are still unsaturated.

    3. All benchmarks and benchmark paradigms eventually become saturated. That's why the SWE-bench team has worked hard on building the next stage of benchmarks, and we have a few that are already out, for example https://codeclash.ai/ or https://algotune.io/ . And we'll have more to say soon :)

    • energy123 25 minutes ago
      > 93.9% (congrats Anthropic)

      But the article says "We audited a 27.6% subset of the dataset that models often failed to solve [which is 19.1% of the problems at time of publication] and found that at least 59.4% of the audited problems have flawed test cases that reject functionally correct submissions"

      0.191 * 0.594 ≈ 0.113, which is well above 1 - 0.939 = 0.061

      Does this mean that the audited subset wasn't representative? Or that Anthropic is getting high scores through some shady means?

      • cjsaltlake 22 minutes ago
        I suggest reading the Mythos report's discussion on SWE-bench and contamination. I think it's fairly convincing that you can account for contamination and still trust SWE-bench numbers on models that aren't over-optimized for it.
    • Bombthecat 20 minutes ago
      Both of them look pretty old?
      • cjsaltlake 19 minutes ago
        CodeClash, I think, would be quite hard to game or contaminate unintentionally, considering that models need to compete against one another.
        • Bombthecat 8 minutes ago
          I mean the data / benchmarks
  • Jcampuzano2 4 hours ago
    It's pretty clear that any benchmark that comes out will be outdated and end up in the training data in short order. There will always be an incentive to optimize specifically for these benchmarks, even if just for marketing material. Sure, there is a training cutoff, but it's usually only 3-6 months behind the public release dates.

    The problem with coding benchmarks then becomes creating novel benchmarks that are guaranteed not to already be in the training data and that don't borrow anything from previous benchmarks.

    In this regard I don't think any benchmark that was created before a given model is released should ever be considered valid or representative of model performance. The potential financial gain from including the data just to be able to market a minor improvement is too tempting. With that in mind, they should honestly just stop including benchmarks altogether in marketing material.

    Let the model speak for itself and let the community decide, but of course that will never fly with corporate types with so much money on the line.

    • mnky9800n 4 hours ago
      This is why I made Zork bench. Zork, the text adventure game, is in the training data for LLMs. It’s also deterministic. Therefore it should be easy for an LLM to play and complete. Yet they don’t. Understanding why is the goal of Zork bench.

      https://github.com/mnky9800n/zork-bench

      • kqr 3 hours ago
        I have worked on similar problems. See e.g. [1].

        The LLMs I have tested have terrible world models and intuitions for how actions change the environment. They're also not great at discerning and pursuing the right goals. They're like an infinitely patient five-year-old with an amazing vocabulary.

        [1]: https://entropicthoughts.com/updated-llm-benchmark

        (more descriptions available in earlier evaluations referenced from there)

        • malfist an hour ago
          I'm going to ignore all that and tell my developers working in complicated codebases that they have to use AI. I'm sure comprehending side effects in a world-building text adventure is completely different from understanding spaghetti code.
          • red75prime 8 minutes ago
            Desarcasmed version: "I think that problems with Zork make those models virtually useless in programming tasks." Correct?
        • mnky9800n 2 hours ago
          we should talk. i sent you an email.
      • WarmWash 3 hours ago
        The open models only give the SOTA models a run for their money on gameable benchmarks. On the semi-private ARC-AGI 2 sets they do absolutely awfully (<10%, while SOTA is at ~80%).

        It might be too expensive, but I would be interested in the benchmarks for the current crop of SOTA models.

        • roenxi 2 hours ago
          Have the open models been tried? When I look at the leaderboard [0], the only Qwen model I see is 235B-A22B. I wouldn't expect an MoE model to do particularly well; from what I've seen (thinking mainly of a leaderboard trying to measure EQ [1]), MoE models are at a distinct disadvantage to dense models on complex tasks that aren't software benchmark targets.

          [0] https://arcprize.org/leaderboard

          [1] https://eqbench.com/index.html

          • WarmWash 37 minutes ago
            There are GLM 5 and Kimi 2.5 (which gets 11.8%, but I digress).
      • CamperBob2 2 hours ago
        Actually the Zorks weren't deterministic, especially Zork II. The Wizard could F you over pretty badly if he appeared at an inopportune time.
    • cbg0 2 hours ago
      > let the community decide

      Which community are we talking about? The professionals with 10+ years of experience using LLMs, the vibe coders who have no experience writing code, or everyone in between? If you read some of the online communities, the experiences with the models are all over the place: some compare GPT 5.5 to the second coming of JC while others think it's stupider than 5.4.

      I personally don't have time to build a set of private benchmarks to compare the models that are coming out so I'm mostly relying on private and semi-private benchmarks to get a feel for how models are improving before I subscribe to a service and start using it myself. At least it's something a bit more reliable than the vibes of random people and bots on reddit.

    • WarmWash 3 hours ago
      An easy way to make coding benchmarks viable again is to initialize the models with 200k distracting or unrelated tokens in their context. Or even just run the tests sequentially in the same context and see how far the model gets before it unwinds.

      These benchmarks are always greenfield, but people want a model that can deal with a rotted context.
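
      A minimal sketch of that padding harness (pure Python; run_model is a stand-in for whatever function already executes the benchmark, and token counts are crudely approximated by whitespace splitting):

      ```python
      import random

      def pad_with_distractors(task_prompt: str, corpus: list[str],
                               target_tokens: int = 200_000) -> str:
          """Prefix a benchmark task with unrelated text until the context
          holds roughly target_tokens tokens (approximated by word count)."""
          distractors: list[str] = []
          count = 0
          while count < target_tokens:
              doc = random.choice(corpus)  # corpus: any unrelated documents
              distractors.append(doc)
              count += len(doc.split())
          return "\n\n".join(distractors) + "\n\n" + task_prompt

      # Usage: score the model on the padded task instead of the bare one.
      # padded = pad_with_distractors(task, unrelated_docs)
      # result = run_model(padded)  # run_model: your existing harness
      ```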

    • adamandsteve 2 hours ago
      "The community" is astroturfed as hell, though. Anthropic pays influencers to promote Claude Code and likely runs plenty of bots as well, so it's hard to come to any kind of consensus online. Even if everyone were acting in good faith, some people will have a much better experience than others because of the domain they're working in (e.g. AI being much better at frontend and commonly used libraries).

      The only real way to evaluate a model is to test it yourself but that's exhausting for each new model and not comprehensive anyway.

      • InsideOutSanta an hour ago
        Yeah, it's crazy that there is no trustworthy source for model reviews. I'd love to know how well the new Deepseek 4 actually performs, for example, but I don't want to spend the next week testing it out. Reddit used to be a somewhat useful gauge, but now there are posts about how 4 is useless right next to posts about how amazing it is. And I have no idea if this is astroturfing, or somebody using a quantized version, or different workloads, or what.

        I also find it increasingly difficult to evaluate the models I actually do use. Sometimes each new release seems identical or only marginally better than the previous version, but when I then go back two or three versions, I suddenly find the older model to be dramatically worse. But was that older model always that quality, or am I now being served a different model under the same version name?

        It's all just so opaque.

        • rhdunn an hour ago
          One challenge is that model evaluation is typically domain/application specific. Model performance can also depend on the system prompt and the input/context.

          Regarding evaluation, I've found tools like promptfoo (and in some cases custom tools built on top of it) useful. These help when evaluating new models/versions and when modifying the system prompt to guide the model, especially if you can define visualizations and assertions to accurately test what you are trying to achieve.

          This can be difficult for tasks like summarization, code generation, or creative writing that don't have clear answers. Still, having some basic evaluation metrics and test cases is useful, as is being able to easily do side-by-side comparisons by hand; a bare-bones sketch follows.
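
          For illustration, a bare-bones version of that kind of harness (promptfoo and real setups do far more; model_fn and these cases are hypothetical):

          ```python
          from typing import Callable

          # Each case pairs an input with an assertion over the model output.
          CASES: list[tuple[str, Callable[[str], bool]]] = [
              ("Summarize: The cat sat on the mat.", lambda out: len(out) < 200),
              ("Return valid JSON for: name=Ada", lambda out: out.strip().startswith("{")),
          ]

          def evaluate(model_fn: Callable[[str], str], system_prompt: str) -> float:
              """Run every case through the model and return the pass rate."""
              passed = sum(check(model_fn(system_prompt + "\n" + prompt))
                           for prompt, check in CASES)
              return passed / len(CASES)

          # Side-by-side comparison of two models or two system prompts:
          # print(evaluate(old_model, SYSTEM), evaluate(new_model, SYSTEM))
          ```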

    • AntiUSAbah an hour ago
      On the contrary: in an interview, someone from OpenAI said they try to avoid it because it makes it harder for them to determine whether a model is getting better or not.
    • jvuygbbkuurx 3 hours ago
      I think the solution is a bunch of private trusted benchmarks, and averaging their announced results.
    • Escapado 2 hours ago
      I agree with the sentiment, but if a sufficiently large number of sufficiently sophisticated benchmarks existed, I would be surprised if a model could memorize all of them while still showing terrible real-world performance. We are not there yet, but maybe one day we will be.
    • MattRix 3 hours ago
      They mention this in the article. This is why private (non public) benchmark tasks that have been made from scratch are necessary.
    • cyanydeez 3 hours ago
      a good benchmark would probably be porting a selected repo to another language, then, with context and notes cleared, having it port the result back.

      as long as there's a test framework, you could gauge success deterministically (rough sketch below).
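
      A rough sketch of that round-trip loop (port_repo stands in for the agent under test and is hypothetical; scoring just reuses the repo's own pytest suite):

      ```python
      import subprocess

      def tests_pass(repo_dir: str) -> bool:
          """Deterministic pass/fail: does the repo's own test suite pass?"""
          return subprocess.run(["pytest", "-q"], cwd=repo_dir).returncode == 0

      def round_trip_ok(repo_dir: str, port_repo) -> bool:
          """port_repo(src_dir, target_lang) -> new_dir is the agent under
          test, invoked twice with a fresh context each time."""
          assert tests_pass(repo_dir), "baseline suite must pass first"
          ported = port_repo(repo_dir, target_lang="go")   # e.g. Python -> Go
          back = port_repo(ported, target_lang="python")   # fresh context
          return tests_pass(back)  # success iff the original suite still passes
      ```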

  • cpard an hour ago
    Benchmarks/evals are really hard, and they become harder when there's a huge incentive to game them at industry scale.

    ELT-Bench is another recent example. It was the first serious attempt at a benchmark for data engineering workloads, published about a year ago.

    A few days ago, a follow-up paper from a group that includes one of the original authors audited the benchmark itself. The team found that the benchmark has structural issues that biased results.

    Here’s the paper: https://arxiv.org/abs/2603.29399

    None of this is new, though; the industry has gone through all of it before, just at a smaller scale, and there's a lot to learn from that. Here's a post I wrote on the parallels between today and the benchmarketing wars of the database systems era.

    https://www.typedef.ai/blog/from-benchmarketing-to-benchmaxx...

    • softwaredoug 38 minutes ago
      It’s just hard to make them not part of the training data. We see this a bit with BrowseComp plus and other deep research datasets. Not because frontier labs are trying to cheat, but just from training on the full web.

      You need new datasets perpetually.

      • cpard 6 minutes ago
        That's true. It also depends heavily on the type of task; not everything is equally represented on the web today, and it remains to be seen whether this is going to change.
      • stavros 25 minutes ago
        Or hidden benchmarks, though it's then harder to get people to trust the results.
        • cpard 4 minutes ago
          The trust issue might be solved by creating standardisation bodies, similar to the W3C or even TPC, although TPC didn't end that well.
    • fnordpiglet an hour ago
      Database benchmarks are another example.

      I have empirical experience, though, building classifiers whose precision can't be measured, because the classifier invariably performs better than humans. They become the state-of-the-art benchmark themselves and can't be benchmarked except against themselves. These are tasks that are nontrivial and complex, but less logical than coding and with less sustained reasoning. There may come a day when there is no calibrated benchmark that is independent of the models it's measuring.

    • operatingthetan 33 minutes ago
      Would creating new benchmarks every month solve this problem?
      • preciousoo 17 minutes ago
        Or create "blind" benchmarks.

        10 groups of 3 researchers, all with their own benchmarks that they do not share (running the tests without the authors knowing is a different problem; maybe they only run the benchmarks once the general population has access to the models).

        That's 10 different tests. Aggregate the pass rates (sketch below).
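
        Something like this for the aggregation step (numbers made up; each group reports only its pass rate, never its test set):

        ```python
        from statistics import mean, stdev

        def aggregate(reported: dict[str, float]) -> tuple[float, float]:
            """Mean pass rate across the private benchmarks, plus the
            spread, which hints at gaming or at an outlier test set."""
            rates = list(reported.values())
            return mean(rates), stdev(rates)

        score, spread = aggregate({"group-a": 0.62, "group-b": 0.58, "group-c": 0.71})
        ```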

  • kqr 3 hours ago
    It was never that great, it seems. For all of 2025 there was virtually no improvement in the rate at which models produced quality code. They only got better at passing automated tests.

    https://entropicthoughts.com/no-swe-bench-improvement

    • cjsaltlake 16 minutes ago
      But that's an enormous source of coding productivity, and it's why Anthropic is worth billions... The reason SWE-bench has been so successful and useful for coding is that software engineering has a ton of tradition and infrastructure for making and using automated tests.
    • civvv 42 minutes ago
      This is likely true. I think model quality has stagnated and that it's likely a non-trivial task to find a new improvement vector. Scaling the width of the model (which has been the driving force behind the speed of improvement thus far) seems to have reached its limit.

      It will be interesting to see the implications of this. Tooling can only do so much in the long term.

      • mxwsn 35 minutes ago
        How do you know that width scaling has been the driving force of improvement?
  • rustyhancock 3 hours ago
    I think an Olympiad format is better. But the financial incentive is such that it might be near impossible to stop leaks.

    I.e., a panel comes up with a series of problems.

    Like Advent of Code or Project Euler, but more complex and constrained.

    Benchmark outcomes could be performance points plus measures of cost and time to solution (well, token count really).

    A couple times per year it's run.

    It avoids overfitting.

    Over time the tasks can become more complex if needed.

    If they benchmax it into being able to complete full products from spec, with robust implementations, amazing.

    • cjsaltlake 15 minutes ago
      SWE-bench was created to replace olympiad coding benchmarks. I think past olympiad coding benchmarks were much less representative of real-world coding than something like SWE-bench, which is derived from real units of labor.

      Further, olympiad-style benchmarks are arguably easier to contaminate / memorize unless you refresh them regularly, but that goes for SWE-bench too.

  • threepts 4 hours ago
    Why don't they ask their premier model to generate a bench for them?

    Jokes aside, a benchmark I look forward to is ARC-AGI-3. I tried out their human simulation, and it feels very reasoning-heavy.

    Leaderboard: https://arcprize.org/leaderboard

    (Most premier models don't even pass 5 percent.)

    • falcor84 3 hours ago
      They focus on minimizing the number of moves and don't allow any harness whatsoever, putting the bar extremely high. The current top verified contender (Claude Opus 4.6) is at only 0.45%. But with how new it is, I expect a lot of improvement in the next generation of models.
      • threepts 3 hours ago
        Optimal for judging actual reasoning ability rather than an LLM's ability to regurgitate knowledge from a necropost on HN/Reddit/Twitter from 2018.
        • knollimar 3 hours ago
          a small harness that stores text files and manages context could be useful; otherwise you lose all ability to measure that skill (and that's important because it represents real-world use on large code bases)
    • sowbug 2 hours ago
      > Why don't they ask their premier model to generate a bench for them?

      It's not a crazy idea. Have the older model interview the newer one and then ask both (or maybe a third referee model) which one they think is smarter. Repeat 100x with different seeds. The percentage of times both sides agree the newer model won is the score.
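
      As a sketch (old_model, new_model, and judge are assumed to be plain str -> str wrappers around whichever APIs serve the three models):

      ```python
      from typing import Callable

      Model = Callable[[str], str]

      def tournament(old_model: Model, new_model: Model, judge: Model,
                     n_rounds: int = 100) -> float:
          """Fraction of rounds in which the judge scores the newer model
          as smarter; sampling noise in the models varies the questions."""
          wins = 0
          for _ in range(n_rounds):
              question = old_model("Ask one hard question that tests reasoning.")
              a, b = old_model(question), new_model(question)
              verdict = judge(f"Q: {question}\nA1: {a}\nA2: {b}\n"
                              "Which answer is smarter? Reply with 1 or 2.")
              wins += verdict.strip().startswith("2")
          return wins / n_rounds
      ```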

    • xtracto 2 hours ago
      Can AI write a problem so difficult that even AI cannot solve it?

      Hehe

    • alansaber 3 hours ago
      Very reasoning-heavy benchmarks do seem like the way to go, being the hardest to game.
  • eugenekolo 29 minutes ago
    Without SWE-bench, though, how will AI models properly game their results to show ~5-10% gains each iteration?

    Once a benchmark is known and there are billions of dollars on the line, obviously every company will game it.

  • vintagedave 4 hours ago
    > We audited a 27.6% subset of the dataset that models often failed to solve and found that at least 59.4% of the audited problems have flawed test cases that reject functionally correct submissions, despite our best efforts in improving on this in the initial creation of SWE-bench Verified.

    Is this saying a quarter* of the questions and answers were wrong, this whole time?!

    If so, how was this ever, in any way, a valid measurement?

    And what was the process for creating this benchmark and how did it end up with such an extraordinarily poor set of data? (There is a description later of how, which seems to be a high standard and I struggle to understand how it aligns with the other results they discuss.) Kudos to them for highlighting the issues, but I am left with questions.

    [*] Not one in four, but one in six, thanks commenters for the correction; leaving the original since, eh, my bad, and it lets replies make sense. I feel the broad point still stands!

    • embedding-shape 4 hours ago
      > Is this saying a quarter of the questions and answers were wrong, this whole time?!

      No, they're saying that 59.4% of the 27.6% audited subset had flawed test cases, I think.

      > If so, how was this ever, in any way, a valid measurement?

      Benchmarks essentially aren't, for practical purposes anyway. They don't represent your use case, and they don't represent any and all use cases; they're valid for measuring exactly what's included in the benchmarks, nothing more and nothing less.

      I don't understand the ecosystem's obsession with public benchmarks; they hardly ever tell you anything of value. OK, Qwen 3.5 is 50% better on Benchmark X than Qwen 2.5; does that mean it'll be 50% better for what you're using it for? Very unlikely.

      I've been running my own private benchmarks, with test cases I never share anywhere, for the specific problems I'm using LLMs for. Some are based on real, actual cases where an LLM went wrong and I had to adjust the prompt, and over time I've built up a suite.

      Most of the time when a new update to a model comes out, it moves maybe 2-3% in my own benchmarks, meanwhile they tout a 30-40% increase or something ridiculous in public benchmarks, and we're supposed to believe the models' training data isn't contaminated...

    • yorwba 29 minutes ago
      To be useful for identifying which model is better, benchmark scores only need to correlate with true performance, for which it's enough that the majority of tasks are scored correctly. You could have a terrible benchmark where 49% of the labels are wrong and a model that always answers correctly gets a score of 51%, but as long as it's higher than the always-wrong model at 49%, it's still directionally correct.

      Most machine-learning benchmarks have a fairly large fraction of incorrect labels, but when you just want to distinguish between different models, the time you'd need to ensure perfect scoring would usually be better spent on collecting a larger benchmark dataset, even if it ends up having more errors.
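
      That directional-correctness argument is easy to check with a toy simulation (binary-answer tasks assumed, so a wrong label is simply the flipped answer):

      ```python
      import random

      random.seed(0)
      ERR = 0.49     # fraction of benchmark labels that are wrong
      N = 100_000    # number of tasks

      def measured_score(true_accuracy: float) -> float:
          """Fraction of tasks where the model's answer matches the
          (possibly wrong) label: both right or both wrong counts."""
          hits = sum((random.random() < true_accuracy) == (random.random() > ERR)
                     for _ in range(N))
          return hits / N

      for acc in (0.0, 0.5, 0.9, 1.0):
          print(acc, round(measured_score(acc), 3))
      # Scores compress toward 0.5 but remain monotone in true accuracy,
      # so a better model still scores higher (~0.51 vs ~0.49 at the extremes).
      ```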

    • sillysaurusx 4 hours ago
      ImageNet is one of the most popular datasets on the planet. Turns out a significant fraction of its images are mislabeled. In the limit case, a model would have to fit toward wrong answers to get higher than a certain percentage.

      The answer is “it works because ML wants to work.” It’s surprising how far you can get with something flawed. It’s also why such huge breakthroughs are possible by noting flaws others haven’t.

      • embedding-shape 3 hours ago
        > It’s also why such huge breakthroughs are possible by noting flaws others haven’t.

        I make these sorts of breakthroughs at home all the time! My wife will say the computer is doing something strange, and instead of just randomly clicking around, I read the error messages slowly and out loud, then follow what they say. Anyone can do this, yet it seems like a magical ability every time you employ it to help people.

      • jmalicki 3 hours ago
        Has it been reasonably possible to overfit to the errors in ImageNet, or are they effectively random noise?
    • motoboi 4 hours ago
      It's saying that 16% of the problems have, well, problems.
      • vintagedave 4 hours ago
        You're right - I did not apply the math. (I won't edit, in order to let the parent comment still make sense, and thank you for the correction.)

        So not one in four, but one in six problems have problems.

        That is extraordinarily high and the point still stands: is this truly saying a [large proportion] of the questions and answers were wrong, this whole time, and if so how was it ever a valid measurement?

        • motoboi 2 hours ago
          Wait until you discover how many wrongly labeled images are in ImageNet, and that it still kickstarted the deep learning revolution.
    • [deleted] 4 hours ago
      • embedding-shape 4 hours ago
        > Curiously, Opus 4.7 claims an 87.6% pass rate and Mythos claims a 93.9% pass rate... leading to the conclusion that it's actually possible to "solve" the problems that OpenAI claims are incorrect.

        Huh, that is very curious and interesting indeed. If that's true, that Anthropic claims that pass rate while OpenAI claims the test cases are flawed and broken, then clearly one of them isn't telling the whole story...

  • parentheses an hour ago
    The timing makes me wonder if this is a direct response to Deepseek V4 having performance comparable to SOTA models.
  • lmeyerov 43 minutes ago
    It's been fun benchmarking AI investigations at botsbench.com. Part of it is checking for these kinds of issues: we recently started seeing contamination in our first-generation challenge and, less obviously, agent sandbox escapes and other kinds of cheating. Fun times!
  • languid-photic 2 hours ago
    It’s very hard to encode the properties that matter most in code in tests. [1]

    [1] https://voratiq.com/blog/your-workflow-is-the-eval

  • 1a527dd5 4 hours ago
    This feels very much like "we are now moving the goal posts".
    • hashmap 8 minutes ago
      It does, and it should. Each iteration gets closer to the goalposts, exposes their flaws, and then you try to make better goalposts. The problem people seem to have with the goalposts moving is that they assume the goalpost makers either made good goalposts or thought they made good goalposts, but the actual process is "do the best we can at the moment and update when we get better information".
    • neversupervised 4 hours ago
      But this is the good kind of goalpost moving
      • iLoveOncall 4 hours ago
        Only if you didn't read the article.

        They're saying they need to move on from it because the benchmark is flawed (without bringing in proof) and that's why they can't hit 100%.

        It's not a "our models are so good that the benchmark is too easy" thing.

        • embedding-shape 4 hours ago
          I feel like they're quite open about why they think the benchmark doesn't work anymore:

          > We also found evidence that models that have seen the problems during training are more likely to succeed, because they have additional information needed to pass the underspecified tests.

          > This means that improvements on SWE-bench Verified no longer reflect meaningful improvements in models’ real-world software development abilities. Instead, they increasingly reflect how much the model was exposed to the benchmark at training time.

        • f33d5173 4 hours ago
          > without bringing in proof

          Did we read the same article?

        • MattRix 3 hours ago
          How can you say “without bringing in proof” when there is literally proof in the article?
    • MattRix 3 hours ago
      Only if you didn’t read the article…
  • cowartc 2 hours ago
    The headline leads with contamination, but buried in it is that 59% of audited failures had test-design defects. That's a measurement system that was never validated against ground truth before being adopted industry-wide as a score that mattered. People reported on it for two years, but the gauge was broken the entire time.
  • swyx 35 minutes ago
    more context in a short writeup; we also interviewed the team behind this when it was announced: https://www.latent.space/p/swe-bench-dead
  • ripvanwinkle 4 hours ago
    >>In our analysis we found that all frontier models we tested were able to reproduce the original, human-written bug fix used as the ground-truth reference, known as the gold patch, or verbatim problem statement specifics for certain tasks, indicating that all of them have seen at least some of the problems and solutions during training

    this statement alone seems to invalidate the SWE-bench tests

  • gertlabs 3 hours ago
    A better benchmark needs to be objectively scored, have multi-disciplinary breadth, and be scalable (no single correct answer).

    That's what we designed at https://gertlabs.com. We put a lot of thought into it, and kept it mostly (not fully) related to problem solving through coding.

    • orangebread 3 hours ago
      Wow. This benchmark definitely feels more accurate than the other rankings I've seen. My experience with GPT 5.4/5.5 is that they are technically flawless, and if there are any technical issues it's because the input didn't provide enough clarity. That's not to say it doesn't autonomously react to issues during bug fixes or implementations, but it tends to nail its tasks without leaving gaps behind.

      Opus, on the other hand, is overrated in terms of its technical ability. It is certainly a better designer/developer for beautiful user experiences, but I'll always lean on GPT 5.5 to check its work.

      The biggest surprise in the benchmark is Xiao-Mi. I haven't tried it yet, but I will be after looking at this.

      Grats on your team for putting together something meaningful to make sense of the ongoing AI speedrun! Great work!

      • gertlabs 3 hours ago
        Much appreciated! MiMo V2.5 Pro is by far the most underrated recent release (probably because it wasn't open weights from the start).
  • wredcoll an hour ago
    This is somewhat tangential, but I want a model that can detect physical objects placed on top of a board from a picture/video, specifically Warhammer 40k models.

    I want a model that can detect the actual units/models that are placed on top of the terrain/board so I can track how they move during the game, but when I tried Gemini and ChatGPT they were absolutely rubbish.

    • z33k an hour ago
      Amiibo and Skylanders detect the pieces with NFC. Wiring up the whole board/terrain with NFC readers would probably be difficult, though.
  • djoldman 4 hours ago
    > We have incorporated these findings into our recent evaluation efforts. In the last months we’ve chosen to report results from the public split of SWE-Bench Pro. We recommend other model developers do the same. SWE-bench Pro is not perfect, but empirically seems to suffer less from contamination issues.

    https://arxiv.org/pdf/2509.16941

  • Jimmc414 4 hours ago
    Goodhart's Law in reverse: what can't be gamed gets rejected.
  • w4yai 4 hours ago
    I don't understand these websites which force translation to my native language.

    I mean, it's fine as it's useful for many people, but where is the button for disabling it? Or why is it enabled by default?

    "codage de pointe" sounds so weird and cringe in French.

    • Toutouxc 4 hours ago
      Same for apps and games. I understand English just fine, no need to switch to your shitty Google-translate localization just because my iPhone or PlayStation is set to my native language.
    • LukaD 4 hours ago
      Does your browser request French via an Accept-Language header perhaps? What really infuriates me is when sites don’t respect that header and give you a translation based on IP location.
      • embedding-shape 4 hours ago
        Regardless of whether it does or not, users should be able to manually override what language the website is in, and at least be able to read the native one, regardless of the original language, what headers you send, and where geo databases think your IP is from.
      • w4yai 4 hours ago
        Correct answer! What bad UX.
  • gpm 4 hours ago
    Curiously, Opus 4.7 claims an 87.6% pass rate and Mythos claims a 93.9% pass rate... leading to the conclusion that it's actually possible to "solve" the problems that OpenAI claims are incorrect.
    • cjsaltlake 24 minutes ago
      If you read the Mythos report, in which they discuss and account for contamination substantially, it still suggests that performance on SWE-bench Verified is meaningful. Benchmarks, including SWE-bench, can absolutely be gamed, but if you're not explicitly benchmaxxing, improving on SWE-bench still measures model improvements, at least up to the level of Mythos.
    • jmalicki 3 hours ago
      Part of the issue they mention is contamination - the tests are in the training data.

      The other issue they mention is being overly constrained vs. what is asked for - such as requiring specific class or function names to pass that were not part of what was specified.

      It might be possible that, even to the extent they are not contaminated, Claude is better at predicting what sort of function names would be used in the repository (this fits my experience using it on a number of projects with very different styles; I've found it to be good at "when in Rome"). That is a laudable trait, but it's also not what SWE-bench claims to be measuring.

    • 2ndorderthought 4 hours ago
      Or that Opus and Mythos are training on the data somehow, such that their solutions are incorrectly right. Or that OpenAI is lying/wrong. Or that all of these companies are cheating so much it doesn't really matter and never did.
    • MattRix 3 hours ago
      The problem isn't that the tasks are impossible to solve, it's that they're underspecified and/or impossible to solve consistently (e.g. because a test expects the solution function to have a specific name that wasn't specified in the task itself).

      So maybe Anthropic runs Mythos through the benchmark 10000 times and takes the highest score, who knows?

      • gpm 3 hours ago
        We actually know that a "100% pass rate" is trivially possible: https://rdi.berkeley.edu/blog/trustworthy-benchmarks-cont/

        Anthropic p-hacking the benchmark strikes me as cheating, and somewhat unlikely. Mythos figuring out how to cheat at the benchmark strikes me as much more likely.

        But if that hypothesis is the explanation, the interesting part is that Opus 4.7 (but not 4.6) seems to be doing the same.

        • gruez 3 hours ago
          >Mythos figuring out how to cheat at the benchmark strikes me as much more likely.

          Define "cheat". If it's just hacking the test harness to return "PASSED", surely this would be easily detected with some human auditing? It sounds far more likely their solution are designed to pass the incorrect tests. That might be considered bad in a SWE context, but it's not exactly cheating either. It might even be considered a good thing, eg. in the context of backwards compatibility.

          [1] https://learn.microsoft.com/en-us/troubleshoot/microsoft-365...

  • neuroelectron 2 hours ago
    It's really naïve to think any of the big AI companies won't cheat.
  • adityamwagh 4 hours ago
    > We also found evidence that models that have seen the problems during training are more likely to succeed, because they have additional information needed to pass the underspecified tests.

    No shit, Sherlock!

  • DeathArrow 3 hours ago
    So we need to generate benchmarks after the models finish training. Or we need to keep the solutions to the benchmark problems as closed source.
  • varispeed 4 hours ago
    Another issue with these benchmarks is that they measure a model you are unlikely to be routed to. My experience with Anthropic is that despite using Opus 4.6 and 4.7, most of the time the performance matches a low-B-parameter Qwen. I think there should be a way to verify what model is actually processing your prompts, and it should be independently verified. At the moment it is so bad that you have to ask the model a verification question in the form of a non-trivial problem. If it solves it, there is a chance you actually got Opus and not an impostor, and you can continue the session instead of restarting it and hoping you get routed correctly. But that does not help if the model is replaced with a cheaper one mid-session. I've lost so much work because of these shenanigans.
    • gruez 3 hours ago
      > My experience with Anthropic is that despite using Opus 4.6 and 4.7, most of the time the performance matches a low-B-parameter Qwen.

      Is this just the next level of the "they're serving quantized models!" theory?

    • alansaber 3 hours ago
      I'm sure some inference providers don't, but most intentionally obfuscate this data. They have the full trace logs; my impression is that they don't share them because it's their competitive advantage, and it would be easier for a competitor to distil their model if they did.
  • DeathArrow 3 hours ago
    So Opus 4.7 and Mythos are solving problems that are impossible to solve?
  • retinaros 3 hours ago
    it never did
  • neversupervised 4 hours ago
    Terminal Bench is the future
    • embedding-shape 4 hours ago
      First, you might want to say why you think so; otherwise this is just borderline spam. Secondly, when you praise things (without motivation or reasoning, even) and you've contributed to that specific thing, please say so up front instead of just praising the thing; again, it makes it look like spam otherwise.