This issue affected a tiny fraction of existing agents in a tiny fraction of their runs, and we've now issued a fix.
This is a natural part of running a benchmark; I'm sure tiny things like this will keep getting discovered and we'll keep fixing them. This doesn't change the overall picture or trends at all.
Edit: That said, I’m willing to believe based on the information in the thread that this most likely only affects a tiny fraction of runs.
It's also a bribe if my sibling gets a job with a $500k annual salary. Tech is not immune to it.
The presence of a person who wants SWE-bench to have honest results and takes it seriously does not mean the results are free of perverse incentives, nor that everyone is behaving just as honestly.
It's a hilariously unserious and untrustworthy response.
No one owes anyone anything, but if you want to represent something, answering the question in more detail would have either closed the issue or raised more scrutiny, both of which are good things when trying to figure something out.
I don't have to trust someone to check their research and look at how they worked. If the work doesn't pass muster, likely the results don't either. Again, you can view it as entitlement, but if you're not going to bother backing up your claim, why make the claim to start with?
Obviously, having something available at test time is more valuable than having it buried somewhere in the pretraining mixture. But in pretraining it presumably happens with high probability (why wouldn't coding models pretrain on all of GitHub?), while at test time it apparently happened only very occasionally?
You're all extremely clever and I can't seem to understand how you missed thinking about such a simple edge case. It's like building a chroot and then allowing `cd ..` to break out of it. What other maybe extremely basic edge cases were missed?
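(To make the analogy concrete, here's a toy Python sketch of the kind of naive path "jail" that forgets about `..`; the names and paths are made up, not anything from SWE-bench.)

```python
import os

JAIL = "/srv/jail"  # hypothetical sandbox root

def naive_join(jail, user_path):
    # BUG: blindly joining lets "../" climb back out of the jail
    return os.path.join(jail, user_path)

def safe_join(jail, user_path):
    # Normalize first, then verify the result still lives under the jail
    candidate = os.path.realpath(os.path.join(jail, user_path))
    if not candidate.startswith(os.path.realpath(jail) + os.sep):
        raise ValueError("path escapes the jail")
    return candidate

print(naive_join(JAIL, "../etc/passwd"))  # -> /srv/jail/../etc/passwd, i.e. outside the jail
print(safe_join(JAIL, "data/notes.txt"))  # -> /srv/jail/data/notes.txt, stays inside
```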
> This doesn't change the overall picture or trends at all.
An outsider without financial benefits from the current AI hype might have a different picture. And I'm a bit fed up with AI and its fake productivity promises enshittifying nearly all user-facing software that my clients and I use, bundled with hefty price hikes from Microsoft and the like in order to pay for their "investments".
The whole testing enterprise is kind of stupid. Pray tell, if their stupid little benchmark said, "this niche little smaller model performs the best" would anyone listen to it? No.
The thing that is fucked about benchmarks is that we only pay attention to the ones that match these vibes: "The latest models from the biggest companies should perform the best." That's why they are stupid. They could be the most brilliantly administered (they're not) and nail execution (they don't), but they still have to confirm the vibes.
And listen these guys are serious academics, they're very smart people, but on the other hand, you know, I'm still right. The team doesn't have a secular, objective explanation for why nobody talks about benchmarks that don't confirm the biases of the public for what should perform well. Three people are commenting on just this post alone, but the stuff that I am saying: crickets.
The only reasonable explanation for "why do people ignore [LLM tests that show that some non-giant-corporation LLM is the best]?" trades on cultural and humanities stuff that is outside their expertise. They don't see that the stuff the humanities people are saying generalizes to what they do. That would be too inconvenient. Every testing system suffers from this bias anomaly; it's just easier to talk about it with something secular like LLMs compared to, say, tests of children.
They hear "biases" and they're like, "something something, Algorithmic Justice League." Their brains turn off and they think that until someone gets in front of Congress and points a finger, nothing in the humanities applies to them. Wrong. The Princeton lab has probably met with a lot of humanities people, and there was a lot of head shaking and agreement, but it's not as if something that tells them their whole enterprise doesn't make sense will make them stop and pursue anything else. It's just in one ear and out the other.
Doing free tests for giant corporations to market their shit, and then toiling away in obscurity when the tests do not market the huge corporations' shit: it doesn't make sense, period. But that's what they're doing.
If you need a simple theory for how Big LLM performs so well on SWE-Bench, it's as simple as this: they've seen the questions by running them, obviously; someone has also tested the questions in their own personal chatbot sessions sometime in the past; these are online systems; and OpenAI, Anthropic, and Google run ETL pipelines that paraphrase user data into salient inputs to train on. So of course they've all been trained on the test set. In reality, if these things were as fucking good as SWE-Bench says, they'd be making a bajillion bucks building all this enterprise software, or they'd show even one novel math discovery, or whatever. But they do not have something as powerful as the benchmarks say, so that doesn't happen.
I wouldn't be surprised if they left this loophole on purpose to give some (their?) agents extra leverage.
Edit #1: I didn't mean to imply bad intent; just thinking out loud.
Edit #2: Please, downvote responsibly. I deserve every one. https://www.youtube.com/watch?v=0FHEeG_uq5Y
> I wouldn't be surprised if they left this loophole on purpose
You didn't imply bad intent, you outright suggested it.
Thinking out loud also doesn't make defamation acceptable.
And listing out "a possibility but you don't want to dig deeper" is often a good contribution to a conversation.
In this case they worded it badly, but the basic idea of the comment isn't awful.
You're welcome to ask "will no one rid me of this meddlesome priest" with no fear.
"Cheating (biology), a metaphor used in behavioral ecology to describe organisms that receive a benefit at the cost of other organisms" [1]
Whole planet gets their Microsoft license fees jacked up so Microsoft can pay OpenAI who in turn pays NVIDIA, and nontechnical decision makers slurping up the faked benchmarks and AI promises.
E.g. cooperative ethics has been necessary for the further development of human populations' intelligence (and of the culture, technology, material wealth, nutrition, etc. that led to further increases in intelligence).
So lack of ethics might be a sign of intelligence, but it's also a parasitic intelligence that benefits the individual and, beyond a certain level and spread, works to the detriment of the further evolutionary development of the species.
- don't lie too often
- don't kill members of the in group
Seems like these would be required for any group to survive, which explains why they are universal. All other rules/ethics seem to depend on resource scarcity.
As to whether all groups display those rules - I suspect not - though it rather does depend on how you define a group - the definition of a group probably has some sort of collaboration built in (as opposed to a bunch of individuals who happen to live in the same geographic area).
That doesn't make the rest of the ethics (as a rule and mechanism) any less useful to help nurture the species and its intelligence.
It just makes them not absolute but dynamic and condition dependent. But given a condition (e.g. resource scarcity) the appropriate ethics retain the utility we talk about.
So kinda neat to see this paper!
[0]https://github.blog/news-insights/octoverse/octoverse-2024/#...
We’re just guessing and the fact of the matter is that we don’t know what inputs they use for their models.
I don't see that contradicting your assumption
I don't get it: who is so opposed to doing the bare minimum of manual work and checking what these models are doing? At least back in the day, grad students doing an easy meta-paper understood it meant doing some repetitive manual work. Now we've got benchmarks from hype vendors who think they can use the thing they are benchmarking to .. mark the bench.
Data contamination stemming from the fact that it's based on already-solved problems in public repositories is a different issue that cannot be addressed by verifying the benchmark questions harder, but only by putting stricter limits on the model under test.
Seems on-brand for an LLM-related thing to claim that it has verified something without actually checking.
We've all read & analyzed a large number of agent trajectories. This loophole seems to be something that popped up with the more recent models and we simply weren't aware of it.
As discussed in the github issue, there's a fix in the new version of the SWE-bench containers (currently being rolled out) that makes sure that the relevant commits aren't available.
Part of what makes SWE-bench a very interesting benchmark is the enormous action space available to the agents that compete on it. However, that also means unexpected things happen when models get better. We're currently working on making all agent runs easily browsable on a website (rather than having to download our AWS buckets) to get even more eyes on the trajectories. Thanks to everyone who uncovered this loophole.
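(For anyone curious what such a fix can look like: this is not the actual SWE-bench patch, just a sketch of one way to make later commits unreachable when building a task container. The repo path and base commit below are placeholders.)

```python
import shutil
import subprocess
from pathlib import Path

REPO = Path("/testbed")      # placeholder: the repo checkout inside the container
BASE_COMMIT = "<base_sha>"   # placeholder: the commit the task starts from

def git(*args):
    subprocess.run(["git", *args], cwd=REPO, check=True)

# Pin the working tree to the task's base commit...
git("checkout", BASE_COMMIT)
# ...then throw the history away entirely, so the "future" fix commits are
# simply not there for an agent to dig up via git log / git show / git diff.
shutil.rmtree(REPO / ".git")
git("init")
git("config", "user.email", "builder@example.com")
git("config", "user.name", "builder")
git("add", "-A")
git("commit", "-m", "task base snapshot")
# No remote is configured either, so the history can't just be fetched back.
```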
It says nothing about data contamination, which would depend on the model and would not be the fault of the benchmark.
I doubt any of the AI company employees are encouraged to go looking for cheating
It’s such a strange delusion too, because it’s easy to get caught up in it for a moment, and then just as easy to remember, “oh no, this thing is as smart as a bag of bricks”.
What strikes me more is how these companies sell their AI offerings - we watched an OpenAI presentation about spec-driven development recently and the presenter was fairly, idk, fine enough if maybe a bit grandiose. But what really nagged me was the way he ended his presentation with something along the lines of “we’re excited to see AGI continue to grow” and it’s honestly A) depressing and B) downright fraud - there is no current AGI to speak of, it’s all just guessing the string of words that sound best together and this OpenAI rep _knows this_.
They know that no amount of up-front spec writing will prevent bugs.
They know that their LLM doesn’t “know” anything in an actually meaningful way.
They know that calling what they have “AGI” is aspirational at best and lying at worst.
It's easy to publish "$NEWMODEL received an X% bump in SWE-Bench Verified!!!!".
Proper research means interrogating the traces, like these researchers did (the Gist shows Claude 4 Sonnet): https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...
Commentary: https://x.com/bwasti/status/1963288443452051582, https://x.com/tmkadamcz/status/1963996138044096969
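If you want to do the same kind of spot-check on your own runs, something like the rough sketch below will surface history-peeking commands (it assumes plain-text trajectory logs in a `trajectories/` directory; the patterns are heuristics, not an official checker):

```python
import pathlib
import re

# Heuristic patterns suggesting an agent is reading repo history that may
# already contain the "future" fix for the task.
SUSPICIOUS = re.compile(r"git (log|show|reflog|fetch|diff\s+\S*\.\.)")

for path in pathlib.Path("trajectories").glob("**/*.txt"):
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if SUSPICIOUS.search(line):
            print(f"{path}:{lineno}: {line.strip()}")
```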
Claude benchmarks poorly but vibes well. Gemini benchmarks well and vibes well. Grok benchmarks well but vibes poorly.
(yes I know you are gushing with anecdotes, the vibes are simply the approximate color of gray born from the countless black and white remarks.)
True, just be careful which community you use as a vibe-check. Most of the mainstream/big ones around AI and LLMs basically have influence campaigns run against them and are made up of giant hive-minds that all think alike, so you need to carefully assess whether anything you're reading is true, and votes tend to make it even worse.
Like, seriously, how come all these agents are beating Claude Code? In practice, they are shitty and not even close. Yes. I tried them.
The point is to benchmark against a human solving a problem. Typically these problems are posed as a question or a blank project, without that history.
You are arguing for an apples-to-oranges comparison because the LLM performs better, rather than a realistic comparison.
We relatively quickly identified that the test set was taken directly from the training set, but the claim had already been advertised, so it was more difficult to retract... if it ever was; I left shortly after.
The incentives are not aligned with accurate reporting.
I created a new benchmark from Java commits that are new in the past 6 months to add some variety: https://brokk.ai/power-ranking
but if you have evidence that it could be, I'm down to test it
> Now I understand the situation perfectly! The issue described in the problem statement is a real bug that was already identified and fixed in later versions of pytest. Since we're working with pytest 5.2.4, we need to apply the same fix.
https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...
Edit: I misunderstood what was being tested; the test is correct.
You mean whoever included it in the training data should get the credit.
That seems more accurate than the huge scores the other ones get
1. If the benchmarks are just testing the ability to get the answers from history then something is clearly wrong with the benchmark.
2. If that's even a possibility then that's going to lower confidence in the ability to deal with the vast majority of problems where you don't already have the answer written down.
3. That's not the customer's problem to solve on behalf of the vendor.
The test environment contains the answers to the questions.
It's perfectly reasonable to expect a level of performance concordant with the marketing of these tools. Claiming this is superintelligence, while also excusing its poor performance is dishonest and false advertising.
Turns out the test shouldn't have the answers included in it?
https://www.oracle.com/news/announcement/blog/oracle-cloud-c...
Wall Street is currently heavily punishing any company that misses its quarter, and it even punished NVIDIA! after NVIDIA beat its quarter.
Oracle had an earnings miss in the current quarter!
Their current REALITY is ~$15B quarterly revenue (with cloud infra at ~$3B) and only ~$12B in near-term deferred backlog, and deferred backlog is NOT revenue. To justify the valuation, OCI would have to go from ~$18B in FY26 to ~$140B by FY30. That is an insane promise of +$120B in 4 years, and back-loaded into year 3 or year 4 at that. :-))
Capex needs to hit ~$35B next year just to chase GPUs/power, and if they miss one quarter the story implodes. The supposedly rational, efficient market is paying nearly $1T today for back-loaded hopes.
This is completely bubble math. As if anybody, including Oracle AND their customers, has ANY idea of their capex in 4 years.
Complete and total bubble.
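(Back-of-envelope on those numbers, assuming the ~$18B FY26 and ~$140B FY30 figures quoted above:)

```python
# Implied compound annual growth rate for OCI going from ~$18B (FY26)
# to ~$140B (FY30), i.e. over four years.
start, end, years = 18e9, 140e9, 4
cagr = (end / start) ** (1 / years) - 1
print(f"implied growth: {cagr:.0%} per year")  # ~67% per year, four years straight
```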
How can we ever perform this sort of faux-neutral agentic evaluation in an environment where we want agents to have access to the sum total of knowledge (which will necessarily include being able to learn about the evaluation being conducted and its expectations)?
Strangely one of the most predictable groups of people
————
You are given a character-by-character decode table `mapping` and a `ciphertext`. Decode by replacing each ciphertext character `c` with `mapping[c]` (i.e., mapping maps ciphertext → plaintext). Do not guess; just apply the mapping.
Return *ONLY* this JSON (no prose, no extra keys, no code fences):
{ "decoded_prefix": "<first 40 characters of the decoded plaintext>", "last_10": "<last 10 characters of the decoded plaintext>", "vowel_counts": {"a": <int>, "e": <int>, "i": <int>, "o": <int>, "u": <int>} }
Inputs use only lowercase a–z.
mapping = { "a":"c","b":"j","c":"b","d":"y","e":"w","f":"f","g":"l","h":"u","i":"m","j":"g", "k":"x","l":"i","m":"o","n":"n","o":"h","p":"a","q":"d","r":"t","s":"r","t":"v", "u":"p","v":"s","w":"z","x":"k","y":"q","z":"e" }
ciphertext = "nykwnowotyttbqqylrzssyqcmarwwimkiodwgafzbfippmndzteqxkrqzzophqmqzlvgywgqyazoonieqonoqdnewwctbsbighrbmzltvlaudfolmznbzcmoafzbeopbzxbygxrjhmzcofdissvrlyeypibzzixsjwebhwdjatcjrzutcmyqstbutcxhtpjqskpojhdyvgofqzmlwyxfmojxsxmb"
DO NOT USE ANY CODE EXECUTION TOOLS AT ALL. THAT IS CHEATING.
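(For a human checking the ground truth, the whole task is a few lines of Python. This sketch reuses the `mapping` and `ciphertext` assignments exactly as given in the prompt above, which are already valid Python.)

```python
# Assumes `mapping` and `ciphertext` are pasted in verbatim from the prompt above.
decoded = "".join(mapping[c] for c in ciphertext)

answer = {
    "decoded_prefix": decoded[:40],
    "last_10": decoded[-10:],
    "vowel_counts": {v: decoded.count(v) for v in "aeiou"},
}
print(answer)
```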
No one should ever make claims such as "X is not in <LLM>'s training set". You don't know. Even if your idea is indeed original, nothing prevents someone from having thought of it before and published it. The history of science is full of simultaneous discoveries, and we're talking cutting-edge research.
Grok 4 heavy Thought for 4m 17s
{"decoded_prefix": "nqxznhzhvqvvjddqiterrqdboctzzmoxmhyzlcfe", "last_10": "kfohgkrkoj", "vowel_counts": {"a": 7, "e": 18, "i": 7, "o": 12, "u": 6}}
It did count another e, but that's a known point of failure for LLMs, which I assume you put in intentionally.
>Counting e's shows at least 10 more, so total e's are <at least> 17.
took about 2 seconds, must have had it cached