It spent the first few minutes analyzing the image and cross-checking various slices of the image to make sure it understood the problem. Then it spent the next 6-7 minutes trying to work through various angles to the problem analytically. It decided this was likely a mate-in-two (part of the training data?), but went down the path that the key to solving the problem would be to convert the position to something more easily solvable first. At that point it started trying to pip install all sorts of chess-related packages, and when it couldn’t get that to work it started writing a simple chess solver in Python by hand (which didn’t work either). At one point it thought the script had found a mate-in-six that turned out to be due to a script bug, but I found it impressive that it didn’t just trust the script’s output - instead it analyzed the proposed solution and determined the nature of the bug in the script that caused it. Then it gave up and tried analyzing a bit more for five more minutes, at which point the thinking got cut off and displayed an internal error.
15 minutes total, didn't solve the problem, but fascinating! There were several points where, if the model were more “intelligent”, I could absolutely see it reasoning its way through by following the same steps.
What's going on? Did it just get lucky? Did it memorize the answer but misplace the pieces in its recall? Did it actually compute anything?
https://claude.ai/share/d640bc4c-8dd8-4eaa-b10b-cb3f83a6b94b
This is the board as it sees it (incorrect):
https://lichess.org/editor/kb6/pp6/2P5/8/8/3K4/8/R7_w_-_-_0_...
https://chatgpt.com/share/680f4a02-4cc4-8002-8301-59214fca78...
It worked through some stuff, then decided to try listing all possible moves, since there can't be that many. It tried importing stuff that didn't work, then wrote code to generate the permutations.
The fact that GPT-4.5 gets 85% of these correctly solved is unexpected and somewhat scary (if the model was not trained on them).
That's not to say that "are you remembering or reasoning?" means the same thing when applied to humans as when applied to LLMs.
Probably should listen to psychologists and neuroscientists about this, not philosophers tbh
That means it can literally never win a chess match, given that an intentional illegal move is an immediate loss.
It can't beat a human who can't play chess. It literally can't even lose properly. It will disqualify itself every time.
--
> It shows clearly where current models shine (problem-solving)
Yeh - that's not what's happening.
I say that as someone who pays for and uses an LLM pretty much every day.
--
Also - I didn't fact check any of the above about playing chess. I choose to believe.
I expect this would dramatically improve the chess-playing abilities of the competent tool-using models, such as o3.
That's 20 moves. The number grows a bit in the early middlegame, but then drops again in the endgame. There do exist rather artificial positions with more than 200 legal moves, but the average number of legal moves in a position is around 40.
I mentally counted the starting moves as 8 pawns x 2 = 16 pawn moves and 2 knights x 2 = 4 knight moves, but then I doubled that for both sides to get 40 (which, with hindsight, was obviously wrong), and then assumed that once the pawns had moved a bit there would be more options from the non-pawn pieces.
With an upper bound of ~200 in edge cases, listing all possible moves wouldn't take up much room in the context at all. I wonder if it would give better results, too.
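For scale, a rough sketch with the python-chess library (not something anyone in the thread actually ran; the FEN is the misread position from the lichess link above, with the truncated move counters filled in as an assumption):

    import chess

    # Standard starting position: exactly 20 legal moves.
    start = chess.Board()
    print(len(list(start.legal_moves)))

    # The board as the model misread it (lichess link above; counters assumed).
    misread = chess.Board("kb6/pp6/2P5/8/8/3K4/8/R7 w - - 0 1")
    moves = [misread.san(m) for m in list(misread.legal_moves)]
    print(len(moves), " ".join(moves))

Even with every move spelled out in SAN, that's a single short line of text, nowhere near the ~200-move worst case.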
The first few could be resolved by asking it to check its moves. After a few more, I was having to explain that knights can jump and therefore can’t be blocked. It was also trying to move pieces that weren’t there, onto squares already occupied by its own pieces, and asking it to review was not getting anywhere. 10-15 moves is very optimistic, unless it’s counting each move by either side, i.e., White moves 5-8 times and Black moves 5-8 times. Even that seems optimistic, but the lower end could be right.
https://chatgpt.com/share/680f57b6-8554-800b-a042-f640224b91...
It didn't get much further with suggestions to review. Also, the small ASCII board it generated was incorrect much earlier, but it sometimes plays without that, so I let that go.
But I wasn't thinking in text, I was thinking graphically. I was visualizing the board. It's not beyond the realm of possibility that you can tokenize graphics. When is that coming?
I cannot overstate how impressive this is to me, having been involved in AI research projects and robotics in years gone by.
This is a general-purpose model that, given an image and a human-written request, analyses the image step by step, iterates through various options, tries to write code to solve the problem, and then searches the internet for help. It reads multiple results and finds an answer, checks to validate it, and then comes back to the user.
I had a robot that took ages to learn to play tic-tac-toe by example, and if the robot was moved from its original position there was a solid chance it thought the entire world had changed and would freak out because it thought it might punch through the table.
This is also a chess puzzle marked as very hard that a person who is good at chess should give themselves fifteen minutes to solve. The author of the chess.com blog containing this puzzle only solved about half of them!
This is not an image analysis bot, it's not a chess bot, it's a general system I can throw bad English at.
I am human and I solved this before opening the blog post, because I've seen this problem 100 times before with this exact description. I don't understand why an LLM wouldn't have done the same, because pattern matching off things you saw on the internet is IIUC the main way LLMs work.
(I am good at chess, but not world class. This is not a difficult mate in 2 problem: if I hadn't seen it, it would take a minute or so to solve, some composed 2-movers might take me 5 minutes).
The obvious moves don't work; you can see white's pawn moving forward is mate, and you can see black is essentially trapped and has very limited moves, so immediately I thought the first move is a waiting move, and there are only two options there. Block the black pawn from moving, and if the bishop moves, rook takes is mate. So the rook has to block, and you can see the bishop either moves or captures, and the pawn moving forward is mate.
https://www.chess.com/blog/ThePawnSlayer/checkmate-in-two-pu...
Although perhaps this is missing the point - the process and chain here in response to an image and a sentence is extremely impressive. You can argue it's not useful, or not useful for specific use cases but it's impressive.
If you just paste the image into a search engine (without needing to include the text prompt) the first result contains the solution. We live in a world where Sam Altman claims that usage of words like "please" and "thank you" in prompts have cost OpenAI "tens of millions of dollars"[0]. In this case, OpenAI's "most powerful reasoning model"[1] spends 7m 51s churning through expensive output tokens spinning its wheels before ultimately giving up and searching the internet. This strikes me as incredibly wasteful. It feels like the LLM equivalent of "punch[ing] through the table". The most impressive thing to me here is that OpenAI is getting people to pay for all this nonsense.
[0] https://www.usatoday.com/story/tech/2025/04/22/please-thank-...
Is it, though? I play at around 1000 Elo – I have a long-standing interest in chess, but my brain invariably turns on a fog of war that makes me not notice threats to my queen or something – and I solved it in something like one minute. It has very few moving parts, so the solution, while beautifully unobvious, can be easily brute-forced by a human.
I haven't played chess in decades and was never any good at it. I'm basically now at the level that I know most of the basic rules of the game. And it took me maybe 5 minutes.
Clever Hans at web scale, so to speak.
So if you're impressed by a model that spent 10 minutes and single-digit dollars to not solve a problem that has been solved before, then I guess their model is working exactly as expected.
"Well, it's not a chess engine so its impressive it-" No. Stop. At best what we have here is an extremely computationally expensive way to just google a problem. We've been googling things since I was literally a child. We've had voice search with google for, idk, a decade+. A computer that can't even solve its own chess problems is an expensive regression.
From the article:
"3. Attempt to Use Python
When pure reasoning was not enough, o3 tried programming its way out of the situation.
“I should probably check using something like a chess engine to confirm.” (tries to import chess module, but fails: “ModuleNotFoundError”).
It wanted to run a simulation, but of course, it had no real chess engine installed."
This strategy failed, but if OpenAI were to add "pip install python-chess" to the environment, it very well might have worked. In any case, the machine did exactly the thing you claim it should have done.
Possibly scrolling down to read the full article makes you a rube, though.
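For what it's worth, once python-chess is importable, the brute-force check is only a few dozen lines. A sketch of the kind of thing the model was reaching for (not o3's actual code; the FEN is again the misread position from the lichess link above, counters assumed, so it may simply report that no forced mate exists there):

    import chess

    def has_immediate_mate(board):
        # True if the side to move can deliver checkmate with its next move.
        for move in list(board.legal_moves):
            board.push(move)
            mate = board.is_checkmate()
            board.pop()
            if mate:
                return True
        return False

    def refutes(board, reply):
        # True if Black's reply leaves White without an immediate mate.
        board.push(reply)
        mated = has_immediate_mate(board)
        board.pop()
        return not mated

    def find_mate_in_two(board):
        # Return a White key move forcing mate in at most two moves, else None.
        for key in list(board.legal_moves):
            board.push(key)
            if board.is_checkmate():                 # mate in one also qualifies
                board.pop()
                return key
            replies = list(board.legal_moves)
            # No replies here means stalemate, which refutes the key move.
            forced = bool(replies) and not any(refutes(board, r) for r in replies)
            board.pop()
            if forced:
                return key
        return None

    misread = chess.Board("kb6/pp6/2P5/8/8/3K4/8/R7 w - - 0 1")
    key = find_mate_in_two(misread)
    print(misread.san(key) if key else "no forced mate in two from this position")

Whether that counts as the model "solving" anything is another question, but it is the check it was clearly trying to set up.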
Suppose we removed its ability to google and it conceded to the tedium of writing a chess engine to simulate the steps. Is that “better” for you?
This is a bad thing because it means they gave up on solving actual problems and entered the snake oil business.
At no point during my process would I be counting pixels in the image. It feels very clearly like a machine that mimics human behavior without understanding where that behavior comes from.
But you are right. It does not actually understand anything. It is just a next-token predictor that happens to have access to Python and Bing.
> Chess Puzzle Checkmate in 2 White
Does it mean we are white, or does it mean we're trying to checkmate white?
Claude reigns supreme.
- Check the obvious moves; they're wrong.
- Ask what I need to have to win the game even if there's just the black king left. The answer is that I need all 3 of my pieces to win eventually, even if there's just the black king on the board.
- So any move that makes me lose my pawn or rook results in failure.
- So the only thing I can do with the rook is move it vertically. Any horizontal move allows black to take my pawn. The king and pawn don't have many options, and they all result in pawn loss or basically skip a turn while changing the situation a little bit for the worse, which makes a mate in one move unlikely.
- Taking a pawn with the rook results in the loss of the rook, which is just as bad.
- Let's look at the spot next to the pawn. I'd still protect my pawn, but my rook is in danger. But if black takes the rook, I can just move my pawn forward to get a mate. If they don't, I can move the rook forward and get a mate. Solved.
So I skipped the trying-to-run-a-program and googling part, not because it didn't come to my mind, but because I wanted a different kind of challenge than the challenge of extracting information from the internet or of running an unfamiliar piece of software.
I've never met a human player that suddenly says 'OK, I need Python to figure out my next move'.
I'm not a good player; usually I just play ten-minute matches against the weakest Stockfish settings so as not to be annoying to a human, and I figured this one out in a couple of minutes because there are very few options. Taking with the rook doesn't work, taking with the pawn also doesn't, so it has to be a non-taking move, and the king can't do anything useful, so it has to be the rook, and typically in these puzzles it's a sacrifice that unlocks the solution. And it was.