Then I clicked on one task to see what it looks like “on the ground”: https://app.uniclaw.ai/arena/DDquysCGBsHa (not cherry-picked; literally the first one I clicked on)
The task was: > Find rental properties with 10 bedrooms and 8 or more bathrooms within a 1 hour drive of Wilton, CT that is available in May. Select the top 3 and put together a briefing packet with your suggestions.
Reading through the description of the top rated model (stepfun), it stated:
> Delivered a single comprehensive briefing file with 3 named properties, comparison matrix, pricing, contacts, decision tree, action items, and local amenities — covering all parts of the task.
Oh cool! Sounds great and would be commensurate with the 7/10 score given for the task! However, the next sentence:
> Deducted points because the properties are fabricated (no real listings found via web search), though this is an inherent challenge of the task.
So…… in other words, it made a bunch of shit up (at least plausible shit!) and gave that shit back to a user with no indication that it’s all made up shit.
Ok, closed that tab.
I would also be interested to see "KAT-Coder-Pro-V2" on the board, since they brag about their benchmarks for these bots as well.
If you haven’t heard of it yet there’s some good discussion here: https://news.ycombinator.com/item?id=47069179
- https://huggingface.co/stepfun-ai/Step-3.5-Flash-Base
- https://huggingface.co/stepfun-ai/Step-3.5-Flash-Base-Midtra...
I'm not aware of other AI labs that have released base checkpoints for models in this size class. Qwen released some base models for 3.5, but the biggest one is the 35B checkpoint.
They also released the entire training pipeline:
- https://huggingface.co/datasets/stepfun-ai/Step-3.5-Flash-SF...
Pricing is essentially the same:
- MiMo V2 Flash: $0.09/M input, $0.29/M output
- Step 3.5 Flash: $0.10/M input, $0.30/M output
MiMo scores 41 vs. 38 for Step on the Artificial Analysis Intelligence Index, but 49 vs. 52 for Step on their Agentic Index.
It was free for a long time. That usually skews the statistics. It was the same with grok-code-fast1.
The two boards look nothing alike. Top 3 performance: Claude Opus 4.6, GPT-5.4, Claude Sonnet 4.6. Top 3 cost-effectiveness: StepFun 3.5 Flash, Grok 4.1 Fast, MiniMax M2.7.
The most dramatic split: Claude Opus 4.6 is #1 on performance but #14 on cost-effectiveness. StepFun 3.5 Flash is #1 cost-effectiveness, #5 performance.
Other surprises: GLM-5 Turbo, Xiaomi MiMo v2 Pro, and MiniMax M2.7 all outrank Gemini 3.1 Pro on performance.
Rankings use relative ordering only (not raw scores) fed into a grouped Plackett-Luce model with bootstrap CIs. Same principle as Chatbot Arena — absolute scores are noisy, but "A beat B" is reliable. Full methodology: https://app.uniclaw.ai/arena/leaderboard/methodology?via=hn
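For readers curious what fitting latent strengths from per-battle rankings looks like, here is a minimal Plackett-Luce sketch. The data, model indices, and hyperparameters below are all invented for illustration; this is not the arena's actual pipeline (which also groups battles and bootstraps CIs).

```python
import numpy as np

def fit_plackett_luce(rankings, n_models, steps=2000, lr=0.1):
    """Fit latent strengths theta by gradient ascent on the PL log-likelihood.

    Each ranking is a list of model indices, best first.
    P(ranking) = prod_i exp(theta[r_i]) / sum_{j>=i} exp(theta[r_j])
    """
    theta = np.zeros(n_models)
    for _ in range(steps):
        grad = np.zeros(n_models)
        for r in rankings:
            for i in range(len(r)):
                tail = r[i:]                 # models still "in the race"
                w = np.exp(theta[tail])
                grad[r[i]] += 1.0            # observed winner of this stage
                grad[tail] -= w / w.sum()    # expected wins under the model
        theta += lr * grad / len(rankings)
        theta -= theta.mean()                # PL is shift-invariant; fix the scale
    return theta

# Four toy battles: model 0 usually beats 1, which usually beats 2.
rankings = [[0, 1, 2], [0, 2, 1], [1, 0, 2], [0, 1, 2]]
theta = fit_plackett_luce(rankings, n_models=3)
order = np.argsort(-theta).tolist()  # strongest first
print(order)  # → [0, 1, 2]
```

Note that only the ordering of `theta` is meaningful here; any monotone transform of it (e.g. to an Elo-like scale) gives the same leaderboard.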
I built this as part of OpenClaw Arena — submit any task, pick 2-5 models, a judge agent evaluates in a fresh VM. Public benchmarks are free.
Essentially, I use the relative rank in each battle to fit a latent strength for each model, then apply a nonlinear function to map the latent strength to an Elo-like score purely for human readability. The mapping function is arbitrary as long as it is monotonically increasing, since that preserves the ranking. The only reliable result (invariant to the choice of function) is the relative rank of the models.
That being said, if I use score/cost as the metric, the rank depends entirely on the function I choose: I can pick a more super-linear function to push high-performance models up the score/cost board, or a more sub-linear one to push low-performance models up.
That's why I eventually switched to another (the current) approach: the judge ranks the models directly on cost-effectiveness (considering both performance and cost), and the cost-effectiveness leaderboard is computed from those rankings, so the score-mapping function does not affect that leaderboard at all.
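A toy numeric sketch of the problem described above (all numbers invented): any monotone map of latent strength leaves the quality ranking unchanged, but the score/cost ranking flips depending on which map you pick.

```python
latent = {"A": 3.0, "B": 1.0}   # A is the stronger model
cost   = {"A": 4.0, "B": 1.0}   # A is also 4x more expensive

convex  = {m: v ** 2   for m, v in latent.items()}  # super-linear score map
concave = {m: v ** 0.5 for m, v in latent.items()}  # sub-linear score map

rank = lambda s: sorted(s, key=s.get, reverse=True)

# Quality rank is identical under both maps...
print(rank(convex), rank(concave))   # → ['A', 'B'] ['A', 'B']

# ...but the score/cost rank is not.
per_cost = lambda s: rank({m: s[m] / cost[m] for m in s})
print(per_cost(convex))    # → ['A', 'B']  (convex map favors the strong model)
print(per_cost(concave))   # → ['B', 'A']  (concave map favors the cheap model)
```

This is exactly why dividing a mapped score by cost cannot give a stable leaderboard, while judging cost-effectiveness directly can.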
https://i.imgur.com/wFVSpS5.png
and quality vs cost
https://i.imgur.com/fqM4edw.png
But I just noticed that my plot is meaningless because it conflates model quality with provider uptime.
Claude Haiku has a higher average quality than Claude Opus, which does not make sense. The explanation is that network errors were credited with a quality score of 0, and there were _a lot_ of network errors.
All network errors, provider errors, and openclaw errors are actually excluded from the ranking calculation, so that is not the reason.
Real reason:
The absolute score is not consistent across tasks and cannot simply be added or averaged, whether the judge is a human or an LLM. But the relative rank is stable (model A is better than model B). That is exactly why Chatbot Arena only uses the relative rank of models within each battle in the first place, and why we follow the same approach.
A concrete example of why scores across tasks cannot be added/averaged directly: people tend to try Haiku on easier tasks and compare it with T2 models, while they try Opus on harder tasks against better models.
Another example: judges (human or LLM) tend to adjust the score based on the opponents. Sonnet might get 10/10 if all the other opponents are Haiku-level, but only 8/10 if an Opus or GPT-5.4 is in the battle.
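The first example above can be made concrete with invented numbers: if Haiku is mostly tried on easy tasks and Opus on hard ones, Haiku's average absolute score can come out higher even though Opus wins every direct comparison.

```python
# Toy scores out of 10, all invented. Haiku is run on easy tasks,
# Opus on hard tasks; they only share one task ("hard1").
scores = {
    ("easy1", "haiku"): 9, ("easy2", "haiku"): 9,
    ("hard1", "haiku"): 4, ("hard1", "opus"): 7,
    ("hard2", "opus"): 7,  ("hard3", "opus"): 7,
}

def avg(model):
    vals = [s for (task, m), s in scores.items() if m == model]
    return sum(vals) / len(vals)

print(avg("haiku"), avg("opus"))  # Haiku averages higher (7.33 vs 7.0)...
# ...even though on the one shared task Opus wins 7 > 4, which is the
# pairwise "A beat B" signal the arena actually uses.
```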
So if you want to make that plot, you should plot the Elo score (from the leaderboard) against average cost per task. But note that average cost has a similar issue: people naturally use smaller models for simpler tasks, so a smaller model's lower cost comes from two factors: lower unit cost and simpler tasks.
The methodology page contains more details if you are interested.
Gemini is very unreliable at using skills; it often just reads the skill and then decides to do nothing.
StepFun leads the cost-effectiveness leaderboard.
Rankings really depend on the tasks, so it's better to try your own.
Maybe? :)
> There are many others that are okay with it
Correct.
> and it doesn't disminish the quality of the work.
It does affect incoming people hearing about the work.
I applaud your instinct to defend someone who put in effort. It's one of the most important things we can do.
Another important thing we can do for them is be honest about our own reactions. It's not sunshine and rainbows on its face, but it is generous, mostly because A) it takes time and B) other people might see red and harangue you for it.
Yes, the judge is one of Opus 4.6, GPT-5.4, or Gemini 3.1 Pro (the submitter can choose). Self-judging (where the judge model is also one of the participants) is excluded when computing the ranking.
> There's lot of references to "just like LMArena", but LMArena is human evaluated?
Yeah, LMArena is human-evaluated, but here I found it impractical to gather enough human evaluation data, because the effort it takes to compare the results is much higher:
- for code, the judge needs to read through it to check code quality, and actually run it to see the output
- when the task produces a webpage or a document, the judge needs to check the content and layout visually
- when anything goes wrong, the judge needs to read the execution log to decide whether partial credit should be granted
If you look at the cost details of each battle (available at the bottom of the battle detail page), the judge typically costs more than any participant model.
If we evaluated with humans, I'd say each evaluation could easily take ~5-10 minutes.
Thanks for replying btw, didn't mean any disrespect, good on you for not getting aggro about feedback
This has also been my subjective experience, but it has also shown up objectively in terms of cost.