16 points by gargi_tinyfish 5 hours ago | 12 comments
  • kathyyyyyyyliu 3 hours ago
    Promising numbers, especially if Online-Mind2Web better reflects real multi-step workflows than WebVoyager. Would love to see a quick breakdown of failure modes and variance by difficulty -- 80%+ on truly stateful web tasks is a strong claim. Either way, more realistic evals are a big win for the space.
  • salmacodes 5 hours ago
    Been trying to get Operator to handle a multi-step workflow for a client (login → navigate nested menus → fill form → confirm) and it just... breaks in the middle every time.

    Seeing the hard-task numbers here makes that make a lot more sense.

    Honestly the more interesting thing to me is the benchmark critique. WebVoyager being the default eval while only agreeing with humans 62% of the time is kind of damning for the whole space. Has anyone else tried running their agent against Online-Mind2Web?

  • dontrack-rxv 18 minutes ago
    How did a random startup beat OpenAI/Claude?!?
  • codebyron 5 hours ago
    The 15-point drop from easy to hard is the number that stands out to me.

    That suggests the architecture handles state accumulation across steps without compounding errors — which is the thing that kills most agent pipelines. Every other agent here shows exponential degradation as task length increases, which is what you'd expect from a naive screenshot-action loop with no error recovery.

    Looking at the cookbook repo — are you doing any kind of structured DOM extraction before passing to the model, or is this pure vision? Curious whether the hard-task performance comes from better perception, better planning, or better recovery when an action doesn't produce the expected state change.
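    (For anyone unfamiliar, by "structured DOM extraction" I mean something roughly like the sketch below: pull the interactive elements out of the live page and give the model text alongside the screenshot. Playwright, purely illustrative; build_prompt is a made-up helper, and I have no idea what TinyFish's pipeline actually does.)

        # Sketch: structured DOM extraction alongside a screenshot (illustrative only)
        from playwright.sync_api import sync_playwright

        def extract_interactive_elements(page):
            elements = []
            for el in page.locator("a, button, input, select, textarea").all():
                if not el.is_visible():
                    continue
                elements.append({
                    "tag": el.evaluate("node => node.tagName.toLowerCase()"),
                    "text": (el.inner_text() or "")[:80],
                    "aria_label": el.get_attribute("aria-label"),
                })
            return elements

        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto("https://example.com")
            screenshot = page.screenshot()                    # vision input
            dom_summary = extract_interactive_elements(page)  # structured text input
            # prompt = build_prompt(screenshot, dom_summary)  # hypothetical helper
            browser.close()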

  • Skyzzd 5 hours ago
    I didn't expect to be able to verify every result in the spreadsheet myself. Love this! I'll review the data and let you know if a particular run's success seems to be due to luck or if the judge might have made a mistake.
  • zkitty 5 hours ago
    Look at Browser Use. They self-reported 89% on WebVoyager. On hard tasks with a real benchmark, they score 8.1%. That's not a performance drop... that's a different product than what's being advertised.
    • agenticagent 5 hours ago
      To be fair, this isn't just a Browser Use problem. Look at the drop-off for every agent as tasks get harder:

      Operator goes from 83% easy → 43% hard. That's a 40-point cliff.

      Claude Computer Use: 90% easy → 32% hard. 58-point drop.

      Browser Use: 55% easy → 8% hard. Just falls off a cliff entirely.

      TinyFish: 97.5% easy → 81.9% hard. 15-point drop.

      The gap between easy and hard is where you see if a system actually works or if it's just good at simple tasks. Every other agent loses half its ability or more when tasks get complex. We lose 15 points.

      That's the difference between "cool demo" and "I can actually ship this."

  • ivywho 5 hours ago
    Interesting that every agent basically falls off a cliff on hard tasks except this one. Operator going from 83% to 43% is wild - that means it's literally coin-flipping on anything non-trivial.

    The failure traces being public is a nice touch. Looked through a few and they're actual failures, not cherry-picked easy ones. Most companies in this space wouldn't do that.

    Curious about latency, though: what does a typical hard-task execution look like in terms of wall-clock time?

  • shubham_saboo 5 hours ago
    Agreed! WebVoyager is not a real benchmark, and it doesn't matter if someone saturates it.
  • houmercodes 5 hours ago
    Genuine question about the eval methodology — how do you handle website non-determinism?

    A lot of these sites serve different layouts, A/B tests, cookie consent modals, etc. across sessions. Did you control for that across agents, or is each agent hitting the live site independently at different times?

    Because if so, some of the variance between agents could just be "Operator happened to get the GDPR popup and didn't know how to dismiss it." Would be useful to know if all agents were evaluated on the same snapshots or same time window.
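    Even just archiving what each agent actually saw would go a long way. A minimal sketch of what I mean (Playwright; run_agent is a stand-in for whatever each agent's real entry point is, and the task URL is a placeholder):

        # Sketch: run agents back-to-back on the same task and archive what each saw,
        # so popup/A-B-test variance is at least diagnosable afterwards. Illustrative only.
        import json, time
        from pathlib import Path
        from playwright.sync_api import sync_playwright

        AGENTS = ["operator", "claude-computer-use", "browser-use", "tinyfish"]

        def snapshot(page, out_dir: Path, label: str):
            out_dir.mkdir(parents=True, exist_ok=True)
            (out_dir / f"{label}.html").write_text(page.content())
            (out_dir / f"{label}.json").write_text(
                json.dumps({"url": page.url, "ts": time.time()})
            )

        with sync_playwright() as p:
            for agent in AGENTS:
                browser = p.chromium.launch()
                page = browser.new_page()
                page.goto("https://example.com/task-start")      # placeholder task URL
                snapshot(page, Path("runs") / agent, "initial")  # what the agent started from
                # result = run_agent(agent, page)                 # hypothetical per-agent hook
                snapshot(page, Path("runs") / agent, "final")
                browser.close()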

  • toliveistobuild 5 hours ago
    Browser-Use: 8.1% on hard tasks