93 points by publicmatt 6 hours ago | 8 comments
  • ahmadyan 3 hours ago
    Claims in the article are incorrect. They conveniently ignore Meta's CWM models, which are open-source [1] and open-weight [2], sit at 65% SWE-bench Verified (with TTS) and 54% pass@1, and are the same size (32B dense). So claims like "surpassing prior open-source state-of-the-art coding models of comparable sizes and context lengths", while conveniently leaving the previous OSS SOTA out of your eval tables, are ... sketchy.

    [1] https://github.com/facebookresearch/cwm [2] https://huggingface.co/facebook/cwm

    • ethan_l_shen 3 hours ago
      Hey! These are great observations. So first, while TTS can improve performance, we wanted to evaluate the raw capability of our model. This meant generating only one rollout per evaluation instance, which follows other papers in the space like SWE-smith and BugPilot. In addition, TTS adds extra inference cost and is reliant on how rollouts are ranked, two confounding factors for deployable models where memory and inference speed are extremely important.
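
      To make the single-rollout vs. TTS distinction concrete, here is a minimal sketch (not from our paper; the rollout, ranking, and checking functions are hypothetical placeholders) of pass@1 versus best-of-n, TTS-style evaluation:

        # Hypothetical sketch: pass@1 vs. best-of-n (TTS-style) evaluation.
        # run_rollout, rank_rollouts, and check_patch are placeholders, not a real API.
        def evaluate_pass_at_1(instances, run_rollout, check_patch):
            # One rollout per instance, scored directly.
            return sum(check_patch(i, run_rollout(i)) for i in instances) / len(instances)

        def evaluate_tts(instances, run_rollout, rank_rollouts, check_patch, n=8):
            solved = 0
            for i in instances:
                rollouts = [run_rollout(i) for _ in range(n)]  # n inference passes per instance
                best = rank_rollouts(i, rollouts)              # extra ranking/verification step
                solved += check_patch(i, best)
            return solved / len(instances)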

      Following that line of reasoning, context length is another very large confounding factor. Longer context lengths improve performance - but also result in enormous increases in KV cache size and memory requirements. We decided to control for this in our paper and focus on a 32K context length for 32B-size models, a context length that already pushes the bounds of what can be "deployable" locally.
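
      For a rough sense of scale, here is a back-of-the-envelope KV cache estimate (the layer/head numbers below are illustrative for a 32B-class dense model with grouped-query attention, not taken from our actual config):

        # Rough per-sequence KV cache: 2 (K and V) * layers * kv_heads * head_dim * seq_len * bytes.
        def kv_cache_gib(seq_len, n_layers=64, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
            return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem / 1024**3

        print(kv_cache_gib(32_768))    # ~8 GiB per sequence at 32K context
        print(kv_cache_gib(131_072))   # ~32 GiB per sequence at 128K context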

      Still, we evaluate at 64K context length using YaRN and are able to outperform CWM's 54% performance (non-TTS), which it achieves using 128K context, a substantial increase over what we use. This is also pretty significant because we only ever train at 32K context, but CWM trains for a full 128K.
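
      For readers unfamiliar with YaRN: it rescales RoPE so a model trained at a shorter context can be served at a longer one. A minimal sketch of the kind of setting involved (the field names follow the rope_scaling convention documented for Qwen-family models, which SERA builds on; check the model card before relying on them):

        # YaRN-style RoPE scaling: serve a model trained at 32K context at 64K.
        trained_ctx = 32_768
        target_ctx = 65_536
        rope_scaling = {
            "rope_type": "yarn",
            "factor": target_ctx / trained_ctx,               # 2.0
            "original_max_position_embeddings": trained_ctx,  # context length used in training
        }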

    • philipkglass 3 hours ago
      The difference is that the Allen Institute models have open training data, not just open code and weights. Meta doesn't share the training data you would need to reproduce their final models. For many uses open-weight models are nearly as good, but for advancing research it's much better to have everything in the open.
      • kevmo314 3 hours ago
        Reading their paper, it wasn't trained from scratch; it's a fine-tune of a Qwen3-32B model. I think this approach is correct, but it does mean that only a subset of the training data is really open.
    • mhitza 3 hours ago
      The linked open-weight release disallows commercial use and is only licensed for research purposes.
  • janmue 40 minutes ago
    “Strong closed-weight coding agents like Devstral Small 2 are an important point of comparison.”

    Devstral Small 2 is an open-weights model: https://huggingface.co/mistralai/Devstral-Small-2-24B-Instru...

  • ripped_britches an hour ago
    One claim in the article is definitely very wrong, or at least needs to be narrowed: Claude's is the only closed agent harness, and there are about two dozen open ones. Many models may be closed, but when people say "agent" they are generally referring to the harness, not the underlying model.
  • augusteo 3 hours ago
    The ahmadyan comparison is fair. Meta's CWM models hitting 65% vs SERA's 54% is a meaningful gap.

    But the interesting number here isn't accuracy. It's the $400 to reproduce top open-source performance. That's the part that matters for teams building internal tooling.

    We've been running agents on proprietary codebases at work. The pain isn't model quality. It's customization. Most off-the-shelf agents don't understand your repo structure, your conventions, your test patterns. If you can fine-tune a 32B model on your own codebase for a few hundred dollars, that changes the economics completely.

    But codebases change every day, so fine-tuning will have to be done continuously!

    Probably not worth it versus something like Claude Code.

    Curious whether anyone's tried this on non-Python codebases. Most SWE-Bench stuff is Python-heavy.

    • storystarling 3 hours ago
      The fine-tuning overhead is definitely a factor, but for smaller shops the hard constraint is usually inference VRAM. Running a 32B model locally or on a rented GPU is surprisingly expensive if you aren't saturating it. Even at 4-bit quantization you are looking at dual 3090s or an A6000 to get decent tokens per second. The $400 training cost is impressive but the hosting bill is what actually kills the margin compared to per-token APIs.
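
      To put rough numbers on the weight footprint alone (illustrative; real usage adds KV cache, activations, and quantization-format overhead on top):

        # Back-of-the-envelope weight memory for a 32B-parameter model.
        params = 32e9
        for name, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
            gib = params * bytes_per_param / 1024**3
            print(f"{name}: ~{gib:.0f} GiB of weights")  # fp16 ~60, int8 ~30, int4 ~15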
  • nickandbro 3 hours ago
    Great work! Really respect AI2; they open-source everything: the model, the weights, the training pipeline, the inference stack, and the corpus.
  • khimaros 3 hours ago
    it's great to see this kind of progress in reproducible weights, but color me confused. this claims to be better and smaller than Devstral-Small-2-24B, while clocking in at 32B (larger) and scoring more poorly?
    • ethan_l_shen 3 hours ago
      Hey! We are able to outperform Devstral-Small-2-24B when specializing on repositories, and come well within the range of uncertainty with our best SERA-32B model. That being said, our model is a bit larger than Devstral 24B. Could you point out what in the paper gave the impression that we were smaller? If there's something unclear, we would love to revise it.
      • khimaros 2 hours ago
        "SERA-32B is the first model in Ai2's Open Coding Agents series. It is a state-of-the-art open-source coding agent that achieves 49.5% on SWE-bench Verified, matching the performance of much larger models like Devstral-Small-2 (24B)" from https://huggingface.co/allenai/SERA-32B
        • ethan_l_shen 2 hours ago
          Ah, great catch; I don't know how we missed that. Thanks! Will fix.
  • Imustaskforhelp 3 hours ago
    Hey, this looks great! Is it available on OpenRouter?

    I wish AI2 could release a denser model than the 8B on OpenRouter for free, as I was using the Devstral model for agentic purposes.

    If we can get a good agentic 32B-class model on OpenRouter for ~free, then I feel like it will be very interesting to see how things go.

    Good luck, AI2! The premise of truly open-source models is really interesting, and I feel like it could bring more innovation to the space!

  • jauntywundrkind 4 hours ago
    Awesome stuff. Output speed looks crazy fast too.

    I wonder if this will indeed start prompting more language-specific work.

    Afaik training still requires not just looking at sample code but also being able to write loss functions and to have problems the AI can work on. That seems hard.

    One random thought: are there training styles that just delete some code from "good" projects and then make the AI make it work again?

    • CuriouslyC 3 hours ago
      The technique people use is to capture PR diffs from public repos, extract the tests, and then check whether agents can reconstruct a patch that satisfies those tests.
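
      A minimal sketch of that verification loop (paths and commands are illustrative; real pipelines such as SWE-bench's run per-repo, containerized test setups):

        # Hypothetical sketch: does the agent's patch satisfy the tests extracted from the PR?
        import subprocess

        def patch_passes_tests(repo_dir, base_commit, agent_patch, test_paths):
            subprocess.run(["git", "checkout", base_commit], cwd=repo_dir, check=True)
            applied = subprocess.run(["git", "apply", "-"], cwd=repo_dir,
                                     input=agent_patch, text=True)
            if applied.returncode != 0:
                return False                      # patch does not even apply cleanly
            tests = subprocess.run(["python", "-m", "pytest", *test_paths], cwd=repo_dir)
            return tests.returncode == 0          # patch makes the PR's tests pass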