1 point by daureg 5 hours ago | 1 comment
  • daureg 5 hours ago
    The arXiv paper (https://arxiv.org/abs/2603.19220) was already submitted to HN (https://news.ycombinator.com/item?id=47530052), but it sounds like a nice local model:

    > Despite its compact size (30B MoE model with 3B activated parameters), its mathematical and coding reasoning performance approaches that of frontier open models. It is the second open-weight LLM, after DeepSeek-V3.2-Speciale-671B-A37B, to achieve Gold Medal-level performance in the 2025 International Mathematical Olympiad (IMO), the International Olympiad in Informatics (IOI), and the ICPC World Finals.