Key highlights from the blog post (https://www.kimi.com/blog/kimi-k2-5.html):
- Agent Swarm: K2.5 can autonomously spawn up to 100 sub-agents, executing 1,500+ parallel tool calls with a claimed 4.5x speedup over a single-agent baseline.
- Video-to-code: Generate frontend code directly from a screen recording, with autonomous visual debugging (it "looks" at its own output and iterates).
- Open-source model weights on Hugging Face, plus a CLI tool (Kimi Code) with VSCode/Cursor/Zed integrations.
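The swarm's parallel fan-out idea can be sketched in plain Python; `call_tool` below is a hypothetical stand-in for a sub-agent's tool call, not Kimi's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def call_tool(task_id: int) -> str:
    # Hypothetical tool call (e.g., a web fetch or shell command).
    return f"result-{task_id}"

def fan_out(num_tasks: int, max_workers: int = 100) -> list[str]:
    # Dispatch tasks to a worker pool, analogous to a swarm of sub-agents
    # executing tool calls concurrently instead of one at a time.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(call_tool, range(num_tasks)))

results = fan_out(1500)
print(len(results))  # 1500
```

The speedup comes from overlapping I/O-bound tool calls; with 100 workers, 1,500 calls complete in roughly 15 "waves" rather than 1,500 sequential round trips.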
Curious to see benchmarks against Claude Opus or GPT-5.2 on real-world agentic tasks.