We assess that Claude Mythos Preview does not cross the automated AI-R&D capability threshold. We hold this with less confidence than for any prior model. The most significant factor in this determination is that we have been using it extensively in the course of our day-to-day work and exploring where it can automate such work, and it does not seem close to being able to substitute for Research Scientists and Research Engineers, especially relatively senior ones. Although we believe this is an informed determination, it is inherently difficult to make its basis legible, given the model’s very strong performance at tasks that are well-defined and verifiable enough to serve as formal evaluations.
The ECI slope-ratio measurement we introduce in section 2.3.6 shows an upward bend in the capability trajectory at this model, though the size of the bend varies significantly across the dataset and methodological changes we made to stress-test it. The identifiable driver is a set of specific human research advances made without meaningful assistance from the models available at the time. That said, we will continue to monitor this trend to see whether the acceleration persists, especially if it becomes plausibly traceable to AI’s own contributions.
What's the "automated AI-R&D capability threshold"? Anthropic has defined a danger line: if an AI can independently do the work of AI researchers, that's a big deal — because then AI could start improving itself without humans in the loop. This assessment is asking: has this model crossed that line?
Why are they less confident than usual? With past models, the answer was a comfortable "no." This time, they're saying "no, but..." — it's a much closer call. They're hedging.
In short, the conclusion rests on hands-on experience rather than benchmarks: the model aces well-defined, verifiable evaluations, but in day-to-day use it still can’t do a senior researcher’s job, and the authors admit that kind of evidence is inherently hard to make legible to outsiders.