Can you elaborate?
In general, most such claims today are without substance: they are made without any real metrics, and the metrics we actually need we simply don't have. We would need to quantify the technical debt of LLM code, how often it has errors relative to human-written code, and how critical / costly those errors are in each case relative to developer wages. We would also need to be clear whether the LLM usage is just boilerplate / webshit or work on legacy codebases involving non-trivial logic and/or context, and whether e.g. the velocity / usefulness of the LLM-generated code decreases as the codebase grows, and so on.
Otherwise, anyone can make vague claims that might even be in earnest, only for studies to show that productivity actually decreased, despite the developer "feeling" faster. Vague claims are useless at this point without concrete measurements and numbers.
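To make that concrete, here is a minimal sketch of the kind of back-of-the-envelope model I have in mind; every number and parameter name below is a hypothetical placeholder, not a measurement:

```python
# Hypothetical back-of-the-envelope model: is LLM-assisted coding a net win?
# All values are assumed placeholders; these are exactly the quantities that
# would need to be measured before any productivity claim means anything.

def net_value_per_dev_week(
    hours_saved: float,         # measured speed-up on first-pass implementation
    hourly_wage: float,         # fully loaded developer cost
    extra_defect_rate: float,   # additional defects per week vs. human-written code
    hours_per_defect: float,    # average review/rework time per defect
    debt_hours_per_week: float, # ongoing remediation of accumulated tech debt
) -> float:
    """Net dollar value per developer-week of LLM-assisted coding."""
    gains = hours_saved * hourly_wage
    costs = (extra_defect_rate * hours_per_defect + debt_hours_per_week) * hourly_wage
    return gains - costs

# Made-up example: 4h saved at $100/h, 1.5 extra defects costing 2h each,
# plus 1h/week of debt cleanup -> 400 - (1.5*2 + 1)*100 = 0, i.e. the
# developer "feels" faster but the team merely breaks even.
print(net_value_per_dev_week(4.0, 100.0, 1.5, 2.0, 1.0))
```

The arithmetic is trivial; the point is that nobody making these claims is actually measuring those inputs.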
From the video summary itself:
> We’ll unpack why identical tools deliver ~0% lift in some orgs and 25%+ in others.
At https://youtu.be/JvosMkuNxF8?t=145 he says the median is a 10% productivity gain, and the chart shows a 19% increase for the top teams (as of July 2025).
The paper this is based on doesn't seem to be available, though, which is frustrating!
In any case, IMHO AI SWE has happened in 3 phases:
1. Pre-Sonnet 3.7 (Feb 2025): Autocomplete worked.
2. Sonnet 3.7 to Codex 5.2/Opus 4.5 (Feb 2025-Nov 2025): Agentic coding started working, depending on your problem space, ambition, and the model you chose.
3. Post Opus 4.5 (Nov 2025): Agentic coding works in most circumstances.
This study was published in July 2025. For most of the study's timeframe, it isn't surprising to me that the tooling was more trouble than it was worth.
But it's different now, so I'm not sure the conclusions are particularly relevant anymore.
As DHH pointed out: AI models are now good enough.
I asked for evidence, you are replying to something else.