But the part I find more interesting is the second fear: not that AI fails visibly, but that it succeeds visibly. A system that genuinely compresses legal review time or automates reporting doesn't just create efficiency. It makes existing roles and empires contestable. And that's when the corporate immune system kicks in: "it won't be compatible", "it can't be done here", "it will take years", "it's probably illegal", "who takes responsibility?"
The result is what I call the controlled mediocrity trap: organisations deploying AI that's good enough to avoid embarrassment but not so effective that it threatens anyone. In Dataiku's 2026 survey, 74% of CIOs regretted at least one major AI platform decision. I think a lot of that regret is political, not technical.
I suspect this is more common than people admit...
Worth fixing if you can. I read the headline several times and didn't get it.
It may technically be a "direct" judgement... except it's judging a very indirect artifact, one that likely has had absolutely no review or validation by any human with domain knowledge or involvement.
We should be careful to distinguish between the capabilities it actually gives people and the capabilities they assume they've obtained.
> That objection misses the political point. Governance action is triggered by confidence, not by methodological purity. The board does not need a technically valid comparison to demand an explanation. It only needs a comparison that feels legible.
Right, whether that "directly evaluated summary" is itself correct matters less than we'd hope: its mere presence can affect what happens.
"Yes Minister" is a classic British comedy series, and in it certain manipulative bureaucratic characters are very aware of this phenomenon, cynically penning or revealing "reports" and "cautions" in order to achieve a certain policy outcome.