For example, a correct grand unified theory isn't useful if you don't know enough physics to understand it.
Humans don’t process data directly — we process compressed representations. So a meaningful scale would measure:
1- Throughput — how much structured data an agent can analyze per unit time.
2- Compression efficiency — how much insight is extracted per unit of data.
3- Relational depth — how many meaningful relationships can be modeled simultaneously.
Tools like Agentic Runtimes + GraphRAG don't just increase the volume of data an agent can access; they expand its relational modeling capacity and contextual memory. In that sense, they move users up a scale of informational leverage, not just a scale of data.
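To make the three measures a bit more concrete, here's a minimal sketch in plain Python (no agent framework assumed); the `AnalysisRun` record, the insight list, and the graph-edge proxy for relational depth are all hypothetical instrumentation choices, not an established metric.

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisRun:
    """Hypothetical record of one agent analysis pass."""
    items_processed: int          # documents/rows the agent actually read
    elapsed_seconds: float        # wall-clock time for the pass
    insights: list[str] = field(default_factory=list)             # extracted findings
    relations: set[tuple[str, str]] = field(default_factory=set)  # modeled entity links

def throughput(run: AnalysisRun) -> float:
    """Structured data analyzed per unit time (items/second)."""
    return run.items_processed / run.elapsed_seconds

def compression_efficiency(run: AnalysisRun) -> float:
    """Insight extracted per unit of data (insights/item)."""
    return len(run.insights) / run.items_processed

def relational_depth(run: AnalysisRun) -> int:
    """Meaningful relationships modeled simultaneously (graph edges as a proxy)."""
    return len(run.relations)

if __name__ == "__main__":
    # Illustrative numbers only.
    run = AnalysisRun(
        items_processed=1200,
        elapsed_seconds=300.0,
        insights=["supplier X drives most delays", "region Y demand is seasonal"],
        relations={("supplier X", "delays"), ("region Y", "seasonality")},
    )
    print(f"throughput: {throughput(run):.1f} items/s")
    print(f"compression efficiency: {compression_efficiency(run):.4f} insights/item")
    print(f"relational depth: {relational_depth(run)} relations")
```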
Agree with the measures; one follow-up question: how are you defining "insight"? I think exposing some of these measures would help people understand what the analysis actually covered, in other words, how much data was really analyzed. Maybe an additional measure is some kind of breadth (though I guess it could be derived from the throughput).
"Informational leverage" reminded me of "retrieval leverage" because yeah, the scale of data didn't change, the ability to extract insights did :D