The sharper entry point is Essay II, which diagnoses why pharma produces diseconomies of scale despite massive compute investment: an N-of-1 economy in which every organization rebuilds the same data infrastructure from scratch, and vendors profit from keeping it fragmented. https://unvarnishedgrady.substack.com/p/from-first-principle...
This final essay asks whether that failure mode generalizes. The claim: you cannot align an AI agent if the representational substrate beneath it is lawless. I'm curious whether people building agentic systems in production are seeing this, and whether anyone has found counter-examples where context-free architectures still let intelligence compound.