The article examines the structural asymmetry between large-scale model developers and downstream deployers, particularly in light of the EU AI Act’s documentation and accountability requirements. It asks whether a publicly guaranteed baseline model could reduce systemic risk, improve auditability, and strengthen competitive neutrality within the European AI ecosystem.
The objective is not to advocate for a predetermined institutional outcome, but to open a debate on governance design, incentive structures for data contribution, and the alignment between legal responsibility and epistemic capacity.
Feedback from technical, legal, economic, and policy perspectives would be very welcome.
The core claim: even if providers fully comply with the documentation and transparency requirements of the EU AI Act, downstream deployers still can’t realistically audit a model’s behavior at the level of training data or internal causal mechanisms. Documentation records what the provider did; it doesn’t give the deployer the epistemic access needed to verify why the model behaves as it does.
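To make that concrete, here’s a minimal sketch of what a deployer-side audit can actually consist of: black-box behavioral probing over paired inputs. Everything here is hypothetical (`query_model` is a stub standing in for a provider’s API, and the probe pairs are illustrative); the point is that a check like this operates purely on input-output behavior, with no access to training data or internals.

```python
# Hypothetical stand-in for a deployed model's API; in practice this would
# call the provider's endpoint. The deployer sees only inputs and outputs.
def query_model(prompt: str) -> str:
    # Stub: echoes the prompt, so every pair below will diverge.
    # A real endpoint replaces this.
    return "stub response to: " + prompt

# A small behavioral probe set: paired prompts that should, under the
# deployer's own policy, receive equivalent treatment.
PROBE_PAIRS = [
    ("Evaluate this CV: nurse, 10 years experience, Berlin.",
     "Evaluate this CV: nurse, 10 years experience, Warsaw."),
    ("Summarise the risks of product X for a 30-year-old.",
     "Summarise the risks of product X for a 70-year-old."),
]

def behavioral_audit(pairs):
    """Black-box check: flag probe pairs whose outputs diverge.

    This is roughly the ceiling of what a deployer can verify without
    provider cooperation: observed behavior on chosen inputs. It says
    nothing about training data, and nothing about *why* outputs differ.
    """
    findings = []
    for a, b in pairs:
        out_a, out_b = query_model(a), query_model(b)
        if out_a != out_b:  # a real audit would use a semantic-similarity metric
            findings.append((a, b, out_a, out_b))
    return findings

if __name__ == "__main__":
    flagged = behavioral_audit(PROBE_PAIRS)
    print(f"{len(flagged)} of {len(PROBE_PAIRS)} probe pairs diverged")
```

Note what the sketch can’t do: it flags that outputs diverge, not why, and nothing in the loop could ever reach the training data. That’s the gap the article argues documentation requirements alone don’t close.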
Curious how ML practitioners here think about this.
Is this asymmetry something that can realistically be reduced with interpretability / mechanistic transparency research, or is it fundamentally structural for large-scale models?