"This is the contribution we want to highlight: interpretability as a tool for hypothesis triage. Foundation models learn from data at a scale humans can't match. If we can inspect what they've learned we can use that to guide experimental priorities, turning AI models into a source of testable hypotheses rather than a black box."