2 points by keeda | 3 hours ago | 1 comment
  • keeda | 3 hours ago
    Fascinating study where they trained an AI model on a large dataset and then used interpretability analysis to figure out what biomarkers it had "learned" to look for.

    "This is the contribution we want to highlight: interpretability as a tool for hypothesis triage. Foundation models learn from data at a scale humans can't match. If we can inspect what they've learned we can use that to guide experimental priorities, turning AI models into a source of testable hypotheses rather than a black box."