1 point by businessmate a month ago | 2 comments
  • businessmate a month ago
    This paper presents two realistic 2026 case studies involving AI-mediated representations: one in financial services product communication and one in healthcare symptom triage. In both cases, harm arises not from overt malfunction but from reasonable-sounding language, normative framing, and omission of material context. Internal model reasoning remains inaccessible, yet responsibility attaches to the deploying organization.
    • chrisjj a month ago
      > harm arises not from overt malfunction

      This, of course, is only because there can be no "malfunction" in a system that carries the qualifier "AI can make mistakes".
