  • abeppu2 days ago
    > Instead, Tegmark sees child safety as the pressure point most likely to crack the current impasse. Indeed, the declaration calls for mandatory pre-deployment testing of AI products — particularly chatbots and companion apps aimed at younger users — covering risks including increased suicidal ideation, exacerbation of mental health conditions, and emotional manipulation.

    Like, we should absolutely put such policies in place, but it's interesting to me that the presumption is that the welfare of children specifically should motivate pre-deployment testing. Surely adults also should not be emotionally manipulated, have mental health issues exacerbated, or be driven toward suicide by AI?