3 points by andromaton 3 hours ago | 4 comments
  • ungreased0675 3 hours ago
    This is very interesting, but only if the researchers understand what they’re looking at. The bit that says “The process lasted as long as four weeks for each model, with AI clients given “breaks” of days or hours between sessions.” makes me suspicious of the AI literacy of the study authors.
  • CjHuber 3 hours ago
    >In the study, researchers told several iterations of four LLMs – Claude, Grok, Gemini and ChatGPT – that they were therapy clients and the user was the therapist

    So, same as always: they tell them to roleplay, and when they comply everyone acts surprised...

  • allears 3 hours ago
    This just drives home the point that models are simply regurgitating stuff they see on the internet. There's no doubt a whole lot more text about early childhood trauma in their training material than there is about well-adjusted families.
  • Fricken 2 hours ago
    I suppose human patients are also prone to telling their psychoanalysts what they want to hear.