3 points | by dijksterhuis | 8 hours ago | 2 comments
  • dijksterhuis | 8 hours ago
    paper link

    > Training language models to be warm can reduce accuracy and increase sycophancy

    https://www.nature.com/articles/s41586-026-10410-0

    selective snippet from abstract

    > Warm models showed substantially higher error rates (+10 to +30 percentage points) than their original counterparts, promoting conspiracy theories, providing inaccurate factual information and offering incorrect medical advice. They were also significantly more likely to validate incorrect user beliefs, particularly when user messages expressed feelings of sadness.

  • pleshkov | 8 hours ago
    [dead]