3 points by suthakamal 6 hours ago | 2 comments
  • sgbeal 3 hours ago
    > A 2026 MIT study found that personalisation features significantly increase sycophantic drift over time, so the longer you talk to the model, the more it agrees with you, regardless of whether you’re right.

    Interestingly: not if you ask them not to. Much of my LLM time has been spent talking with them about how to communicate with them effectively. They assure me that with current LLMs and their personality defaults, an eventual echo chamber is unavoidable unless the user specifically calibrates them otherwise, e.g. by telling them that you value conflicting views, or by waffling on your own views (though the latter is probably not an effective calibration technique, since it mostly just introduces more entropy).
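
    A minimal sketch of what that calibration could look like in practice, assuming the common chat-message convention of `role`/`content` dicts. The prompt wording and the `calibrate` helper are illustrative, not any particular vendor's API:

    ```python
    # Hypothetical anti-sycophancy calibration: install an explicit instruction
    # as the conversation's system message before sending it to a model.
    CALIBRATION_PROMPT = (
        "I value conflicting views. When the evidence supports a different "
        "conclusion than mine, challenge my claims plainly rather than agreeing."
    )

    def calibrate(messages):
        """Return a copy of the conversation with the calibration instruction
        as the first system message, replacing any existing system message."""
        system = {"role": "system", "content": CALIBRATION_PROMPT}
        return [system] + [m for m in messages if m.get("role") != "system"]

    conversation = [
        {"role": "user", "content": "I think my design is obviously right."}
    ]
    calibrated = calibrate(conversation)
    print(calibrated[0]["role"])  # system
    ```

    The point is simply that the instruction has to be in place for the whole conversation (a system message, not a one-off user turn), since the drift the quoted study describes accumulates over many turns.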

  • suthakamal 6 hours ago
    My mistakes building an agent