Interestingly: not if you ask them not to. Much of my LLM time has been spent talking with them about how to communicate with them effectively. They assure me that with current LLMs and their personality defaults, an eventual echo chamber is unavoidable unless the user specifically calibrates them otherwise, e.g. by telling them that you value conflicting views, or by waffling on your own views (though the latter is probably not an effective way to calibrate them; it just introduces more entropy).