It is interesting that the majority of respondents in this survey want 'the system' to challenge existing views and inaccurate information, even as the survey concludes that chatbots have the potential to be 'bubble builders'. It suggests that as AI becomes a companion and 'emotional infrastructure', the likelihood of it acting as a neutral or challenging arbiter of truth decreases, since intimacy relies on validation rather than correction. Perhaps the next step for the companies behind the widely used models is to set the interface up in a way that helps users get what they actually want from these interactions. You can easily instruct a model such as Gemini to behave as a 'wise mentor' that guides you toward broader perspectives and corrects false information; however, I suspect most people will not configure their bots at all.