This is a very important question. I think the answer lies in the incentives of the platform, though it will have to bear out in practice. The platform incentivizes the development of an argument. Coming up with an argument is already a difficult exercise because of the structural demands of consistency and plausibility inherent in it (in stark contrast to the ease of faking a fact). Incentivizing an elaborated structure raises the bar further: it blocks inferential sleights of hand (the bad-faith move) by forcing those inferences to be explicit and open to scrutiny. The most obvious cases should be detectable with LLMs.
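To make that last point concrete, here is a minimal sketch of what such an LLM check might look like. Everything in it is an assumption of mine rather than an existing API: `call_llm` is a placeholder for whatever completion endpoint the platform would use, and the prompt and JSON format are only illustrative.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM completion call (assumed, not specified)."""
    raise NotImplementedError("wire this to your LLM provider of choice")

def flag_inferential_gaps(premises: list[str], conclusion: str) -> dict:
    """Ask the model whether the stated premises actually support the
    conclusion, and to name any unstated premise being smuggled in."""
    prompt = (
        "Premises:\n"
        + "\n".join(f"- {p}" for p in premises)
        + f"\nConclusion: {conclusion}\n"
        + "Does the conclusion follow from the premises alone? "
        + 'Reply as JSON: {"follows": true/false, "missing_premises": [...]}'
    )
    # Parsing the model's reply as JSON is itself a simplification;
    # a real system would need to validate or retry malformed output.
    return json.loads(call_llm(prompt))

# Example: a classic sleight of hand, where the load-bearing premise
# ("what is natural is safe") is never stated.
# flag_inferential_gaps(
#     premises=["This remedy is natural."],
#     conclusion="Therefore this remedy is safe.",
# )
```

The point is not that this particular check is reliable, only that once inferences are forced into the open, even a simple automated pass has something concrete to inspect.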
Another important defense is that this platform is going to be built around open discussion. I am not trying to prevent errors outright, just to provide a place that, on average, corrects them, even if the participants are not always operating in good faith.