Like this attempt,† where the bot proceeded to lecture users who were hostile towards it before it was eventually banned.
AI-generated articles are, on balance, inferior, except for people who want simple, low-quality content.
But LLMs are moving up the value chain with Deep Research. They can give explanations tuned to a reader's knowledge/viewpoints and provide interactive content Wikipedia doesn't support. That is a killer app for math/science topics.
Wikipedia will win against a generic corporate encyclopedia on neutrality/oversight, but it'll lose badly on UX, which is what matters.
I think the tipping point will be direct integration of academic sources into ChatGPT/Claude/Gemini and a "WikiLink" type way to discover interesting follow-up topics.
I can't trust AI answers for serious historical or social-science topics without the first. And my chats with AI generally end once I get the answer I need, because there's no way to get rabbitholed into other topics.
I might be slightly wrong, but probably not by much, yet. Sure, there's an element of "holding-it-wrong-ism" in my position. But ... it does actually take practice to get it right, and best practices are badly documented!
That said, the situation is changing rapidly: https://news.ycombinator.com/item?id=47547849 "AI bug reports went from junk to legit overnight, says Linux kernel czar"
--
Of course they banned AI; they could barely allow CSS.
Can't wait for the 80-page Talk threads.
It seems like a smaller "win" than most think. It just discourages wholesale rewriting and creation of new articles using AI; assistance with editing is explicitly allowed.