- How will they survive?
- What happens to all those unsolved problems, the ones for which AIs haven't found a source with a solution to scrape?
Now, the rare times I've asked a question, I've violated some rule I'd forgotten, or one that's new (they're so AI-allergic).
Reddit, interestingly, has continued to thrive. I think in part because it's still largely fun and there are literally millions of communities.
Stackoverflow once benefited from its pedantry; now there's not much value in such pedantry, and it's just not a fun place to hang out.
It would be better if closing questions cost 1000 reputation. That's one advantage AI has over Stack Overflow: it will at least try to answer your question every time rather than randomly shutting you down for its own (wrong) reasons.
Unrelated to AI, I haven't really had a positive experience on StackOverflow in 7+ years. The way they aggressively close questions as duplicates despite the previous questions having incomplete or outdated answers was already making it a much less useful site.
First, a disclaimer: I do not speak for other elected moderators, nor for Stack Exchange Inc. My views are my own.
Site traffic has been declining for a long time and that's not a mystery. Empirically, the rise of Large Language Models has sped up the decline, or at least has not reversed the trend. This is both good and bad. Good because LLMs capture the vast majority of questions that would be quickly closed: underspecified, unclear, duplicate, not actually about programming, and so on. This increases the signal-to-noise ratio and average usefulness of SO questions. Bad because of all the implications of declining traffic that everyone can imagine.
Is it dead? No. In fact, SO has a great opportunity to specialize in answering questions that LLMs cannot answer (bleeding-edge technology, complex debugging problems, emerging issues, you name it). The community is still alive. Whether Stack Exchange Inc. is able to understand and adapt to this shift is unclear.
Company aside, the biggest challenge we moderators are facing is to identify and remove LLM-generated content. Keeping SO free of LLM-generated content is critical in helping the site maintain an edge against AI tools that provide quick and confident-sounding answers to whatever problem you throw at them. It's an uphill battle though, and one that is probably unwinnable, but the community hasn't given up yet.
There is also a higher tolerance for newbie questions and duplicates.
(My experience here was with Kagi; I'm not sure how Google and others handle this.)