1. Random website
2. Random thematic blog, SEO'd
3. Random forum
4. Curated question portal like SO
5. Thematic subreddit when it became impossible to ask on SO
6. LLM
Can you explain what you mean by that?
Technology changes at a fast pace, so new questions would get asked, then closed by moderators and pointed to similar questions that might be five or six years old and no longer relevant, essentially ending the discussion on many topics and actively preventing progress in certain areas.
At some point it just stopped making sense for me to ask questions on SO: if you can google the answer, then what's the point? But if you can't google the answer, then some angry, competitive user is likely to close your question for one reason or another.
I can say extremely confidently, from years of experience, that the people who were always "trying to be the first to answer your question" were, overwhelmingly, the ones trying hardest not to let anyone ever close anything, even harder than the most aggrieved newbies asking questions without caring about the underlying community. Nobody sits around answering multiple questions a day for years on end purely on intrinsic motivation. I joined in late 2010 and posted answers all the way until mid-2023, but fully half of those were before the end of 2012. There are reasons for that. Meanwhile, there are people with reputation scores in the seven digits, even though the site awards no further privileges past 25,000. The obvious conclusion is that we're talking about people primarily motivated by Number Go Up, and closing questions is an impediment to Number Go Up, so it must be prevented at all costs.
Questions get closed for the reasons that are listed in the interface for closing questions, which are also described in the Help Center and explained in detail on meta (https://meta.stackoverflow.com/questions/417476). When questions don't meet the expected standards, it's important to close them as quickly as possible, because when people answer questions that should be closed, they are actively making the site worse (https://meta.stackoverflow.com/questions/429808). And since there will always be people around who are motivated by Number Go Up, there was value in preempting them.
Really the system was poorly designed. The Staging Ground was the one shining beacon of hope, because it inherently prevented answers by default, providing only a comment thread with the explicit purpose of fixing issues with the question so that it could meet site standards.
While AI definitely pulled a lot of people away from SO, most of them are relieved that they no longer have to interact with that garbage community when they have an IT/CS question. They didn't leave because of the website design, but I also don't believe the new design would have chased many away either.
These users' rule hasn't really worked out so far, as demonstrated by the current state of SO. Maybe it's time to ignore that very small but very vocal group? Though that probably should have happened years ago; at this point it may just be time to cut their losses.
But most importantly: the site does not even remotely, in any imaginable way, empower those users. It actively hinders them; the staff have berated them over the years for alleged "unfriendliness"[2], and staff constantly dump their own ideas on the meta site, then ignore all feedback and push those ideas through anyway.
The main site is noticeably slower now. As far as I can tell, this is because they've taken code paths from the beta (especially whatever they use to load the code boxes and attach a little JS widget to the top of them) and applied them back to the main site, which worked perfectly well before.
[0] Basically: a magic robot that can listen to them and try to suggest an answer one-on-one, without caring whether literally anyone else on the planet could benefit from that exchange, because the robot can tirelessly do the same for the next person.
[1] I.e., so that humans could say something once and actually have it be relevant to many people.
[2] I.e.: the company makes money from views and a lot of people don't want to view a site where they have to actually meet any kind of standard whatsoever to participate; so it's the fault of people who have an actual vision for the site being useful.
This is frankly impossible. By default, anyone with an account can post an answer to a question (https://stackoverflow.com/help/privileges/create-posts).
Question protection sets a barrier at 10 reputation, which you can get from a single upvote on an existing answer, or from five approved edits to existing questions or answers. Further, it was only applied to questions that had already been answered repeatedly, specifically because the question was attracting redundant answers that weren't adding any more value.
I imagine a huge number of people were just browsing for quick answers and then bailed as I did.
Every vote-up/vote-down community is retarded, including this one.
Hell, every community is retarded if you think about it.
The group never invents - and all that Steinbeckian stuff
Stack Overflow was better than what came before it (the Experts Exchange site, of expert-sexchange infamy). Time will tell how it compares to the AI slop of the dead internet (the current generation of LLMs, trained on human-generated data, is great; let's see what happens when most of the data is created by bots).
Doubling down on how it was done in the "good old days" probably wouldn't work, because you would slowly bleed users to AI. Selling data to AI companies might work for a bit, but I would guess the sales value of SO's data has quickly diminishing returns. So what is their path forward?
In an ideal world, LLMs would take all of the basic RTFM-style questions and leave SO the harder questions that are still general enough to be applicable to others. LLMs seem to be getting pretty good at those as well, though, so I don't know where that leaves us.
SO for discussions of taste? "I have these two options to build this; how should I approach it?" They tried to sell their own GPT wrapper for a while, didn't they? The use case I can see for that: a user asks a question, the LLM answers it, the user is unsure about the answer, so it gets posted as an SO thread and the rest of the userbase can nitpick or correct the LLM response.
Edit: I also seem to remember they had a job portal in the sidebar for a while; what happened to that? It seems like a reasonable revenue stream that is also useful to users.
I think the deeper question is how SO would get paid for that.
Historically, SO has been funded by advertising. Users would google their question, land on SO, get an answer, and SO would get paid by advertisers. (The job portal was a variation on the advertising product.)
Even in your ideal world, newbies and experts would first ask their questions to an LLM. The LLM might search SO and find the answer there, but the user would get the answer without viewing an ad, so SO wouldn't get paid for that.
The same issue is facing Wikipedia. Wikipedia isn't funded by commercial advertisers, but they are funded by donations, which are driven by ads. If LLMs just answer the questions based on Wikipedia data, the user won't see the Wikipedia ad asking them to donate; they may not even know that Wikipedia was the source of the information, so they may not even develop a fondness for Wikipedia that's necessary to get users excited to donate.
This is why you see people shouting about how LLMs are "killing the web." I think it's more correct to say that LLMs are killing free web resources. Without advertising, not even donation-funded resources can remain available for free.
There are of course several huge issues with this, but that's why I prefaced it with "ideal world," hahaha.
The biggest of which is: why would most users want their questions publicized if the ChatGPT answer, off the Stack Overflow platform, is good enough or even better?
Or how existing users and question-answering volunteers would feel about being reduced to cleanup crew and training data for LLMs.
Or at least they should. I think too many people get into a routine of letting themselves get angry about the repetitiveness of the questions they're answering, and then somehow getting addicted to that.
Now that we have LLMs I don't need basic questions answered. I do still need hard questions answered by experts and AI has normalized paying money for QA.
I would definitely pay for a "human ChatGPT" service where the answers are written by experts who get paid per answer, e.g. grad students. Then they can resell this data to AI companies. Or maybe the economics are such that they can take enough money from AI companies to pay the experts and I don't need to pay anything at all.
This won't bring in as much money as advertising used to, but that business model is dead anyway. There's no future for a QA site at the low end.
and a change of form from (questions being asked primarily as a means to an end for one person) to (Q&A pairs being written as reference materials)
and requests for comment on which approach would be the most idiomatic or whether one has fallen into an XY trap or other things that rely on human 'taste' rather than LLMs' blithe march of obedience
As a data source for LLMs, and by becoming the place someone goes where ChatGPT can't produce a sufficient answer.
To me, it looks more like Digg from the old days.
There's a good chance SO is used more by AIs than by real humans nowadays.
Out with the old, in with the new.
There is no utility it fulfills except as a watering hole for those unfortunate souls who built their village there.
For the sake of Astronomy Stack Exchange, MathOverflow, and the others, I won't wish that Stack Overflow go the route of Ask Jeeves (withering away into irrelevance), but I'm hoping they take a look inside and see why they failed.