This has likely been going on since the first ChatGPT was released.
So you prefer the authority of the messenger over the merit of the message?
I'm working on Binoculars with some UMD and CMU folks and wanted to test it out on this. I downloaded one bot's comment history (/u/markusrorscht). 30% of the comments rated human-like, compared to 95-100% of comments from a few human users.
So, practically speaking, statistical methods are still able to provide a fingerprinting method, and one that gets better as comment history gets longer. And they can be combined with other bot detection methods. IMO bot detection will stay a cat-and-mouse game, rather than (LLM-powered) bots winning the whole thing.
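The aggregation idea above can be sketched in a few lines. This is a minimal illustration, not Binoculars itself: it assumes we already have some per-comment detector that emits a probability that a comment is human-written, and simply shows why a longer comment history yields a more stable per-user fingerprint than any single comment. All names and score values here are hypothetical.

```python
def human_like_fraction(comment_scores, threshold=0.5):
    """Fraction of a user's comments rated human-like.

    comment_scores: per-comment probabilities that a comment is
    human-written, produced by some upstream detector (hypothetical here).
    A single score is noisy, but the fraction over a long history
    separates users cleanly.
    """
    human_like = [s for s in comment_scores if s >= threshold]
    return len(human_like) / len(comment_scores)

# Illustrative histories: a suspected bot vs. a typical human user.
bot_history = [0.2, 0.55, 0.6, 0.1, 0.3, 0.7, 0.2, 0.3, 0.4, 0.1]
human_history = [0.9, 0.8, 0.95, 0.7, 0.85, 0.9, 0.6, 0.88, 0.92, 0.8]

print(human_like_fraction(bot_history))    # low fraction -> likely bot
print(human_like_fraction(human_history))  # high fraction -> likely human
```

The per-user fraction is just one signal; in practice it would be combined with account age, posting cadence, and other bot-detection features, which is what keeps this a cat-and-mouse game rather than a solved problem.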
The internet is swarmed with bots. I would estimate something like 25% of all Reddit, X/Twitter, YouTube and Facebook comments come from bots. Perhaps higher.
It's not like r/CMV was some purely human oasis in the Reddit bot-sea.
It's a tough pill to swallow, but the internet is dead as far as open forum communications go. We need to get a solid understanding of the scope, scale, and solutions to this problem -- because, trust you me, it will be exploited if not.
Welcome to reddit Simon! Nothing ever happens and a large percentage of posts are faked.
You can find discord groups for every major subreddit that are dedicated to making up stories to see what the most outlandish thing is that people will believe.
I like Simon's musings in general, but are we not way past this point already? It is completely and totally inevitable that if you try to engage in discussions on the internet, you will be influenced by fake personal anecdotes invented by LLMs. The only difference here is they eventually disclosed it, but aren't various state and political actors already doing this in spades, undisclosed?
Engineering is fundamentally about exercising the power of intelligence to change something in the physical world. Posts to the effect of "<bad thing> is inevitable and unstoppable, so it isn't worth talking about" strike me as the opposite of the hacker ethos!
By the way, people die in house fires from toxic smoke inhalation and a lack of oxygen. Engineers created smoke detectors and other devices to lower the risk of fire due to electrical shorts, gas leaks, etc., and to create fire suppression systems.
People still die because they didn't replace batteries, didn't follow electrical cord/device warnings, or left candles or other heat sources unattended. We discuss these events as warnings and reminders that accidents kill when warnings are not followed, when inattentiveness allows failure to propagate, and as a reminder that rarely occurring events still kill innocent people.
Maybe this will motivate people to meet in person rather than relying only on online anecdotes, at least until that too is corrupted by cyber brain augmentation and in-person propaganda actors.
Yes, we know that personal stories can be compelling, and communicating with someone with different experiences from ours can be enlightening. Still, before applying these learnings to larger groups, we should remember that individual experiences do not capture the entire population.
Then stop basing your opinion on issues on personal anecdotes from complete strangers. This is nothing new.
"I've been a sysadmin operating RabbitMQ and Redis for five years. I've found Redis to be a great deal less trouble to administer than Rabbit, and I've never lost any data."
See why I care about this?
It is plain and simply unethical to do such research on human subjects, regardless of how many other bots there are out there.
It is a matter of principle and ethical responsibility. I would have expected researchers especially to be conscious of this.
The world has been full of snake oil salesmen since the dawn of time, all with highly persuasive sob stories.
If you rely on shortcuts, like anecdotes or 'credentialism' for those who profess to be experts, then you will get rolled over regularly. That's the cost of using shortcuts.
That the information may be fraudulent, put forward by this season's Dr Andrew Wakefield, has to be factored into any plan for using external sources.
https://economictimes.indiatimes.com/magazines/panache/reddi...
We all have an expectation that these message boards are like the forums of the 2000s, but that's just not true and hasn't been for a long time. It seems we will never see that internet again, because AI was the atomic bomb dropped on all this astroturfing and engineered content. Educating people away from these synthetic forums appears nearly impossible.