There's still a tiny window of opportunity for engineers to design technical safeguards, but eventually this problem will move past the realm of what's easily solvable, out of our hands, and into policy makers' hands. A big part of me feels like that window has already slammed shut.
It's hard to distinguish who's a bot, who's a narrative pusher and who's an enthusiast. Which is exactly what you'd want from an astroturfing campaign. There's a clear benefit: people in the industry are reading this, and in doing so they're granting mindshare.
There's one mechanism that could prevent inauthentic support campaigns: personal key signatures. But judging by how wary people, especially in the US, have to be of government surveillance, this isn't going to catch on.
This phenomenon appears to be incrementally coming for every single topic and public platform.
I literally ask it to look for something, and immediately afterwards (before reading the long-winded result) ask whether the results were real or fabricated. That's just how the cost-benefit analysis works out, and I only learned it after many rounds of reading the results, getting suspicious of a few, doing web searches to verify them, failing to find them, and coming back to ask if they were real.
"Sorry! It's absolutely fair that you called me out on that... It's important that you hold me to a high standard... You're absolutely right..."
I'm finding it valuable for compressing all of the docs in the world, so I don't have to look up what a function does or how to accomplish something in some framework or CLI. I find it capable of writing code if I move an inch at a time; build copious verbose debugging output that I feed back into it every time it screws up; and, when it gets stuck in a stupid loop, just debug by hand rather than waste hours trying to get it to see something it doesn't want to see.
You would be surprised at how cheaply opinions can be purchased, especially globally.
Isn't this exactly what you'd expect in a connected world? The best arguments from both sides proliferate, thereby causing "the same arguments and tropes echoing through every thread".
I would expect a figurative war for human attention. With so much information available, everyone would try to make people focus on what they want to communicate.
> The best arguments
Some of these tropes and arguments aren't really the best. There are a lot of rhetorical gotchas, e.g. "that's exactly what I'd expect from a human" when an automated solution isn't up to par.
> from both sides
The only real "side" is the one actively pushing for something. Everyone else isn't a camp - they're just random people.
How does this relate to online commenting? Are you expecting the "figurative war for human attention" to make comments more diverse?
>Some of these tropes and arguments aren't really the best. There's a lot of rhetorical gotchas, e.g. "that's exactly what I'd expect from a human" when an automated solution isn't up to par.
I think you're overestimating the epistemic rigor of the average internet commenter, eternal September, etc.
>The only real "side" is the one actively pushing for something
Are you implying the "astroturfing" is only on one side? If so, you might just be experiencing motivated reasoning and/or confirmation bias. Most of the astroturfing behavior can be attributed to the anti-AI side as well, e.g. people complaining about electricity or water consumption in every thread about the impacts of AI, or about "AI slop".
A viable strategy is to disseminate messaging reinforcing a belief beneficial for the disseminating entity, in a way that invokes emotion (like fear or anger), especially in influential spaces allowing for anonymity.
But in general this line of questioning won't lead to a satisfying conclusion. The assumptions you requested (connected world) aren't specific enough to determine what we should expect from comments in online spaces (and by extension, to demonstrate that the current situation is a natural outcome).
> I think you're overestimating the epistemic rigor of the average internet commenter, eternal September, etc.
Yeah, but this place, quite frankly, is above average. Not to the point of being immune to manipulation, obviously.
> Are you implying the "astroturfing" is only on one side?
No. I'm pointing out that the "two sides" framing you insist on is a mistake. There is only one organized camp with a clear financial incentive to have people believe in "Autonomous Coding Agents", which justifies capital investments in that area.
People who are concerned about power consumption, people who don't like hardware unavailability, and people who think LLMs are useful tools but nowhere close to autonomous delivery of software systems are all distinct groups without financial incentives. But they do have the right to push back against the relentless messaging barrage from that camp.
Need to double check what is available, though I feel like that angle could work.
I’ve also been wondering if a simple lie-and-deception detection system could be a useful angle. It’s complicated in practice, though human intuition would say we figured this out millennia ago: I can’t tell you how many times my body has picked up on someone’s toxic negative vibe by feeling alone. I think we probably understand this better than we realize and could represent it in the computer space with analysis of signals and some follow-on questions. Hope I’m not too naive here.
[0] e.g. https://www.businessinsider.com/sam-altman-tools-for-humanit... and the feature piece at https://time.com/7288387/sam-altman-orb-tools-for-humanity/
I was surveilled, experimented on and followed by them for being American-Pakistani and speaking out against them from 2022-2023. It was a scary time and I wish I were making this up. I wonder sometimes if they really are the good guys, and I just got things backwards. I've also heard that when you are kidnapped and held in hostile territory for long enough, you fall in love with your kidnappers.
Happy to share more details if anyone’s curious.
(It's interesting that conservatives saw it as a partisan cause.)
It's against the HN guidelines to insinuate that astroturfing happens on HN.
I found it amazing that I could not find any organisation that tracks these campaigns. These are often very well funded and those funds go to people.
Part of the problem is that a successful public-opinion campaign results in something most people believe; we probably only get to see the failures. Challenging something that is widely held is not well received, whether you are right or wrong.
Some things I did find out. Fake news stories don't change people's opinions very much. Instead, they enable media to shape narratives, because people will reject genuine stories outside the narrative on the grounds that fake news stories exist. Fake news exists to be seen as fake, establishing that the things you disagree with could also be fake.
There are companies that specialise in this.
Reputation management companies might tell you who their clients are or what they do, but never at the same time. I suspect the best ones do neither.
Every propagandistic argument is going to be like that for 80% of people, and 40% of people are going to be within that 80% about 99% of the time. They think the biggest issue of our time is how much people complain.
Their landing page stops short of saying that Doublespeed would be "a good fit for your political campaign." I'd rather fight an AI-powered drone than become the victim of a "Dead Internet-as-a-Service" startup. At least flying lawnmowers are honest.
People are just not ready to be skeptical of this; they've barely gotten used to phone scams, and now they have astroturfing and deepfakes to contend with.
I've got a couple of them here, mostly smaller ones (3", 5") and one large one (9"), and I'm super respectful of those props; the ones here obviously don't carry any kind of payload. Even so, I'd hate to see one come at me. I've seen what they can do to pieces of hardwood.
Good for you.
> although I'd argue that nowadays, the line between a civilian and a combatant is blurred more than ever. at least in Ukraine it's like that
This is very true and very few people outside of Ukraine realize this.
I think everyone would agree with this, but is there any formal evidence of how Twitter and TikTok affect elections or legislation?