Astroturfing is the practice of creating a fake "grassroots" movement to make it look like a cause, product, or candidate has widespread public support when it actually does not.
Where would one find some reddit users willing to do such reviews, by the way?
They're buying stolen Reddit accounts and spamming over 500 videos a day to various subreddits.
They're also advertising fake "unlimited" plans. They're a reseller, and their pricing is 1/10th the upstream API pricing, so they're metering, throttling, and banning users who cost them money.
They're getting thousands of people to subscribe to $1800 "18 month" plans.
Their unofficial subreddit is full of complaints. Probably a dozen complaint threads a day now.
Highly unethical company.
I wish there were laws that required large social media sites to publish data to their end users that indicate the severity of the problem.
(Also it's the kind of website where you absolutely can get good responses from "Show HN: A thing you might want to use and here's how much profit I'm making from it already" until a bunch of green usernames say nice things about it)
It's also the flip side of people feeling free to say what they want under the cover of (pseudo) anonymity.
I wonder if one solution is to partition the web into places where anonymity isn't possible, and places where it is.
1. I am one of the named, publicly accountable people registered as participating in this thing, and the same one who posted under this pseudonym yesterday
2. Provided I'm reasonably careful, you can't tell which one is me unless n of m participants agree to unmask me (a toy sketch of that part follows after this list)
3. I can only post under one name at a time. I can change pseudonym, but then my old one is marked as abandoned, so I can't trivially fake conversations with myself.
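To make point 2 concrete, here's a toy sketch in Python of the n-of-m unmasking, using Shamir secret sharing over a prime field. The "secret" stands in for whatever record links a pseudonym to a registered identity; nobody holds it alone, but any n of the m shareholders can jointly recover it. All the names here are made up for illustration; this isn't a real protocol design.

    import random

    PRIME = 2**127 - 1  # a Mersenne prime; plenty for a toy demo

    def split_secret(secret: int, n: int, m: int) -> list[tuple[int, int]]:
        """Split `secret` into m shares; any n of them reconstruct it."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(n - 1)]
        def f(x):  # evaluate the random degree-(n-1) polynomial at x
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, m + 1)]

    def recover_secret(shares: list[tuple[int, int]]) -> int:
        """Lagrange interpolation at x=0 recovers the secret."""
        total = 0
        for j, (xj, yj) in enumerate(shares):
            num, den = 1, 1
            for k, (xk, _) in enumerate(shares):
                if k != j:
                    num = (num * -xk) % PRIME
                    den = (den * (xj - xk)) % PRIME
            total = (total + yj * num * pow(den, -1, PRIME)) % PRIME
        return total

    identity_link = 123456789  # pretend this links a pseudonym to an identity
    shares = split_secret(identity_link, n=3, m=5)  # one share per participant
    assert recover_secret(shares[:3]) == identity_link  # any 3 of 5 suffice
    assert recover_secret(shares[1:4]) == identity_link

With real participant counts you'd want the unmasking itself to be auditable too, but the point is just that "n of m agree" is an off-the-shelf primitive, not science fiction.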
Doesn't that then require a centralised (or a hierarchy of centralised) authority to manage point 3?
Who would that be? (Each country issuing its own citizens' IDs?)
If the solution requires you to keep a private key private (to prove who you are), how is the average person going to do that?
How are you going to build your cryptography into all those different systems?
All you need is one link between that pseudonym and some identifying info - like an IP address or a payment - and it's all gone, and you've already built a perfect system for government tracking.
So even if you have built all that successfully I'd still suggest the world would split into sites that would use it and sites that wouldn't.
> Doesn't that then require a centralised (or a hierarchy of centralised) authority to manage point 3?
I'm not sure. I think it might be possible. Lots of wild things are possible with cryptographic primitives.
Even if a centralized authority is necessary for coordination (which seems inevitable anyway, the forum has to be hosted somewhere), if the authority could blind itself about who was who, that would be valuable.
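For a concrete flavour of "blinding itself": with Chaum-style blind RSA signatures, the authority can sign a user's registration token without ever seeing it, so it can later verify that a pseudonym belongs to *some* registered participant without knowing which one. Here's a toy Python sketch, with deliberately tiny insecure parameters and no padding; everything in it is illustrative, not a real design:

    import hashlib
    import random
    from math import gcd

    # Toy RSA keypair for the authority (never use sizes like this for real).
    p, q = 1000003, 1000033
    n = p * q
    e = 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    def H(msg: bytes) -> int:
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

    token = H(b"my-pseudonym-public-key")  # what the user wants certified

    # User blinds the token before sending it to the authority.
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:
            break
    blinded = (token * pow(r, e, n)) % n

    # Authority signs blindly: it sees only `blinded`, never `token`.
    blind_sig = pow(blinded, d, n)

    # User unblinds; the result is a valid signature on `token`.
    sig = (blind_sig * pow(r, -1, n)) % n
    assert pow(sig, e, n) == token  # anyone can check the credential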
Yes, any system which gives users the option to hard-ID themselves - which I think is a desirable feature, and probably necessary to prevent spam/sabotage - needs to piggyback on something existing, like a government ID scheme of some sort.
> If the solution requires you to keep a private key private (to prove who you are), how is the average person going to do that?
Same way they keep their cryptographic keys secret today: their software does it for them with very little conscious effort. And yes, people can get hacked, etc. I'm not proposing something that would solve all problems once and for all, just something that would be useful.
> All you need is one link between that pseudonym and some identifying info - like an IP address or a payment - and it's all gone,
Yes, people would still have to be careful not to ID themselves accidentally. But worst case, we're back to more or less the kind of forum we're in right now with respect to identifiability (and probably still better when it comes to astroturfing).
> So even if you have built all that successfully I'd still suggest the world would split into sites that would use it and sites that wouldn't.
That's OK. I imagine lots of different "forums" or "chatrooms" with the feature that you know the list of participants, but you don't know which nym is which participant.
Isn't it much simpler to have sites which link to real IDs and sites which implement their own throwaway pseudo-anonymity - and all we need is for it to be clear when real IDs are being used?
Is there really any benefit to conflating the two?
The people who frequent this forum think they are immune to astroturfing because they all work in ad tech.
It's exhausting, especially since people will write out real advice and corrections about how to deal with rats, bedbugs, neighborhoods, etc. and it all goes into the ether in hopes someone will get scammed. Or maybe it's an SEO thing because the site name is so generic it's un-googleable. I hope it doesn't work.
I used to co-work next to an SEO specialist back in my freelance days, and he would offer rankings, but clients were not told that those rankings were achieved through blackhat SEO tactics (which mostly no longer work).
It's all so obvious and standardised that I have to imagine it is part of a toolkit or framework marketers are using without much thought.
[1] https://arstechnica.com/information-technology/2012/06/reddi...
There's obviously a massive difference between using sockpuppet accounts to:
* Influence perception on a social media platform as a 3rd party
vs.
* Put content on a social media platform that users are looking for so they return to the platform
It doesn't matter who shares a story with you on social media if the goal is to entertain, but it does matter if the goal is to get you to do something [spend money on their courses].
So you could clearly tell if people liked or didn't like something.
4 lines of code could catch this.
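In that spirit, here's one guess at what those "4 lines" might look like: a toy heuristic that flags accounts whose recent submission history is dominated by a single external domain. The `submissions` input is hypothetical; a real check would first pull an account's history from the Reddit API.

    from collections import Counter
    from urllib.parse import urlparse

    def looks_like_a_shill(submissions: list[str], threshold: float = 0.8) -> bool:
        if len(submissions) < 10:  # too little history to judge
            return False
        domains = Counter(urlparse(url).netloc for url in submissions)
        _, hits = domains.most_common(1)[0]
        return hits / len(submissions) >= threshold

    # e.g. a stolen account spamming one video channel over and over:
    history = ["https://youtube.com/watch?v=%d" % i for i in range(50)]
    print(looks_like_a_shill(history))  # True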
And now Reddit has made it possible to hide your post history.
Probably because of this exact issue.
That's using Reddit's own site; of course there are other methods, like Google dorks.
There's the classic search "hack" of adding site:reddit.com to any product recommendation search, to find "real" recommendations.
Most of the time this is going to find 5-10 posts, each with only a dozen comments and a dozen up-votes. And yet it feels so much more real than whatever is at the top of Google that many people will trust these reviews.
This AI cheating app is currently #8 for "education" in the iOS app store.
Where are they saying that?
Also what is the second "conclusion" screenshot from? (Who is the "Matthew" and what analysis, mentioned in that screenshot?)
YC is full of scams.
There is a line between fake it till you make it and fraud.
I thought that was the dictionary definition of social media? If it isn't yet, it should be; Reddit is just the tip of the iceberg.
I mean, I am shocked that this post didn't get flagged immediately, ofc.
You ever notice how most YC announcements have comments disabled?
Actual YC announcements do not have comments disabled.
We do tend to be more lenient when there's no evidence of organized manipulation, just friends/fans/users trying to be helpful and not realizing that it's actually unhelpful. What dedicated HN users tend not to realize is that such casual commenters usually have no idea how HN is supposed to work.
But this leniency isn't YC-specific. We're actually less lax when it comes to YC startups, for several reasons.
I'm not going to out people here, but maybe it helps you to know that not everyone plays by the rules. Tbf, I also understand that this is just really hard to enforce.
But as noted by freehorse, dang has stated it multiple times, and I personally have not seen any threads memoryholed; I would call out YC if they were.
Healthy skepticism plus the maturing industry of online propaganda and persuasion campaigns is where I would put Occam's razor, a la "minimal assumptions". Every social media site has been manipulated at all levels, moderation notwithstanding; I see no reason to believe HN is immune to this.
It is not just a question of economics for YC to allow and even administer this kind of manipulation, but of second- and third-order goals like consent/consensus manufacturing, reputation-building, shoring up investments by building "viral" interest, etc. These are immediate logical deductions from the patterns of behavior, by humans and the bots that imitate them, that are present everywhere on the internet these days.
But as for repressive behaviour by mods against YC-related criticism specifically, I do not see that. I understand the prior would be that a popular forum run by YC would want to censor in order to protect the interests of the companies they back; I had that prior too. However, this is not Occam's razor, it is a prior. In the time I have been around here I have not noticed this kind of behaviour happening, while I have definitely noticed other kinds of stuff getting repressed, which makes it less likely that such repression would go unnoticed all the time. So I adjusted my understanding accordingly, shifting the prior according to the data. If you find examples to the contrary, I am willing to take them into account.