They also have a podcast - basically the top stories they publish in a conversational format: https://www.404media.co/the-404-media-podcast/
They were having trouble with being scraped. The email sign-in is enough of a barrier to stop most scrapers. https://www.404media.co/why-404-media-needs-your-email-addre...
* Government to Name ‘Key Witness’ Who Provided FBI With Backdoored Encrypted Chat App Anom
* Secret Service Admits It Didn’t Check if People Really Consented to Being Tracked
* Telegram Hands U.S. Authorities Data on Thousands of Users
* DHS Says China, Russia, Iran, and Israel Are Spying on People in US with SS7
* Congress Pushes Apple to Remove Deepfake Apps After 404 Media Investigation
* Hackers Claim Massive Breach of Location Data Giant, Threaten to Leak Data
The reason I paid is that they kept coming up over and over again and I want to support their model: https://www.cjr.org/business_of_news/404-media-and-the-hopes...
Also, traditional media are using them as a source, such as Forbes: https://www.forbes.com/sites/davidphelan/2024/11/12/no-your-...
Sounds like a fairy tale, but it actually happened [2]. I suspect a human overreaction and then automated systems taking over. Anyway, eventually it was escalated really high up and resolved. I can't share any more details unfortunately, even though I'd like to write more about this.
[1] English translation: https://cert.pl/en/posts/2024/12/Ad-fraud-on-large-online-pl...
[2] Our reaction post about this, in Polish: https://cert.pl/posts/2024/11/blokada-na-meta-stanowisko/.
I would suspect something a bit more sinister: the very organizations you are targeting probably started a mass campaign to "report" your content, which triggered automated systems that made your posts disappear. A more cynical me would add that any manual review by Facebook staff would trigger even more aggressive moves to silence you, given that you're directly accusing them of inadequately moderating their platform. At best that doesn't make them look good; at worst it means losing the advertising $$ those groups may be spending with Facebook to target those very victims.
The next time you hear them claiming to be a public square, ask them if they'd like to be regulated like a public utility. I think we all know how they'd answer that one.
Our government also claims to serve the people, while bending over to the first one who pays well.
FB, Google, YouTube, and other ad markets don't give a crap about any of that, and print money by not giving a crap. Most of those newspapers are out of business now, and/or were bought by special interests. This appears to me to be a clear net loss for humanity.
Recommendations take precedence over your network, and then nefarious actors take over the recommendations. That's how these products die. The recommendation engine is unfortunately unable to distinguish regular content from suggestive content, and no amount of reporting/blocking seems to change that. Plus, almost all the reports are reviewed and determined not to have broken any community rules.
The sad thing is that I've reported these posts and they always say it doesn't violate their terms.
I'll just quote an experiment conducted by my colleague, where they tracked the outright malicious (porn, malware or fraud) ads they reported from a regular user account: "from January to November 2024, we tested (...) 122 such malicious ads (...) in 106 cases (86.8%), the reports were closed with the status “We did not remove the ad”, in 10 cases the ad was removed, and in 6 cases, we did not receive any response". That's not very encouraging.
Nothing I reported on FB was ever removed, even obvious spam (e.g. comments in a different language than the rest of the thread, posted as a reply to every top-level comment in the thread). I think this message is most likely just generated automatically. And maybe, if a hundred people report the same thing, someone will review it. Or it will be automatically deleted.
Maybe currently... But you could definitely crowdsource a user-generated thumbs-up/thumbs-down mechanism for whether something is "suggestive" content.
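As a minimal sketch of what such a crowdsourced signal could look like (every name and threshold here is invented for illustration, not anything Meta or HN actually uses), a lower-bound confidence interval on the vote ratio keeps a handful of early votes from swinging the label:

```python
import math

def wilson_lower_bound(up: int, down: int, z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson score interval for the
    up-vote proportion; robust to small sample sizes."""
    n = up + down
    if n == 0:
        return 0.0
    p = up / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - margin) / denom

# Hypothetical labeling rule: mark content "suggestive" only once
# enough voters agree with high confidence.
def is_suggestive(flag_votes: int, ok_votes: int) -> bool:
    return wilson_lower_bound(flag_votes, ok_votes) > 0.5
```

With this rule a single flag is not enough (`is_suggestive(1, 0)` is false, since one vote gives a wide interval), but a clear consensus like 100 flags against 10 "it's fine" votes crosses the threshold.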
Bad for Content Creators: It won't be consistent, so it will largely be harsh to established players in predictable and game-able ways, while new market entrants will experience either statistical randomness (influenced by time and geographic demographics) or a bias against them in favor of established content.
Still, SOME of that would help a little, if only as a second sort of data channel to compare to other effects.
AI moderation will continue to be game-able garbage until true AI, at which point, please just plug me into the matrix and give me the pill.
People struggle so much with Facebook's feed randomness, and now Instagram's too (and Meta arguably engaged in antitrust violations with WhatsApp: they promised not to share data and immediately broke that promise).
Ask HN: What should I do with meet.hn? - https://news.ycombinator.com/item?id=42410582 - Dec 2024 (168 comments)
Show HN: Meet.hn – Meet the Hacker News community in your city - https://news.ycombinator.com/item?id=41539125 - Sept 2024 (372 comments)
I've been thinking for years about building support for offline meetups into HN itself somehow.
Sometimes the sketchy ads are from account hacks. I've seen this happen a couple of times with business accounts. Hackers get access to a personal Meta account that has access to Ad Manager. They run a bunch of scam ads that go live because the ad account's previous reputation lets them slip past the AI moderation.
The scam ads will run for a few days or more before Meta gets around to human moderation triggered by users flagging the ads. This usually leads to the ad account getting banned.
There's a pretty huge ecosystem out there for hacking ad accounts. I would imagine it adds to the overall load on the human moderation process.
Apropos of absolutely nothing, Facebook accounts have proven very easy to hack so that's yet another argument for probably breaking the company up into little pieces
Truly the plebs cannot look after themselves with such a mindset and the yoke of Zuckerberg, HN, reddit and their crud-brethren must be smashed before the altar of progress. Rise up from your leet gamer chairs, those of you whose spines still have some shape to them. I shall retrieve my bagpipes.
If you want Facebook to die, stop using it.
Meta makes 98% of its revenue from advertising, and that segment grew 25% last quarter YoY.
Meta has 3.19 billion DAUs and 3.9 billion MAUs (almost everyone connected to the internet outside China). Their engineers and scientists have invented digital cocaine and they've gotten very little pushback.
They keep on printing more billions every quarter.
I don't use facebook and somehow facebook hasn't died yet. In fact, facebook is happy to create an account/profile for me since I refuse to and they populate it with data about me collected from anyone who mentions me on their platforms, as well as by scraping data posted to the web elsewhere and by buying up my information from data brokers. I said "No" to facebook, and facebook said "You don't get a choice".
There's no reason to think that if people stopped using facebook it would go away. Facebook has already demonstrated that they're willing to populate their site with shadow profiles and fake AI accounts to keep up the appearance of being filled with active users even as their real user count continues to decline. The AI they're using to make bots was (at least in part) trained on things I wrote, or that were written about me, on sites other than facebook. When young people started leaving facebook for instagram, facebook just bought instagram to keep them.
There is simply no way for users to vote with their feet/wallet here. The best we can hope for is that advertisers will notice that AI bots viewing ads won't increase sales, finally get tired of all the other ad/click fraud going on, and stop using the platform. Maybe then facebook will die, but even in that case I wouldn't hold my breath. Facebook will just keep trying to expand into other areas like healthcare, education, and shopping.
None of us have any power. Most people will continue using it.
They bought Instagram and WhatsApp because they couldn't figure anything out.
https://about.fb.com/news/2025/01/meta-more-speech-fewer-mis...
edit: said while writing a massive check to a politician, of course.
Yes, men look at this, yes I clicked on some of it because I like seeing attractive women doing yoga, yes it "drives engagement", but I just deleted the app instead. Life's too short to spend on that crap. I just don't see the use of any of it now. What's worse is that a lot of the content is AI now too, so you're not even looking at "real content" anymore. What's the use of it?
I log in once every few weeks to check my messages from people who can't work out how to use other apps or iMessage.
Our family uses the Apple Photos sharing exclusively and everyone absolutely loves it. It's what Facebook used to be.
https://web.archive.org/web/20250108140722if_/https://www.40...
Anyone who wanted to get out has already done so. Others are stuck or have to use it for various reasons.
That's called "quitting". It hasn't worked yet, and people began quitting in large numbers years ago, so it's not going to work until people actually work at it with commitment, passion, creativity, and confidence. If they don't do that, if they don't believe in it, certainly nobody else will.
And look how you've derailed the actual work. Instead of talking about how to solve the problem, we're talking about whether we should quit like you have - a conversation that accomplishes absolutely nothing for anyone. Who invited you to this meeting?
> if something doesn’t work for a given period of time (like asking people to quit FB products because X, Y, Z reasons), I move on from that idea.
Those minorities didn't succeed on their first attempt; possibly that has never happened. Move on from the idea and try another idea, but do not abandon the goal; lots of first iterations and second ones and nth ones fail. That's how success works, almost every time. There is no success in anything without all those failures.
The quote displays the victim perspective that is guaranteed to end in failure. The idea didn't fail, you did; you failed to make it work, to find the right idea, the right language, the right whatever. The idea and the outcome aren't the weather, something that happens to you. You are the agent, the mover and shaker, or you will certainly fail.
It is what it is, oh well.
In a democracy, if you and I don't do it, nobody will. If you are in your 30s, it's on you. There's no cavalry; there's nobody coming to the rescue; it's your country. If you don't fix the problem, you'll live with the consequences.
The only people who I can imagine flagging this post are Meta PR, or users who are bringing their politics into a non-political post. If anyone has any other reasonable explanations, I would love to hear them.
Looks like the title was edited, original title: Facebook Is Censoring 404 Media Stories About Facebook's Censorship
We can only guess why users flag things. In this case, I don't have a good guess, other than that the article title is baity, so I replaced it with more neutral language from the subtitle, and (partly) rolled back the flags.
The flaggers may be correct, in any case, because this thread is noticeably terrible.
I'm sure you've considered this feature, but since flagging is such a heavyweight (in terms of ranking impact) activity, shouldn't a flagger need to at least articulate (via text or a drop-down menu) why they are flagging it? Even vague choices like "Site guideline violation," "Spam," "Astroturfing," "Political flame bait" might offer some of the missing insight.
If it's overhead for the flagger, they could be presented with a single multi choice question, eg:
What most deserves flagging: a) item title, b) item content, c) HN discussion, d) something else
With different users presented with different questions to build up a picture.
Flags could be recorded even if the question isn't answered, but with you reserving the right to weight ones without answers less.
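As a hypothetical sketch of the scheme described above (the field names, weights, and reason categories are all invented here; this is not how HN actually works), flags with a stated reason could simply carry more ranking weight than bare flags, while the answered questions build up a breakdown of what flaggers objected to:

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

# Invented weights: a flag with a stated reason carries more
# signal than a bare flag, so it counts for more.
REASONED_FLAG_WEIGHT = 1.0
BARE_FLAG_WEIGHT = 0.5

@dataclass
class Flag:
    user: str
    reason: Optional[str] = None  # e.g. "title", "content", "discussion"

def flag_score(flags: list[Flag]) -> float:
    """Total ranking penalty from a set of flags; bare flags
    (no reason given) are still recorded but weighted less."""
    return sum(
        REASONED_FLAG_WEIGHT if f.reason else BARE_FLAG_WEIGHT
        for f in flags
    )

def reason_breakdown(flags: list[Flag]) -> Counter:
    """What the flaggers who answered the question objected to."""
    return Counter(f.reason for f in flags if f.reason)
```

For example, two "title" flags plus one bare flag would score 2.5 and show moderators that the title, not the discussion, is the main complaint.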
And yeah, I thought that this topic would have garnered a better discussion.
This site displays the posted links in chronological order which means stuff that suddenly got booted off the front page still shows up.
Hardly a new take but I think this being flagged so quickly is a concerning problem.
Don't know if you saw my other post about this, but the flagging behavior looked normal to me. I didn't see any patterns in the flagging history of those users, such as flagging posts about FB or some political position.
I believe it, I think it's clear tensions are high on these topics.
I appreciate what you do to keep us all in check so please forgive any offense my teasing caused.
"amoral" might be a better fit there given how they'll happily swing in whichever direction they think the power/money wind is blowing.
Do you want to live in a Putin-esque society?
People who say stuff like this somehow always want to give them more power to censor, not less.
In any common usage I’m aware of.
The stories are about pornographic ads on Facebook, which are obscene, and which are not being removed.
I would totally support journalism targeted at exposing hypocrisy and authoritarian practices of big tech.
It's finally time to liberate society and stop demonising sex and sex-related topics. Healthy views on sex would make society a better place, and sex ads on Facebook would stop being nothing but scams.
I’m not sure if you are referring to the original article, but that’s exactly what it’s about.
If I want porn, I visit Pornhub.
You don't have to be a puritan to have those things separated, especially if scrolling in public.
Map of porn restrictions in the US: https://datawrapper.dwcdn.net/sx8Ji/2/
Pornography is about as good for society as alcohol.
What's more, the societies that talk the most about vices and crack down on them tend to be the ones where actual sexual abuse flourishes behind the scenes (e.g. Victorian England).
I don’t want to see your hoo-ha on Meta while I’m eating my pancakes.
If you don't want it, that's reasonable. If you think we should be less strict, you can make that argument too. It's a personal choice, a line each user has to decide for themselves (which is why centralized and centrally-moderated social media can't really work: we all draw different lines, and few of us agree with where Facebook's line is).
Right now though, nobody wins: advertisers are given one set of rules, and users another, so under the current system, most parties lose. Advertisers could be banned if they draw too much heat. Users bristle under the hypocrisy. Facebook comes out ahead though, with that sweet ad revenue
Facebook only wants to censor stories about Facebook, just like Musk is only interested in censoring stories about Musk. This is the natural state of these companies. They were never actually interested in being moral arbiters for the government, they were just used as proxies because of their self-interest, through the leverage that the people who regulate them and pay them have over them.
Please cite your sources.
From Zuck’s recent letter he says literally the opposite: “Ultimately, it was our decision whether or not to take content down, and we own our decisions”