Shut down the behavior with regulations or shut down the companies. Meta and TikTok have no natural right to exist if they are a net negative to society.
The problem of course is that it's difficult to disentangle the speech of algorithmic moderation from the speech of the content being moderated. And there's the minor issue that the vast majority of things people complain about are just plain First Amendment-protected speech, so it's not like the §230 protections actually matter, as the content isn't illegal in the first place.
I also think it would be extremely unpopular. People like their recommendation engines. They want Netflix to show them more similar shows. They want Reddit to help them find more similar subreddits. I know there are HN users who don't want any of these recommendation engines, but on the whole people actually want them.
People liked cigarettes too.
>They want Netflix to show them more similar shows.
Perhaps that example was a little too revealing on your end. Netflix doesn't have/need Section 230 protections and they're doing fine.
I'm not suggesting these algorithms should be illegal, just that Section 230 protections were defined too broadly because they predated the feasibility of these types of algorithms. These platforms would be free to continue algorithmic promotion, but I believe these algorithms would be less harmful if the platforms had to worry about potential legal liability.
Think YouTube and copyright for comparison. The DMCA is far from perfect, but we have YouTube as an example of a platform that survived and even thrived in the transition from a world that didn't care about copyrighted internet video to one in which they needed to moderate with copyright in mind.
Cigarettes weren’t made illegal. Cigarette companies are not liable for their users’ choice to consume them. What’s your point?
> Perhaps that example was a little too revealing on your end. Netflix doesn't have/need Section 230 protections and they're doing fine.
Perhaps it was a little too revealing on your end that you conveniently ignored my other example of Reddit.
If you need to cherry pick to make your point it doesn’t look very strong.
I still don’t see consistency in your argument that Section 230 should still apply to Hacker News but not, for example, Reddit, simply because one of them allows users to personalize the content they see.
They kind of were. Not completely liable, but partially. Because... um, well, uh, yeah, they are. They are literally liable.
If you produce cigarettes, you are partially responsible for people smoking. Smoking is also not a "choice", come on now. The only people who believe that are people trying to sell you cigarettes or people who have never smoked.
That's why you can't advertise cigarettes anywhere anymore and they're very hard to find. And, when you do find them, the box tells you "hey please don't smoke this". R.J. Reynolds didn't do that by fucking choice, we forced them.
Cigarette companies are not legally liable for the consequences their users encounter.
It’s really hard to have an actual discussion about anything when people are just making up their own definitions.
I don't think people really understand just how harshly we ran tobacco companies into the ground. Many pay more per cigarette in liability than they pay to make the cigarette.
Ok! But they do have to follow a bunch of extra laws that cost them a ton of money and/or users.
Therefore the same can apply to social media algorithm companies.
One extreme example: just like with cigarettes, there could be 18+ age verification for social media. That's a big deal.
Basically all laws related to speech are arbitrary. Can you define a clear and self-evident line between pornography and art, as an example? Or do you agree with the Supreme Court that we just "know it when [we] see it"?
>You're not giving any reasons why the demarcation line for which feed algorithms are OK and which are not is there instead of anywhere else.
Let me just copy and paste what I said before: "The type of HN algorithm that serves the same content to every user based off global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content." I can understand if one of you wants to challenge that line of thought, but both of you acting like I didn't give any reasoning at all is bizarre and gives me the impression that you aren't actually reading what I'm writing.
True. This is a fair point. But the expected counterargument would be that the exact line isn't the issue; instead, it's the justification for the principle.
I.e., why are personalized algorithms more dangerous than general ones?
My answer (because I mostly agree with you) is that the difference is that personalized algorithms almost feel like brain hacking, and this brain hacking simply doesn't work at scale with vague, general algorithms.
I'm a free speech absolutist, so I personally don't find which laws already exist on the matter to be a compelling argument. If it was up to me, I'd get rid of any such laws.
>The type of HN algorithm that serves the same content to every user based off global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content.
The argument hinges entirely on the relative exploitativeness of different feed algorithms, but that metric is merely asserted with no support.
Typically free speech absolutism leads individuals into logical traps they find difficult to dig themselves out of.
But we don't even need that in this case. Private property can have all kinds of restrictions put on it based on the potential dangers and harms it causes. This in fact is one of the most common attacks on speech I see right now (Meta et al.): that they will just put age requirements on sites.
Yes, "free speech absolutists" tend to define these terms in ways to hide the true arbitrary nature of their beliefs. The obvious test case is do they believe in legalizing CSAM. Either they answer "yes" and ostracize themselves from almost all of society or they say "no" and have to come up with arbitrary rules why this specific content doesn't count as speech. Either way, self-applying the label is its own red flag.
I can’t be the only one confused at these calls to have the government destroy things like searching the web, am I?
How is this a real idea being proposed on Hacker News, of all places? Not that long ago it was all about freedom on the Internet and getting angry when the government interfered with our right to speech online, and now there are calls to take drastic measures like making search engines legally untenable to run in the United States?
It’s also confusing that nobody calling for banning things or making the web unusable appears to be making the connection that the internet is global. If we passed laws that forced Google and Bing to shut down because they’re liable for results they index, what do you think the population will do? Shrug their shoulders and give up on the internet? Or go use a search engine from another country?
I can be upset about the government trying to make the world worse, and about other huge balls of power who have been making the world shitty in an ongoing fashion. Freedom of speech doesn't mean shit if a handful of people can buy up or otherwise absorb control of 90% of media and choose who gets heard. The call for regulation is an acknowledgment that the market fucked this one up. When the government threatens speech, I'll call for civil disobedience and proactive protections. When oligarchs threaten speech I'll call for regulation and punishment.
> It’s also confusing that nobody calling for banning things or making the web unusable appears to be making the connection that the internet is global. If we passed laws that forced Google and Bing to shut down because they’re liable for results they index, what do you think the population will do?
You assume that the only way to get a good, free search engine is to give control of it to some private entity. That if we don't do it in the US, people will turn someplace else. I think you may be lacking in imagination. At a minimum, the possibility exists for nonprofit organizations to run quality search engines, but it's also possible to decouple the indexing business from the ranking business. Google could run an index and charge for access, and ranking providers could build on top of that and recoup costs with non-tracking ads, donations, sales, whatever business model they please. Just because an unregulated market doesn't come up with a good solution doesn't mean a market under different constraints won't find a better way. And if nothing works out, you always have the option of grants or a public digital infrastructure approach. There are so many things to try beyond shrugging and declaring that the market has ordained five dudes arbiters of the internet as experienced by most people.
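To make that concrete, here's a minimal Python sketch of the decoupling I have in mind. Every name here is hypothetical (this isn't any real API); the point is just that the index operator sells unranked retrieval and independent rankers compete on ordering:

    from dataclasses import dataclass

    @dataclass
    class Document:
        url: str
        text: str

    class IndexProvider:
        # Hypothetical index operator: charges for access, does no ranking.
        def __init__(self, corpus: list[Document]):
            self.corpus = corpus

        def candidates(self, query: str, limit: int = 1000) -> list[Document]:
            # Paid, unranked retrieval: every document matching the query.
            q = query.lower()
            return [d for d in self.corpus if q in d.text.lower()][:limit]

    class RankingProvider:
        # Builds on a paid index; monetizes however it likes: ads, donations, sales.
        def __init__(self, index: IndexProvider):
            self.index = index

        def search(self, query: str) -> list[Document]:
            docs = self.index.candidates(query)
            # Each ranker applies its own policy; this one is a trivial placeholder.
            return sorted(docs, key=lambda d: d.text.count(query), reverse=True)[:10]

Swap in a different RankingProvider and you get a different internet, all running on the same index.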
if you find this distressing then i imagine you find it equally distressing when a couple of corporations destroy something.
the reason the word “enshittification” has become so ubiquitous is because corporations are actively destroying the internet and desperately trying to convince us the internet is separate from “the real world”.
sometimes stopping a person from burning the house down is necessary. no matter how loudly they cry about their freedom to have a bonfire in the living room.
I think a lot of this discussion has become detached from reality and we’re just entertaining some people’s impossible fantasies about shutting down the internet and returning to the past.
For example, some teen got radicalized with racist and sexist content. That's bad in my opinion, as I'm not a racist or a sexist. But should racist or sexist speech be censored or regulated? On what grounds? How do we know other unpopular (now or in the future) speech won't be censored or regulated in the future? Again, as much as I'm not a racist or sexist, I don't think the government should have a say in whether a company should be able to promote speech like "whites/blacks are X" or "men/women are Y". What's next? Should we regulate speech about religion (Christians/Muslims/atheists are Z) or ethics (anti-war people or vegans are Q) or politics or drugs or sex?
The current situation is shitty, but giving too much power to regulators will likely make it way shittier. If not now, in the future, since passed regulations are rarely removed.
Even just mandating interoperability would likely do since that would open up the floor to competitors. Many users are well aware of the issues but don't feel they have a viable alternative that satisfies their goals.
To clarify, I use "rage bait" as an example phrase, but it includes algos that only promote engagement at any cost and other things that aren't outright dangerous, but we think are dangerous. Not, like I said, CSAM or yelling FIRE or telling people to kill themselves.
Agreed, you can't regulate speech in a targeted manner while also not doing so. You're forced to find some common aspect much more general than "rage bait". Perhaps prohibiting the targeting of certain metrics? Or even prohibiting their collection in the first place.
Can you elaborate, give some ideas, examples, etc.? What metrics? How can you define them in a consistent, safe way?
Estimated user age is an example of a metric largely unrelated to concerns regarding free speech. I doubt it has much relevance to the problem we're talking about here, but hopefully you can imagine that prohibiting the targeting of ads or the curation of an algorithmic feed based on that metric would not be expected to unduly disadvantage any particular sort of speech.
In a non-adversarial political context where we trust the government and the future ones, sure, but I think without any strong guardrails, we could enact a law that's good today, but will be exploited in the future.
For targeting minors with any kind of political speech - I'd love it if it wasn't legal. But that brings its own can of worms. There's enough discussion on HN on the implications of age verification, whether on how it's done technically (privacy-preserving or not (ZKP or just shady 3rd parties); FOSS or not; on the ISP, OS or app level, etc.) and whether the mere precedent could trigger additional issues down the road.
Anyway, I'd love a society where everything is perfect, but I'm afraid of what might actually happen. With a benevolent god as a permanent ruler, I'd be happy with a 100% prosecution rate against all kinds of littering, hate speech and whatnot, but in reality random crimes are easier to evade than a law passed down by a malevolent government, so I'm strongly against any kind of overreach. (Because the law tomorrow could be one we must evade if we want to resist an unethical government.) Someone will likely chime in with "but complete and massive overreach has never happened so far", to which I'd reply: we're close to the point where technology will let the ones in power grab that power absolutely and forever if we let them grab too much in the beginning.
I’ve reported videos that look like sexual exploitation, videos that call for violence and videos that promote hate (xyz people are cockroaches) and all I’ve gotten is that “it does not go against community guidelines” with a link to block the person who created them. So any concerns of “where do we draw the line” are in my opinion pointless because the bare minimum isn’t even being done.
What if I didn't say anything bad about a race or a sex, but said:
> I have coworkers that pester me with their small talk about the weather every time I see them. I hate those fucking cockroaches.
That's in bad taste, sure, but should it be regulated? You may know I obviously don't hate-hate them (they're just annoying, but most of them are good people) or actually consider them cockroach-like in any meaningful aspect (they're obviously people, but with annoying tendencies). But would a regulator know the difference? What about a malicious regulator who gets paid by (ok, this is a silly example, but bear with me) the weather-talking coworker lobby to censor me? In many not-so-silly examples a regulator could silence anyone for anything (politics, sex, drugs, ethics), as long as it uses a bad word or says anything negative about anyone. I don't want to live in such a society. That much power would be abused sooner or later.
Also, I don't think people are advocating censorship; they are advocating not promoting assholes. You can have your little blog and be racist on it all you want, but let's not give these people the equivalent of nukes for communication.
I'm fine with doing something, but the current "something" seems slippery.
> Banning racist and sexist content is not a slippery slope. It's just banning racist and sexist content, slope is only slippery because the salivating mouths of these social platforms grease them.
But what is "racist", exactly? See why I think it's a slippery slope and why it's ill-defined:
1. We could agree that "Let's go out and kill/enslave all the $race/$gender" is racist, but that's bad no matter which group we swap in for $race, as it's speech that incites violence.
2. What about "$race is genetically inferior in a way (less intelligent, less athletic, more prone to $bad_behavior)"? I honestly think most differences between races/ethnicities are due to environmental factors, but what if there actually are differences in intelligence or anything like that? Should we ban speech that discusses that? Black people win running races and are great at basketball. They're prone to certain diseases, as are Caucasians or Asians. So would you ban discussing that? Or would you ban blindly asserting that $race is $Y without some sort of proof?
3. What about statements like "There are way more male bus drivers because X"? Or "men are better at Y, but women are better at Z"?
What do you think the definition of racism and sexism in this context should be? I think the line is where we incite violence towards a group, but not about discussing differences that may or may not be true.
> Also, I don't think people are advocating censorship; they are advocating not promoting assholes. You can have your little blog and be racist on it all you want, but let's not give these people the equivalent of nukes for communication.
I think restricting a platform (or anyone or anything) from promoting someone IS censorship. If it's not censored, why shouldn't I be able to promote it? This honestly feels disingenuous - like "we pretend that the racist isn't censored and can have his little blog, but it's illegal to promote his little blog".
That seems more reasonable than the alternative, which is to make modifications to a complex system without being sure what the outcome will be. You're more likely to cause bigger problems.
Exactly. And when we are done with them we will shut down Molson and Anheuser-Busch. Then we can go after the people who make selfie sticks. Then the company that owns that truck that cut me off last week. Basically, organizations I dislike should not be allowed to exist.
Which should make people think twice when they call for government regulation on speech as a solution to content they don't want other people to see.
The more you give the government power to control speech, the more they'll use those laws to further their own interests.
I am not trying to be funny or anything, but this sounds like "if only the fat kid realized that eating 10 apple pies before bedtime might be the reason s/he is fat". We already know what social media platforms are doing, not just to young people but to all people.
> If the government regulates these platforms
This is like saying "congressmen care about our debt so they will vote to reduce their own salaries by 90%" - the government is not going to regulate tools they are using to control the narrative/masses, etc.
What caused Gen Z to drink less than millennials? Maybe Gen Z has the answer.
They never stood a chance.
I'm for legalizing all drugs, regulating the sale, ensuring quality and purity, and educating the public. Cognitive liberty is sacred - but the dip in drinking has a whole lot of causes.
A healthier society would be more social and get out and drink more, I think.
But I find Zoomers to be rather tame in terms of drinking, smoking, drugs, unsafe sex, etc... Few of the traditional vices, really.
Oh, and weed being increasingly legal to consume.
All the dive bars I drank at in college, where you could black out for $10-20, are gone. They all faced the wrecking ball and were replaced in the past 10-15 years with apartments over Targets and CVSes and family-friendly restaurants. A huge concerted effort to buy up these properties piecemeal, then destroy entire blocks at a time. I have no clue where kids at my college go to drink now. I have little interest in going back as an alumnus either, as they destroyed all the places of my memories.
Also, it seems like the science on whether science communication actually changes behavior doesn't point towards it being much of a cause here.
But like, when a pint is $12 and mixed drinks are $15+ sobriety starts looking more appealing.
Source: Am gen Z.
The latter category know who you are. You downvoted this comment.
Laws are very much fashionable, but only for us. “Rules for thee but not for me” is what's in season right now.
It turns out that if you present as an honest, non-interested party, people will call you and ask you for your advice. I do admit that the ease of this is going to be a function of the people you are up against and the subject being regulated. My point of this comment is: default to action. “You can just do things.”
nothing. if it isn't illegal, it isn't illegal.
previous generations of neurotics objected to many current (at the time) things we don't bat an eye about. when was the last time you saw anyone campaign against satanic music, violent video games, or hardcore pornography?
How about coming up with an actual defense of social media rather than an ad hominem about "neurotics"?
people who raised alarm about such things could easily be branded as conspiracy theorists. even now, at this very website, so full of well-educated folx, people who speak out against xenoestrogens, for example, are being downvoted to hell.
You don't think large portions of an entire generation getting cooked by social media have negative externalities that impact society as a whole?
Should unhealthy food be banned because of the second-order effects of obesity? What about mandatory church / religious service? After all, I judge that atheism has negative second-order effects on the world. Where would I get this moral authority from?
The OG hackers thought of censorship as network damage that needed to be routed around.
People who support censorship always think of themselves as smarter than the rest. Dunning-Kruger, however, would suggest something different.
> nothing. if it isn't illegal, it isn't illegal.
Are you suggesting that because something isn't illegal, it shouldn't be illegal?
Are you perhaps a representative of the Triangle Shirtwaist Factory?
Many things were not illegal before they became illegal.
As long as there are people who don’t acknowledge or care about the health effects it will exist. If that’s a plurality of your population then you have a fundamental population problem IF you are in the group who thinks it’s bad.
Aka every minority-majority split on every issue ever.
So the answer is: live in a society governed by science. Unfortunately, none exists.
Science is a lagging indicator of reality. It is by definition conservative (in that it requires rigorous, repeatable data before it can label something as true). Because of that, there's usually a pretty substantial gap between human discovery and scientific consensus.
Mindfulness was discovered, as an example, to be beneficial as far back as 500 BCE. It wasn't "proven" with science until 1979.
Sometimes we just need to rely on lived experience to make important decisions, especially regulation. We can't always wait for science.
Tell me what the leading indicator of reality is then
And PM's earnings are mostly from developing countries at this point. In the US alone, the adult smoking rate has fallen nearly 73% from 1965 to now, so clearly the regulations are working.
We need to do the same for social media. People didn't quit smoking because they suddenly got more disciplined; we just made it inconvenient. The biggest start would be to get rid of algorithmic feeds and "recommendations": keep it purely chronological, only from people you explicitly follow.
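In code the difference is tiny. A minimal sketch (the Post record and its fields are made up, just to put "chronological, follow-only" next to the engagement-ranked status quo):

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        timestamp: float   # seconds since epoch
        engagement: float  # likes/shares/watch time, whatever the platform optimizes

    def recommended_feed(posts: list[Post]) -> list[Post]:
        # The status quo: rank everything, followed or not, by engagement.
        return sorted(posts, key=lambda p: p.engagement, reverse=True)

    def chronological_feed(posts: list[Post], following: set[str]) -> list[Post]:
        # The proposal: only people you explicitly follow, newest first,
        # with no engagement signal anywhere in the ranking.
        return sorted(
            (p for p in posts if p.author in following),
            key=lambda p: p.timestamp,
            reverse=True,
        )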
More importantly, why do you think society should make smoking inconvenient - more costly, more illegal or anything like that? If I'm not blowing smoke in your face, why interfere with my desire to smoke? If it's about medical bills, just let me sign a waiver that I won't get cancer treatments or whatever, and let me buy a pack of smokes for what it should cost - a few cents per pack, not a few dollars/euro.
So yeah, this comment really reminded me not just to light up whenever and "try my best" to walk a few meters away, but to really think about whether I'd inconvenience people.
On the other hand, if I'm alone on a street and you're walking towards me, we just pass each other for a second; I can't imagine the smell would be that bad from a casual walk-by. When I'm passing people, I hold in my smoke till I pass them.
Even if I agree that smoking outdoors is inconsiderate and annoying to others, I could still do it at home or in dedicated areas (smoking sections in bars with good ventilation, for example).
> I don't see why it also has to be cheap?
If we agree on the previous points, then why not let it be cheap? Tobacco is cheap to produce. Most of the price of cigarettes is artificial, to cover medical costs and whatnot. Let's say I sign a waiver that if I get sick, I either pay through the nose or don't receive treatment at all. Would you be OK with letting me buy tobacco at its original cost (no subsidies, no artificial fees)?
Or, as a thought experiment - let's say tobacco didn't have any smell and there were 0 negative effects of second-hand smoke. Like, you wouldn't know it if I smoked near you unless you saw me. Then what would be the justification in making smoking artificially expensive for me?
If the idea is to make everyone be healthy, live as long as possible and be productive for as long as possible, why not ban dangerous sports, too? I'm "the government" for my dog and I don't let him do anything dangerous or stupid, but he's a dog and we're people. With the supposed free will and agency we all like.
Because the government ends up paying for the medical treatment of a lot of smokers when they're older. And it's incredibly expensive. You can say you won't rely on government funds, but there's no way to actually opt out of Medicare for life or sign up to never be guaranteed stabilization when you show up at a hospital.
Nicotine is also notoriously addictive, which weakens the "my choice" argument.
>Why tax sugary drinks
That's totally a nanny state thing. Personally, I would mildly support it. But it's not a hill I'd die on.
>or ban or criminalize drugs other than the caffeine, nicotine and alcohol?
Hard drugs cause blight. People don't mind so much if they see a soda can on their street, but if they see a used needle they'll move. And again, any society with a safety net has an interest in preventing common causes of people falling into it.
>why not ban dangerous sports, too?
It hasn't proven to be a big problem at the population level. Hell, public health experts would love to have that problem, because it'd mean more people were exercising.
That's why I'd get a tattoo on my chest, if necessary, saying "Smoker!". I know that most of the price of tobacco is insurance for medical treatments. Not Medicare, as I'm not in the US, but similar. I am OK with tattooing "DO NOT STABILIZE OR CARE FOR AT ALL - SMOKER !!!1".
> Nicotine is also notoriously addictive, which weakens the "my choice" argument.
I am an adult human who participates in society and has chosen to smoke. Please treat me as an adult who has made a (bad) decision and is willing to suffer the consequences.
> sugary drinks... nanny state
Same with any drug.
> hard drugs...
People who abuse hard drugs to the point where we need to save them or others from them are most often uneducated or poor (and living in poor neighborhoods, with all that that brings). Believe it or not, I know several people with PhDs in things like physics and biology who regularly take "hard" and/or "soft" drugs besides alcohol and nicotine. Only one needed intervention after ~10 years, and it was because of pre-existing psychological issues that led him to abuse the drugs. I and lots of people I know who lead normal lives can list more 3- or 4-letter abbreviations of stuff we've tried than an HN comment will fit. Or maybe I'm exaggerating a bit, not sure, but you get the point.
If you look at a poor neighborhood, you'll see a lot more people with drug problems. Not because richer people don't do drugs, but because for them it's not an escape plan, it's not some random impure thing you get, and it's done within a safe place. It's a social issue, not a drug issue. Work on solving poverty and education, not on making us drug users feel like criminals for trying new stuff or on making our drugs more expensive. Whether it's legal like alcohol or nicotine, or illegal like a psychedelic, a benzo, weed, an opioid, a dissociative or anything else, it's a drug. I am an adult. Let me experience my adulthood like I want to. You don't take drugs and that's fine, but please understand that you have no fucking idea what you're missing if you're doing it correctly. Literally anything you've likely experienced, like romantic relationships, climbing mountains, orgasms and so on, is categorically and qualitatively different from the amazing things you can experience on various drugs.
We need to culturally consider Social Media use to be disgusting or at least something to be ashamed of.
I wish this were true, but I know tens of people that quit smoking and (besides myself) know 1/2 of another person that quit social media. Drunk at NYE two years ago, I offered $10k to a group of 25 people to delete all social media apps from their phones for 60 days - I still have that $10k in my account. I think quitting social media is about as hard as getting off a hard drug addiction (like a hard, hard, hard one - opioids, heroin, etc...) and maybe even tougher than that, for most people.
> People didn't quit smoking because they suddenly got more disciplined. We just made it inconvenient.
I want to believe this! I just haven't personally experienced this at all (I am in my 6th decade on Earth, so plenty of time around). I don't know a single person that stopped smoking because they couldn't burn one inside restaurants/clubs/... or because it costs $18/pack or any of that. An 18-year-old person has very little "regulation" when it comes to smoking. Little inconveniences like moving 25 feet away from the building aren't much of a deterrent IMO.
I am subjective on the matter of social media, I know that. But I am educated in its evils and would, for instance, never let my kid be on any social media as long as she is under my roof. This has already caused significant challenges for her (and my wife and me), but it is also an amazing learning experience in overcoming silly social obstacles...
Everyone knows what the dangers of alcohol are now. We need to get reliable data one can base policy on and then let the public health system do their thing. Maybe not every health authority but enough of them to protect the species at large. Then we'll get social media out of schools, away from young people, vulnerable folks, etc.
Think about it this way: imagine if you took a million random posts or videos. You would find a wide range of political views, conspiracy theories and so on. Whatever your position on any of those issues, you could find content pushing those views.
So if your algorithm selects and distributes content that fits your desired views and suppresses content that opposes your views, how are you different from a random publisher who posts content with those exact same views?
This is kind of like the "secret third thing" of Section 230 where you get all the protections of being a platform and all the flexibility of being a publisher and we need to close that loophole. Let platforms choose which one they are.
Another example: if I create a blog and write a post that accuses my local mayor of being a drug addict and a pedophile, I can be sued for defamation. I can try the journalism defense, but it won't shield me from defamation. Traditional media outlets are normally very careful about what they publish for this reason.
But what if I run Facebook or Twitter and one of my users says the exact same thing? Well I'm just a platform. I have a libel shield. But again, my algorithm can promote or suppress that claim. Even if I have processes to moderate that content, either by responding to a court order to take it down and/or allowing users to flag it and then take it down myself with human or AI moderation, the damage can't really be rolled back.
We've let tech companies get away with "the algorithm" being some kind of mysterious and neutral black box that just does stuff and we have no idea what. It's complete bullshit. Every behavior of such an algorithm reflects a choice made by people, period. And we need to start treating this as publishing.
I know https://www.reset.tech/ does really good work in this space, but are there others, and who is funding them?
Most of them are clickbait anyway.
Why? User engagement isn't the same thing as market share.
If McDonald's trained its cashiers to insult you while taking your order, engagement would go up, and market share would go down.
Some poor schlub ML Eng has shipped a feature that wins an A/B test. They’re pushing to get promoted. Their management wants to show they’re hitting their KPIs.
An engine of destruction filled with well meaning people just hoping to advance in their careers.
You might say, it’s ultimately the designers of the incentives that matter. Even there, the leadership will change. Inevitably the needs of the capitalist machine take over.
Is it really whistleblowing when everyone already knows it?
I have my Instagram and X on a locked-down browser in a container with a fake profile. An LLM drives it, finds the posts from specific users, and compiles a gist of all the important things in my locality (or whatever you care about) every evening, without me ever going near that FOMO-driven dumpster fire of TikTok/Insta/X.
Best LLM ROI I've made.
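Rough shape of the setup, for anyone who wants to replicate it. fetch_posts() is a stand-in for whatever browser automation you run inside the container (mine is cobbled together, there's no one right way), and the OpenAI client is just one possible LLM backend:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def fetch_posts(accounts: list[str]) -> list[str]:
        # Stand-in for the containerized fake-profile browser automation;
        # should return raw post text from the listed accounts.
        raise NotImplementedError("depends on your own scraping setup")

    def daily_digest(accounts: list[str]) -> str:
        posts = fetch_posts(accounts)
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any summarization-capable model works
            messages=[
                {"role": "system",
                 "content": "Summarize these posts into a short digest of locally "
                            "relevant news. Drop rage bait, ads, and engagement bait."},
                {"role": "user", "content": "\n---\n".join(posts)},
            ],
        )
        return response.choices[0].message.content

    # Run from cron every evening instead of opening the apps:
    # print(daily_digest(["@local_news", "@city_transit"]))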
1. "Surveillance"
2. "Advertising"
3. "Scams"
4. "AI slop"
5. "Manipulated experience"
6. "Child harms"
7. "Misinformation campaigns"
8. "Disinformation campaigns"
9. "Doom scroll regret"
10. "Zuckavatarphilia"
But I don't claim to have the "right" opinion and am curious how other people respond to the brands. If each of you could reply and re-list those associations in the order you experience them, I will collate the results and post them everywhere I can think of. It would go a long way toward satisfying my curiosity, and the curiosity of reporters who like to repeat things they read on the internet.
Not saying “well duh”; I just think at this point I have to ask, “are we going to do anything about it?”
We’ve known about the financial incentives to promote anger and outrage online for at least a decade now. So what are we going to do about it?
* "eat tide pods" * "stick a fork in electrical sockets in your school" * "destroy your school's shit" aka "Devious Licks" - bathrooms, chromebooks (jamming stuff into the charging ports to start fires...) * "drink a shitload of Benadryl to see what happens" * "steal a kia/hyundai and drive 80mph, run from the cops, etc"
...convince me that this is not a purposeful attack on US society by the CCP?
Did we forget Gresham's Law applies to content and has done so since humans could communicate?
Bad or wrong ideas are the ones that get talked about. Do we discuss the 10 issues politicians get correct, or the 1 they screw up?
Platform is irrelevant here; the exact same phenomenon occurred on radio and TV decades before it did on social media platforms, and in newspapers centuries prior.
You have finally identified the problem. It all started with Homo habilis and misinformation has been rampant ever since. But even protozoan parasites mimic host proteins and block signals, so you really have to go a lot further back to deal with fake news.