The concept is called "yellow journalism" and dates back at least to the days of Joseph Pulitzer and William Randolph Hearst. Modern culture has poured gasoline on it, but it has existed forever.
I think the issue is that we have scaled groupthink: people now engage in circular conversations that reinforce nonsensical beliefs. Whereas historically they might have encountered one or two people who agreed with crazy or inaccurate notions, and most of their environment would likely push back on outrageous ideas.
Now you can find thousands of people who not only agree, but reinforce those biases with additional "facts" and perceptions.
This isn’t inherent to social networks though. It is a choice by the biggest social media companies to make society worse in order to increase profits. Just give us a chronological feed of the people/topics we proactively choose to follow and much of this harm would go away. Social media and the world were better places before algorithmic feeds took over everything.
Going beyond social media, it's IMHO the side effect of an initially innocent-looking but dangerous and toxic monetization model, one we find today not just in social media but even more so in news, apps, and most digital markets.
Further to that, there needs to be accountability. Right now, in the UK at least, governments are not held to account at all. They get into office with grand promises of flying elephants and golden-egg-laying geese but obviously never follow through with said promises. The populace ultimately just shrugs it off with "politicians lie" and continues complaining about it within their social circles.
Our political systems are fundamentally broken. We shouldn't care whether policies come from party A or party B. All that should matter is the content of the policy and whether it ever actually materialises.
Right now we have a situation where people are manipulated left, right and centre into believing a given party's absolute BS manifesto, written in the full knowledge that not delivering will have very little impact on them: they've just spent a substantial amount of time being paid lucrative salaries to essentially argue with a bunch of other liars in a shouting match on telly.
Remove the football-esque fandom around political parties by removing any ability to publicly affiliate a given person with a party, and I'd bet we see different results across the board. Remove all this absolute nonsense of politicians promoting their ideologies on TV/Twitter etc. and you will remove a lot of the brainwashing that happens. Remove the most corrupt situation of all, private firms and individuals being able to fund political parties, and you level the playing field.
Obviously this is a hard pill for many to swallow as no one likes to be told they’ve essentially been brainwashed into their thoughts and ego is everything in modern society.
This is fine if you can refuse the deal. Lots of software and the companies selling it have died that way. But if you've made a product addictive or necessary for everyday survival, you have the customer by the short hairs.
The technology underlying Bluesky is deliberately designed so that it's hard to keep a customer captive. It will be interesting to see if that helps.
Like, if you look at the original reasoning for why capitalism is a good match for democracy, you find arguments like voting with money etc. _alongside what must not be tolerated in capitalism_ or it will break. And that includes things like:
- monopolies (or, more generally, anything having too much market power and abusing it; it doesn't need to be an actual monopoly)
- unfair market practices which break fair competition
- situations which prevent actual user choice
- too much separation between the wealth of the poorest and the richest in a country
- giving money too many ways to influence politics
- using money to bar people from a fair trial/from enforcing their rights
- also, I personally would add opacity, but I think that only really started to become a systemic issue with globalization and the digital age.
This also implies that for markets which have natural monopolies, strict regulation and consumer protection are essential.
Now the points above are to some degree a checklist of what has defined US economics, especially in the post-Amazon age. (I say post-Amazon age because the founding story of Amazon was a milestone: basically the idea of "let's systematically destroy any fair competition and use externally sourced money (i.e. subsidization) to forcefully create a quasi-monopoly". After that succeeded, it became somewhat of the go-to approach for a lot of "speculative investment" funding.)
Anyway, to come back to the original point:
What we have in the US has little to do with the idea of capitalism which led to its adoption in the West.
It's more like someone took it and is twisting it into the most disturbing dystopian form possible; they just aren't fully done yet.
I think what we're learning is that mass (social) media means this simply isn't preventable in a world with free speech. Even if the US had stricter campaign finance laws in line with other western democracies, there would still need to be some mechanism so that one rich guy (or even a collection of colluding rich guys) can't buy a huge megaphone like Twitter or CBS.
As long as there is no upper limit on wealth accumulation, there is no upper limit on political influence in a capitalistic democracy with free speech. Every other flaw you list is effectively downstream of that because the government is already susceptible to being compromised by wealth.
>> Social media itself is a grand experiment. What happens if you start connecting people from disparate communities, and then prioritize for outrage and emotionalism?
> It is a choice by the biggest social media companies to make society worse in order to increase profits.
I think there's a more pointed way to frame this ongoing phenomenon: the US invested in social media assuming it would be the mainstay of its cultural dominance into the 21st century, and it wasn't; it turned out to be more of a giant oil pipeline with a check valve, leaving the US completely exposed to East Asian influence, and it's now scrambling at damage control. The US as it stands has no cultural-industrial base for producing social media content. East Asian content, if not East Asian indigenous social media itself, easily wins the Internet by leveraging universally strong public education, without even trying. That's what happened, and that must be the intent behind the shift to rage-politics shows, which the US/EU can at least produce, even if it isn't useful.
Viral culture requires a certain amount of freedom of expression, along with access to media.
Sometimes I feel like I'm the only one who remembers how toxic places like Usenet, IRC, and internet forums were before Facebook. Either that, or people only remember the past of the internet through rose-colored glasses.
Complain about algorithmic feeds all you want, but internet toxicity was rampant long before modern social media platforms came along. Some of the crazy conspiracy theories and hate-filled vitriol that filled Usenet groups back in the day make the modern Facebook news feed seem tame by comparison.
In particular, I feel it's much harder to disengage from Facebook than from other forms of social media. Most of my friends and acquaintances are on Facebook. I have thought about leaving Facebook due to the toxic recommendations from its feed, but it would be much harder for me to keep up with life events from my friends and acquaintances, and it would also be harder for me to share my own life events.
With that said, the degradation of Facebook’s feed has encouraged me to think of a long-term solution: replacing Facebook with newsletters sent occasionally with life updates. I could use Flickr for sharing photos. If my friends like my newsletters, I could try to convince them to set up similar newsletters, especially if I made software that made setting up such newsletters easy.
No ads, no algorithmic feeds, just HTML-based email.
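As a sketch of how little software that would actually take, here's roughly what sending such a newsletter looks like with Python's standard library (all addresses and the SMTP host below are placeholders, not a real setup):

```python
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def send_newsletter(sender, recipients, subject, html_body, smtp_host):
    """Send a plain HTML-email newsletter: no ads, no feed, no tracking."""
    msg = MIMEMultipart("alternative")
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg["Subject"] = subject
    # Plain-text fallback first, then the HTML part (clients pick the best one).
    msg.attach(MIMEText("This newsletter is best viewed as HTML.", "plain"))
    msg.attach(MIMEText(html_body, "html"))
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

# Hypothetical usage:
# send_newsletter("me@example.com", ["friend@example.com"],
#                 "Life updates, spring edition",
#                 "<h1>Hello!</h1><p>News from my corner of the world.</p>",
#                 "smtp.example.com")
```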
I think the main thing algorithmic feeds did was present the toxicity as the norm, as opposed to it being a choice you make. Like, I used to be part of a forum back in the early 2000s. Every few weeks the most-replied thread would be some rage bait or sensational thread. Those threads would keep getting pushed to the top and remain at the top of the forum for a while, growing very quickly as a ton of people kept replying. But you could easily see that everyone else was carrying on with their day. You ignore it and move on. You sort by newest or filter it out and you're good. It was clear that this was one particularly heated thread, and you could avoid it. Also, mods would often move it to a controversial subforum (or lock it altogether if they were heavy-handed). So you sort of had to go out of your way to get there, and then you would know that you were actively walking into a "controversial section" or "conspiracy" forum etc. It wasn't viewed as normal. You were a crazy person if you kept linking to and talking about that crazy place.
With algorithmic feeds, it's the norm. You're not seeking out shady corners of the internet or subscribing to a crazy Usenet newsgroup to feed your own interest in rage or follow a conspiracy. You are just going to the Facebook or Twitter or Reddit or YouTube homepage, literally the homepages of the biggest, most mainstream companies in the US. Just like everyone else.
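To make the contrast concrete, here's a minimal sketch (a hypothetical data model, not any real platform's code) of the difference between the old sort-by-newest forum view and an engagement-ranked feed:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    timestamp: float  # seconds since epoch
    replies: int      # stand-in for "engagement"

def chronological(posts):
    """Old forum default: newest first, so rage bait ages out like everything else."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked(posts):
    """Algorithmic feed: whatever provokes the most replies stays on top,
    so the most heated thread becomes everyone's front page."""
    return sorted(posts, key=lambda p: p.replies, reverse=True)
```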
First, it automatically funnels people into information silos which are increasingly deep and narrow. On the old internet, one could silo themselves only to a limited extent; it would still be necessary to regularly interact with more mainstream people and ideas. Now, the algorithm “helpfully” filters out anything it decides a person would not be interested in—like information which might challenge their world view in any meaningful way. In the past, it was necessary to engage with at least some outside influences, which helped to mediate people’s most extreme beliefs. Today, the algorithm successfully proxies those interactions through alternative sources which do the work of repackaging them in a way that is guaranteed to reinforce, rather than challenge, a person’s unrealistic world view.
Many of these information silos are also built at least in part from disinformation, and many people caught in them would have never been exposed to that disinformation in the absence of the algorithm promoting it to them. In the days of Usenet, a person would have to get a recommendation from another human participant, or they would have to actively seek something out, to be exposed to it. Those natural guardrails are gone. Now, an algorithm programmed to maximise engagement is in charge of deciding what people see every day, and it’s different for every person.
Second, the algorithm pushes content without appropriate shared cultural context into the faces of many people, who then misunderstand it. We each exist in separate social contexts with in-jokes, shorthands for communication, etc., but the algorithm doesn't care about any of that; it only cares about engagement. So you end up with today's "internet winner" who made some dumb joke that only their friend group would really understand, and it blows up because to an outsider it looks awful. The algorithm amplifies this to the feeds of more people who don't have the appropriate context, using the engagement metric to prioritise it over other, more salient content. Now half the world is expressing outrage over a misunderstanding, one which would probably never have happened if not for the algorithm boosting the message.
Because there is no Planet B, it is impossible to say whether things would be where they are today if everything were the same except without the algorithmic feed. (And, of course, nothing happens in a vacuum; if our society were already working well for most people, there would not be so much toxicity for the algorithm to find and exploit.) Perhaps the current state of the world was an inevitability once every unhinged person could find 10,000 of their closest friends who also believe that pi is exactly 3, and the algorithm only accelerated this process. But the available body of research leads me to conclude, like the OP, that the algorithm is uniquely bad. I would go so far as to suggest it may be a Great Filter level threat due to the way it enables widespread reality-splitting in a geographically dispersed way. (And if not the recommendation algorithm on its own, certainly the one that is combined with an LLM.)
Well, it's mostly that freely accessible channels need their content to be within a certain ~PG/age-protection range (and in many countries that also changes depending on the time of day; not sure about the US).
Beyond that, the constitution disallows any further regulation of actual content.
Though that doesn't mean the government can't apply subtle pressure indirectly.
Is that legal? No.
Has it been done for years anyway? Yes.
But mostly subtly, not by force, i.e. giving "suggestions" rather than required changes.
Except in recent years it has become a lot less subtle and much more forced: not just giving non-binding "suggestions" but also harassing media outlets in other, seemingly unrelated ways if they don't follow your "suggestions".
PS: Seriously, it often looks like the US doesn't really understand what free speech is about (some of the more important points being freedom of journalism, freedom of teaching, and expressing your opinions through demonstrations and the like), or why many historians find its approach good but suboptimal, and why, e.g., the approach to free speech was revisited when drafting the West German constitution instead of more or less copying the US constitution. (The US, but also France and the UK, had some say in drafting it; it was originally meant to be temporary until reunification, but in the end it was kept mostly verbatim during reunification because it had worked out quite well.)
In the US there is free speech protecting the ability of people to say what they want.
Public TV has limitations on broadcast of certain material like pornography, obviously, but the government can’t come in and “control” the opinions of journalists and newscasters.
The current US admin has tried to put pressure on broadcasters it disagrees with and it’s definitely not a good thing.
You really do not want to encourage governments to “control” what topics cannot be discussed or what speech is regulated. Sooner or later the government will use that against someone you agree with for their own power.
https://corp.oup.com/news/the-oxford-word-of-the-year-2025-i...
Sounds like news broadcasts. Throw in some politics, murders, rapes and economic downturns and you've got your audience hooked watching through the ads.
More like the exposure of institutions. It's not that they were more noble previously; their failings were just less widely understood. How much of America knew about Tuskegee before the internet? Or the time National Geographic told us all about the Archaeoraptor, ignoring prior warnings?
The above view is also wildly myopic. Did you think modern society had overcome populist ideas, extreme ideas, and social revolution, all of which have been very popular historically? Human nature does not change.
Another thing that doesn't change? There are always, as evidenced by your own comment, people saying the system wasn't responsible, that it's external forces harming the system. The system is immaculate; the proletariat are stupid. The monarchy didn't cause the revolution; ignorant ideologues did. In any other context, that's called black-and-white thinking.
https://en.wikipedia.org/wiki/Unethical_human_experimentatio...
I never understood why this doesn't alarm more people on a deep level.
Heck, you wouldn't get ethics approval for animal studies on half of what we know social media companies do, and for good reason. Why do we allow this?
Also I would like an example of something a social media company does that you wouldn't be able to get approval to do on animals. That claim sounds ridiculous.
One possible example is the emotion manipulation study Facebook did over a decade ago[0]. I don't know how you would perform an experiment like this on animals, but Facebook has demonstrated a desire to understand all the different ways its platform can be used to alter user behavior and emotions.
0: https://www.npr.org/sections/alltechconsidered/2014/06/30/32...
I think this is a good example of how disconnected and abstract the conversations about social media have become. There's a common theme in these HN threads where everything social media companies do is talked about like some evil foreign concept, but if any of us were to do basic A/B testing on a website, that would be considered perfectly normal.
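"Basic A/B testing" here usually means nothing more exotic than deterministically bucketing users, something like this sketch (hash-based assignment is one common approach, not any particular company's implementation):

```python
import hashlib

def variant(user_id: str, experiment: str, arms=("control", "treatment")) -> str:
    """Deterministically assign a user to an experiment arm.

    Hashing (experiment, user_id) gives a stable, roughly uniform split
    without storing any per-user assignment state."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

# e.g. variant("user42", "new_button_color") -> "control" or "treatment"
```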
Likewise, the dissonance of calling for heavy regulations on social media sites or restrictions on freedom of speech is ironic given that Hacker News fits the definition of a social media site with an algorithmic feed. There's a deep otherness ascribed to what's called social media and what gets a pass.
It gets really weird in the threads demanding ID verification for social media websites. I occasionally jump into those threads and ask those people if they'd be willing to submit to ID verification to use Hacker News and it turns into mental gymnastics to claim that Hacker News (and any other social platforms they use like Discord or IRC) would be exempt under their ideal laws. Only platforms other people use would be impacted by all of these restrictions and regulations.
Not sure what public infrastructure has to do with it. Access to public infrastructure doesn't confer the right to regulate anything beyond how the public infrastructure is used. And in the case of Meta, the internet infrastructure they rely on is overwhelmingly private anyway.
Fun fact: the last data privacy law the US passed was about video stores not sharing your rentals. Maybe it's time we start passing more; after all, it's not like these companies HAVE to conduct business this way.
It's all completely arbitrary; there's no reason why social media companies can't be legally compelled to divest from all user PII and forced to go to regulated third-party companies for such information. Or forced to allow export of user data, or to follow consistent standards, so that competitors can easily enter the field and users can easily follow.
You can go for the throat and say that social media companies can't own an advertising platform either.
Before you go all "oh no, the government should help the business magnates more, not the users," I suggest you study how monopolies existed in the 19th century, because they looked no different from the corporate structure of any big tech company, and see how the government finally regulated those bloodsuckers back then.
I must be really good at asking questions if they have that kind of power. So here's another. How would we ever even know those changes were making users more depressed if the company didn't do research on them? Which they would never do if you make it a bureaucratic pain in the ass to do it.
And, no, I would much rather the companies that I explicitly create an account and interact with to be the ones holding my data rather than some shady 3rd parties.
I don't know why people are being overly reactive to the comment.
Research means different things to different people. For me, research means "published in academic journals". He is merely trying to get everyone on the same page before a conversation ensues.
These types of comments are common on this site because we are actually interested in how things work in practice. We don’t like to stop at just saying “companies shouldn’t be allowed to do problematic research without approval”, we like to think about how you could ever make that idea a reality.
If we are serious about stopping problematic corporate research, we have to ask these questions. To regulate something, you have to be able to define it. What sort of research are we trying to regulate? The person you replied to gave a few examples of things that are clearly ‘research’ and probably aren’t things we would want to prevent, so if we are serious about regulating this we would need a definition that includes the bad stuff but doesn’t include the stuff we don’t want to regulate.
If we don’t ask these questions, we can never move past hand wringing.
If they are going to publish in academic journals, they will have to answer to those bodies. Whether those bodies have any teeth is a whole other matter.
These bodies are exactly what makes academia so insufferable. They're just too filled with overly neurotic people who investigate research way past the point of diminishing returns, because they are incentivized to do so. If I were to go down the research route, there is no way I wouldn't want to do it in the private sector.
Abstract: "To what extent is social media research independent from industry influence? Leveraging openly available data, we show that half of the research published in top journals has disclosable ties to industry in the form of prior funding, collaboration, or employment. However, the majority of these ties go undisclosed in the published research. These trends do not arise from broad scientific engagement with industry, but rather from a select group of scientists who maintain long-lasting relationships with industry. Undisclosed ties to industry are common not just among authors, but among reviewers and academic editors during manuscript evaluation. Further, industry-tied research garners more attention within the academy, among policymakers, on social media, and in the news. Finally, we find evidence that industry ties are associated with a topical focus away from impacts of platform-scale features. Together, these findings suggest industry influence in social media research is extensive, impactful, and often opaque. Going forward there is a need to strengthen disclosure norms and implement policies to ensure the visibility of independent research, and the integrity of industry supported research. "
I meant, I no longer know who to trust. It feels like the only solution is to go live in a forest, and disconnect from everything.
Also feel you wrt living in a forest and leaving this all behind.
As much as I approve of living in forests, you don't need to go that far. Tech bros are fond of things being "frictionless," so add some friction. Delete the social media apps from your phone and use their websites instead. Don't bookmark the sites, but make yourself type in the URLs each time you want to visit. If each visit is intentional, instead of something you do automatically when you're bored, you'll have a better experience.
Because that's where people with that expertise work.
This comes up somewhat frequently in discussions of pet food. Most of the companies doing research into pet food - e.g. doing feeding studies, nutritional analysis, etc - are the manufacturers of those foods. This isn't because there's some dark conspiracy of pet food companies to suppress independent research; it's simply because no one else is funding research in the field.
Academia is basically a reputation-laundering industry. If the cigarette people said smoking is good, or the oil people said the same about oil, you'd never believe them. But they and their competitors fund labs at universities, and sure, those universities may publish stuff they don't like from time to time, but overall things are gonna trend toward "not harmful to benefactors". And then what gets published gets used as the basis for decisions on how to direct your tax dollars, deploy state violence for or against certain things, etc. And of course (some of) the academics want to do research that drives humanity forward or whatever, but they're basically stuck selling their labor to (after several layers in between) the donors for decades in order to eke out a little bit of what they want.
It's not just "how the sausage is made" that's the problem. It's who you're sourcing the ingredients from, who you're paying off for the permit to run the factory, who's supplying your labor. You can't fix this with minor process adjustments.
Whole industries have been paid off for decades; the hope is independent journalists with no ties to anybody but the public they want to reach.
Find one independent journalist on YT with lots of information and sources for it, and you will notice how we have been living in a lie.
On the other hand it puts a big fat question mark over any policy-affecting findings since there's an incentive not to piss off the donors/helpers.
And yet one person kills a CEO, and they're a terrorist.
In terms of health insurance, which is the industry where the CEO got shot, we can pretty definitively say that it's worse. More centralized systems in Europe tend to perform better. If you double the number of insurance companies, then you double the number of different systems every hospital has to integrate with.
We see this on the internet too. It's massively more centralized than 20 years ago, and when Cloudflare goes down it's major news. But from a user's perspective the internet is more reliable than ever. It's just that when 1% of users face an outage once a day it gets no attention, but when 100% of users face an outage once a year everyone hears about it even though it is more reliable than the former scenario.
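A quick back-of-the-envelope check of that comparison, using the numbers from the comment (illustrative only):

```python
# Scenario A: each day, 1% of users experience an outage.
# Scenario B: once a year, 100% of users experience an outage.
p_daily = 0.01
days_per_year = 365

outages_a = p_daily * days_per_year  # expected outages per user per year
outages_b = 1.0                      # one global outage per user per year

print(f"Scenario A: ~{outages_a:.2f} outages/user/year")  # ~3.65
print(f"Scenario B: ~{outages_b:.2f} outages/user/year")  # 1.00
```

Per user, the centralized scenario is the more reliable one; it just fails loudly.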
I'm talking about intentional actions that lead to deaths. E.g. [1] and [2], but there are numerous such examples. There is no plausible defense for this. It is pure evil.
The other poster demonstrated that you have no idea what "need" is. So you also have no idea what a "shortcoming of the present system" is either, because how the hell would you even know?
But that doesn't seem to be true at all. He just had a whole lot of righteous anger, I guess. Gotta be careful with that stuff.
There is a great deal of injustice in the world. Psychologically healthy adults have learned to add a reflection step between anger and action.
By all evidence, Luigi is a smart guy. So one can only speculate on his psychological health, or whether he believed that there was an effective response to the problem which included murdering an abstract impersonal enemy.
I'm stumped, honestly. The simplest explanations are mental illness, or a hero complex (but I repeat myself). Maybe we'll learn someday.
"Former UnitedHealth CEO Andrew Witty published an op-ed in The New York Times shortly after the killing, expressing sympathy with public frustrations over the “flawed” healthcare system. The CEO of another insurer called on the industry to rebuild trust with the wider public, writing: “We are sorry, and we can and will be better.”
Mr. Thompson’s death also forced a public reckoning over prior authorization. In June, nearly 50 insurers, including UnitedHealthcare, Aetna, Cigna and Humana, signed a voluntary pledge to streamline prior authorization processes, reduce the number of procedures requiring authorization and ensure all clinical denials are reviewed by medical professionals. "
https://www.beckerspayer.com/payer/one-year-after-ceo-killin...
When one gets fired, quits, retires, or dies, you get a new one. Pretty fungible, honestly.
But yeah, shooting people is a bad decision in almost all cases.