Given that this is a case about addiction, that feels like a shockingly bad thing to say in defense of your product. Can you imagine saying the same thing about oxycodone or cigarettes?
[0] https://www.npr.org/2026/03/25/nx-s1-5746125/meta-youtube-so...
No, but unfortunately I can very easily imagine people saying it, just like the people who made loads of money from pushing those products did. Also just like the people who are profiting from the spread of gambling are saying now.
Why would someone choose to do a thing if it harms them? There are good arguments against laws that restrict personal freedoms, but this isn't one of them.
Though to be fair, I was mostly pointing out the fact that this was a pretty dumb thing to say for a case like this, especially in a jury trial.
I also hope the reasons are obvious.
It should be no surprise that children can be manipulated by highly intelligent adults.
This obviously means that tech is going to have no choice but to do "age verification". And I don't think there's much of a way to do that that wouldn't be uncomfortable for a lot of us.
Why is this not only OK but the best way for Mark to spend every waking moment of his life?
Money thing? But how often would he think about his bank account versus his products? Maybe it’s pure drive.
For example see the glossary in https://en.wikipedia.org/wiki/Substance_dependence
Based on the fact that many people here disagree about fundamental things, as well as the fact that “liberal” is a highly overloaded term, I think it should be obvious that it’s not obvious what you mean.
Personally, I am leery of any technical definition of “addictive” that operates outside the traditional chemical influences on physiology. So I would not describe gambling as addictive in that sense.
One might have a malady that causes gambling to take on the same physiological vibe, but that’s not what it means for gambling itself to be addictive.
If that is the (heavily simplified) case, is there a distinction for you between a chemically-induced dopamine release from smoking and, say, a button you can press that magically releases dopamine in your brain?
I don’t smoke, but if I did, I’m also fairly certain I would find it hard to stop.
There is a particular hard drug that I could easily be addicted to if it were cheaper and more accessible. Nothing else like it gives me an irresistible craving for more. Not nicotine, ADHD meds or speed, benzos, and not even opioids have the same effect. So after I discovered this about myself, I went on a little journey to test myself for other possible addictions.
Social media? Nope. Video games and TV? Yes. Gambling, hoarding, shopping? No. Sex? Yes. Exercise? Yes.
I can’t rationalize any of it.
If you don’t want to call that addiction, fine, but you can’t deny that it happens.
In the US, regardless of what type of addiction you have, it is considered mental health. Open-market insurance like ACA plans does not cover mental health, so there is no addiction treatment available. Sure, you can be addicted to a substance where your body needs a fix, but it is still treated as mental care. This seems to go directly against what your thoughts are on addiction, but that doesn't say much, as you're just some rando on the interweb expressing your untrained opinions. So am I, but I'm not the one spouting differing opinions with nothing more to back them up than how you feel.
To be sure. But still an obviously dumb thing for a CEO to say though.
Can we definitely say gambling addiction is less serious than alcohol addiction when there's individuals who find the former harder to quit than the latter?
Not careful enough apparently: Nicotine isn't that addictive on its own, tobacco is.
* I'd even change this to say modern nicotine salts in vapes are likely to lead to dependency faster than tobacco. A 5% nicotine salt pod will contain as much nicotine as a full pack of cigarettes, and so vapers tend to consume far more nicotine in a single sitting than they ever could with a cigarette. That, combined with the constant availability, means users of nicotine vapes & pouches (aka, no tobacco) are likely to have a more difficult time quitting than cigarette smokers.
Bottom line, it's still dangerous to dismiss nicotine's addictive potential, with or without tobacco as a delivery method.
That is a very strong claim to make when the current scientific consensus strongly disagrees.
This just comes off as poorly obfuscated self selection. You own a bunch of Meta, Alphabet and other media stocks?
-- Billionaires
Edit to include: I mean, this is coming the same day as the Supreme Court throwing out the piracy case against Cox Communications 9-0. Remember that this case originated with a $1 billion jury verdict against them! It was reversed by an appeals court 5 years later and completely invalidated today. Juries should not handle complex civil litigation, I'm sorry.
Anecdote, but it does seem like a lot of younger folks I speak with are exhausted by the dark patterns and dopamine extraction that top-k social media platforms create.
If agents/AI/bots inadvertently destroy the current incarnation of social media through noise, I think we'll be better for it.
To me this statement reads as both inaccurate and ignorant of human nature. Social media was actually better when it was about individual ego (Myspace/LiveJournal); as obnoxious as that can be, today everything is worse because of petty tribalism. Most conflicts on social media are inter-tribal, whether it’s racial, political, national, or feuding “stan” culture groups. The worst problems come from groups who organize on platforms like Discord or Kiwi Farms to direct harassment campaigns against perceived enemies (or random “lolcow” victims).
Simple observation of the present world and history will tell you that a platform focused on “collective improvement” will only appeal to a small subset of potential users. Of course such a platform would not be a bad thing. Places like this (such as The WELL) used to be common when the internet was dominated by academics, futurists, and tech enthusiasts. But average people are not interested in this kind of platform, and will not participate in good faith in such an environment.
> But average people are not interested in this kind of platform, and will not participate in good faith in such an environment.
I'm not ignorant of human nature and tribalistic tendencies. The undercurrent of my comment is of an optimistic hope (or cope) that we can move past competitive individual validation programming. I'm aware that it's due to our nature, but also aware that it's exploited by dark patterns and extraction at scale through software.
Since we don’t live in a perfect world, I suppose some regulation of the industry would be fair, just as we mitigate the harms of gambling somewhat through regulation. I just worry about regulation being used as a Trojan horse to stifle political organization and/or open communication about corruption, cronyism, and oppression.
It may be that the future is more small platforms where conflict is limited to in-group conflict rather than global platforms where all of humanity’s disagreements are surfaced and turned into fodder for monetization.
This sounds like the original internet.
Before adtech took over.
Getting back to community is key.
Do you have a mechanism for this in mind, incentives-wise? I can't see this making money.
(If we hit the stretch goal, we can upgrade to a raspberry pi!)
Said little sites may run for a bit and die, and the massive monolith remains, at least until another monolith replaces them.
(EDIT: to clarify, I don't mean to build an alternative monopoly, I mean to build alternatives that are big enough to survive as a business, and big enough to be useful; A few million users as opposed to the few billions Facebook and Youtube (allegedly) have)
The reason it's hard to imagine such a thing today is because the tech giants have illegally suppressed competition for so long. If Google or Meta were ordered to break up, and Facebook/Youtube forced to try and survive as standalone businesses, all the weaknesses in their products would manifest as actual market consequences, creating opportunity for competitors to win market share. Anybody with basic coding skills or money to invest would be tripping over themselves to build competing products which actually focus on the things people want or need, because consumers will be able to choose the ones they like.
Even ignoring the adverse selection of who'd subscribe, their ARPU is higher than that in North America: https://www.statista.com/statistics/251328/facebooks-average...
We've tied our incentives to a structure which is not in alignment with continued survival. The real question is how can we incentivize ourselves to continue to exist?
The "the incentive structure says we should all destroy our brains" thing is just a small aspect of that.
> We've tied our incentives to a structure which is not in alignment with continued survival. The real question is how can we incentivize ourselves to continue to exist?
The continued survival of individuals or humanity as a whole? The individuals seem to survive OK, and arguably there's nothing that could convince them to prefer the survival of the amorphous group, save for some kind of brainwashing.
The incentives would be those which have motivated people throughout history: to create something which benefits humanity.
Next, text-only platforms are nice, but niche on the modern internet. People seem to love multimedia, which takes tons of bandwidth/CPU.
Paid-for services don't mean spam-free either. If it's worth paying for, it's worth spammers paying to get in and spam.
Then you have all the questions about what happens if you grow: how do you deal with all the laws around the world, and how do you deal with other legal issues?
Having a site/service of any size can quickly become an expensive mess.
They are going to be (and AI slop already is) so much worse. Once they get ads to work well / seem natural, the dark patterns will pop right back up and the money spigot will keep flowing upwards.
I feel, and it's obvious to most, that the only way a society can truly reform is by a shared consensus over its value system. This verdict could be thrown out by the appellate court (I feel it will be), so this is not the culmination of values resulting in what many hoped for.
It does not seem to me that this is a country where consensus on what, if anything, to put above capital will come about any time soon and with capital it's always been ask for forgiveness rather than permission.
The only time true justice happens is when the harm becomes obvious beyond the shadow of a doubt (e.g. smoking), such that even a monkey can tell it's time, the game is up.
Perhaps one day, when we can look into people's brains with the clarity of glass and the precision of electrons, we will all recognize how bad of an idea social media was.
I don't recall a lot of complaints about Facebook or Instagram when it was actually your friends' content. But now it's force-feeding everybody their own "guilty pleasure" viewing material 24 hours a day. It's fucking sick.
The guy who made the drugs is guilty. The guy who sold the drugs to kids is guilty. But parents who failed to warn kids about drugs and to oversee them properly are also guilty...
Now if we're in a discussion around the cartels, plenty of people do bring up (and there are also those who get annoyed by it) that the drug users are actually the ones funding the cartels via their drug use.
Along these lines, I think another fun comparison might be opioid use and Purdue.
eg: I grew up in a very nasty place. My neighborhood had a few pregnant 13 year old girls and a lot of drunks and smokers, including kids in their early teens. My parents kept me away from it all, while also both having full-time jobs. They put a lot of work into filtering whom I could be friends with and where I was allowed to be. THAT is the job of a parent.
Maybe you don't do this. Certainly I don't. But when looking around, it's much less rosy, and... let's say in blue-collar families it's too common to drug kids with screens so parents have off time. Heck, some are even proud of how modern they are as parents. Any good advice is successfully ignored, and ideas of spending some proper time with kids instead are skillfully avoided. People got lazy and generally expect miracles from life without putting in any miracle-worthy effort.
Companies just maximize their profits as far as the law allows them (and then some more), and expecting nice moral behavior by default is dangerously naive and never true.
But sure, "Parents often give too little fucks for long term welfare of their children", that's definitely it. Parents just hate their kids! What a useful perspective you've brought to the discussion.
Besides a general 'don't be too good' I'm really not sure what companies should do about it. It just seems like it'll lead to some judges allowing rulings against companies they don't like.
Television's goal was always viewer retention as well, they were just never able to target as well as you can on the internet.
The subsequent effects - namely being easier to consume and more addictive - eventually resulted in legislation catching up, and restrictions on what Juul could do. It being "too good" of a product parallels what we're seeing in social media seven years later.
Like most [all?] public health problems, we see individualization of responsibility touted as a solution. If individualization worked, it would have already succeeded. Nothing prevents individualization except its failure of efficacy.
What does work is systems-level thinking and considering it an epidemiological problem rather than a problem of responsibility. Responsibility didn't work with the AIDS crisis, it didn't work on Juul, and it's not going to work on social media.
It is ripe for public health strategies. The biggest impediment to this is people who mistakenly believe that negative effects represent a personal moral failure.
Unless you hurt children; then it's mostly legal and a slap on the wrist.
The result, in these corner cases where eating people is profitable? Shelob.
I’ve argued in the past that the right way to create the change in corporations we want is to change the laws, and people have made valid points that Congress has basically given up on doing that. But even so, civil cases with fines don’t seem like the way to make lasting change. In the analogues to the tobacco fights, there are LAWS that regulate tobacco company behavior as a result. The civil case here isn’t going to result in any law. So what are companies supposed to do? Tiptoe around some ill-defined social boundary and hope they don’t get sued? Because apparently the defense of “no, I didn’t target that person and I didn’t break any laws” is still going to get you fined.

What happens when a company from a conservative location gets sued in a liberal location for causing a social ill? Oh, we’re cool with that. But what if a company from a liberal location gets sued in a conservative location for the same thing? Oh, maybe we don’t like that as much.

I’m taking the libertarian side here. I know plenty of people who don’t watch TV, don’t use Facebook, and I know plenty of people who recognized that they were spending too much time on digital platforms and decided to quit or cut back. So a healthy person can self-regulate on these apps; I’ve seen it and done it. I’m just not sure how much responsibility Meta and YouTube bear in my mind.

If they’re getting fined $3M plus some TBD punitive amount, are we saying that this 20-year-old person lost out on earning that much money in their life, or would need to spend $3M on therapy, because of Meta or YouTube? It feels a little steep of a fine for one person.
If Meta and YouTube really were/are making addictive products, wouldn’t a lot more people be harmed? Shouldn’t this be a class action suit where anyone with mental trauma or depression be included?
I don’t know the details of the case, but I highly doubt that this one plaintiff was targeted specifically, and I doubt their case is that unique. I read tons of news articles about cyber bullying, depression, suicide attempts, and tech addiction. Does every one get to sue Meta and YouTube for $3M now?
Broadly speaking, Section 230 differentiates between publishers and platforms. A platform is like Geocities (back in the day), where the platform provider isn't liable for the content as long as they satisfy certain requirements about having processes for taking down content when required. A bit like the Cox decision today: you're broadly not responsible for the actions of people using your service unless your service is explicitly designed for such things.
A publisher (in the Section 230 sense) is like any media outlet. The publisher is liable for their content but they can say what they want, basically. It's why publishers tend to have strict processes around not making defamatory or false statements, etc.
I believe that any site that uses an algorithmic news feed is, legally speaking, a publisher acting like a platform.
Example: let's just say that you, as Twitter, FB, IG or Youtube were suddenly pro-Russian in the Ukraine conflict. You change your algorithm to surface and distribute pro-Russian content and suppress pro-Ukraine content. Or you're pro-Ukrainian and you do the reverse.
How is this different from being a publisher? IMHO it isn't. You've designed your algorithm knowingly to produce a certain result.
I believe that all these platforms will end up being treated like publishers for this reason.
So, with today's ruling about platforms creating addiction, it's (IMHO) no different to surfacing content. You are choosing content to produce a certain outcome. Intentionally getting someone addicted is functionally no different to changing their views on something.
I actually blame Google for all this because they very successfully sold the idea that "the algorithm" ranks search results like it's some neutral black box but every behavior by an algorithm represents a choice made by humans who created that algorithm.
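The point that "the algorithm" is just human choices in disguise can be made concrete with a toy sketch. Everything here (the signal names, the weights, the posts) is hypothetical and not any real platform's code; the only claim is structural: the weights below are editorial decisions made by whoever wrote them, even if no human ever reviews an individual post.

```python
def rank_score(post, weights):
    """Score a post as a weighted sum of engagement signals."""
    return sum(w * post.get(signal, 0) for signal, w in weights.items())

# These numbers are choices. Weighting outrage reactions six times as
# heavily as likes is an editorial decision baked into "the algorithm".
weights = {"likes": 1.0, "shares": 3.0, "outrage_reactions": 6.0, "dwell_seconds": 0.1}

posts = [
    {"id": "calm",    "likes": 120, "shares": 2, "outrage_reactions": 0,  "dwell_seconds": 300},
    {"id": "enraging", "likes": 10, "shares": 5, "outrage_reactions": 40, "dwell_seconds": 600},
]

# The "enraging" post wins the feed purely because of the chosen weights.
feed = sorted(posts, key=lambda p: rank_score(p, weights), reverse=True)
```

With a different (equally arbitrary) weight vector, the ordering flips, which is the whole argument: neutrality was never on the table.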
> (c) Protection for “Good Samaritan” blocking and screening of offensive material
> (1) Treatment of publisher or speaker
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
This is a protection for being a platform for third-party (including user-generated) content.
Some more discussion on this distinction [2]:
> Section 230’s legal protections were created to encourage the innovation of the internet by preventing an influx of lawsuits for user content.
It goes on to talk about publishers, distributors and Internet Service Providers, the last of which I characterize as "platforms".
By the way, my view here isn't a fringe view [3]:
> One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.
This is exactly my view.
[1]: https://www.law.cornell.edu/uscode/text/47/230
[2]: https://bipartisanpolicy.org/article/section-230-online-plat...
[3]: https://www.naag.org/attorney-general-journal/the-future-of-...
Jury finds Meta liable in case over child sexual exploitation on its platforms
Even if they do what you're saying, lots of people who've used any Meta property in the last 15 years have a potentially viable case, and no future work can swat those away.
Maybe the social media companies could do more to combat all these. They certainly have a level of profit compared to what they provide to the average person that makes people squirm.
But does anyone believe for a second that YouTube is responsible for a person's internet / video watching addiction? It's like saying cable television is responsible for people who binge watch TV.
It's hard to square this circle while sports gambling apps and Polymarket / Kalshi are tearing through the landscape right now with no real pushback
Yes? Is there an algorithm or not?
What about the "infinite" broadcasts found on all television channels?
This is ridiculous and pathetic.
> When presented with internal research and documents showing that Meta knew young children were in fact using its platforms, Zuckerberg said he "always wished" for faster progress to identify users under 13. He insisted the company had reached the "right place over time".
Soon there will be government IDs required to use social media sites because parents can't take phones away from their kids.