It's important to note they aren't creating laws against infinite scrolling, but are ruling against addictive design and pointing to infinite scrolling as an example of it. The wording here is fascinating, mainly because they're effectively acting as arbiters of "vibes". They point to certain features they'd like them to change, but there is no specific ruling around what you can/can't do.
My initial reaction was that this was a terrible precedent, but after thinking on it more I asked myself, "well, what specific laws would I write to combat addictive design?". Everything I thought of had some workaround that could be found, and would equally have terrible consequences in situations where the pattern is actually quite valuable. E.g. if you disallow infinite scrolling, what page sizes are allowed? Can I just have a page of 10,000 elements that lazy load?
Regardless of your take on whether this is EU overreach, I'm glad they're not implementing strict laws around what you can/can't do - there are valuable situations for these UI patterns, even if in combination they can create addictive experiences. Still, I do think that overregulation here will lead to services being fractured. I was writing about this earlier this morning (https://news.ycombinator.com/item?id=47005367), but the regulated friction of major platforms (i.e. Discord with ID laws) is on a collision course with the ease of vibe coding up your own. When that happens, these commissions are going to need to think long and hard about whether having a few large companies to watch over is better than millions of small micro-niche ones.
Hear me out: banning advertising on the Internet. It's the only way. It's the primordial domino tile. You knock that one over, every other tile follows suit. It's the mother of chain reactions. There would be no social media, no Internet as we know it. Imagine having TikTok, YouTube or X trying to survive on subscriptions alone in their current iterations. Impossible. They'd need to change their top priority from "maximizing engagement by fostering addictive behavior" to "offering a product with enough quality for someone to pay a fee in order to be able to use it".
I.e. displaying an ad about Sentry on an Ars Technica page: fine. Displaying an ad about hiking equipment on Ars Technica because I made a Google search and it is estimated I'd like that: not fine. It would kill all the incentive to overtrack; the ROI would no longer justify the cost.
The thing that changed in the mid-2000s was that we found ways to not only provide these services, but extract billions of dollars while doing it. Good for Mark Zuckerberg, but I doubt the internet would be hurting without that.
Now fucking everything about the world is a hustle to monetize every possible nook and cranny around content. There isn't even content anymore, it's nearly all AI slop as a substrate to grow ads on.
I am nostalgic for the era when I found "punch the monkey" irritating. People used to make websites as a labor of love.
My comment about not having a right to business models is in some ways more general. Regardless of whether this business model is protected for some other reason, business models in general aren't, and it's a common flawed argument that they are.
It's been 30 years and no one has been able to continue that "etc".
In practice, this cuts off 80% of the world's population.
Catering to the lowest common denominator is how we got the Burger King guy on Spirit Airlines.
I have and do pay for website access. That doesn't mean much if the current model leaves no paid services to flock to.
Say a kid started throwing tantrums at school. By not punishing/removing him, you restrict the freedom of everyone else.
Fuck ads. What's absurd is tolerating them and the damage they do to media, consumers, kids, lesser and/or more honest businesses, culture, products, and so on all the way to the Windows and macOS system UIs.
At the same time, this has the same energy of "if we release all the files, the system will collapse". Maybe we need the billionaires to feel some pain sometimes (even if yes, we'll feel more overall).
Ads are speech.
No, they are not.
People have been brainwashed and legal systems have been bought and paid for to consider them as such, just like corporations have been whitewashed to be treated as "persons".
In any case, we regulate all other kinds of speech as well: explicit content, libel, classified information, cigarette ads, and so on.
I don’t think you need to count companies being able to put any message out there as free speech.
Granted, that's proven to be a horrible concept. So let's repeal that.
Five dollars a month to subscribe or whatever. If people get the value out of it, you can get them to pay it.
Maybe it could be good again, but not on the path it's on.
[0]: https://matthewsinclair.com/blog/0177-what-if-we-taxed-adver...
If you try to regulate this, everything will be an ad in disguise.
In my opinion, that's the direction we are heading towards with AI anyway.
I'm surprised we haven't seen an instance of 'pay to increase bias towards my product in training' yet.
Require that every user must be shown the exact same ads (probabilistically). Don't allow any kind of interest or demographic based targeting for paid content.
Advertisers would still be able to place ads on pages they know their target audience visits, but wouldn't be able to make those same ads follow that target audience around the internet.
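For what it's worth, a rough sketch of what that rule could look like mechanically (everything here is hypothetical: the ad pool, the weights, the pick_ad helper). The only inputs are the page's own topic and a fixed inventory, never anything about the visitor:

```python
import random

# Hypothetical non-targeted ad selection: every visitor to the same page
# draws from the same ad distribution; no user data is ever an input.
AD_POOL = {
    "tech-news": [("Sentry", 0.5), ("JetBrains", 0.3), ("Hetzner", 0.2)],
    "hiking":    [("Osprey", 0.6), ("Merrell", 0.4)],
}

def pick_ad(page_topic: str) -> str:
    ads = AD_POOL.get(page_topic, [])
    if not ads:
        return "house-ad"
    names, weights = zip(*ads)
    return random.choices(names, weights=weights, k=1)[0]

print(pick_ad("tech-news"))  # ~50% "Sentry", for everyone alike
```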
Interestingly, there are autocratic governments who do try to ban vague things. The goal there is selective enforcement, not good public policy.
You don’t need to go too far down the rabbit hole. You need to introduce friction to ads.
Subscription revenues are tiny when compared to ad revenue, so I expect people will resist this idea ferociously.
Using an ad-blocker gets rid of most visible ads online, but there's still paid content in various forms which may be more effective than straight adverts anyway.
Today, on June 1st 2030, I'd like to announce the launch of the fediverse cooperative, the first cooperative social media platform.
We pay out all our membership fees (minus hosting costs) to our entire cooperative.
To use our servers, you'll obviously have to become a member of our cooperative, paying $100 a month in membership fees, and earning $99.50 a month in dividends.
But we can build a culture that knows how to avoid ads and the technology to enable it.
The product is the same as the speech, whereas in advertising the speech is in sycophantic service of another product.
That won't convince anyone.
We can have word of mouth, genuine, in forums and social media.
We can have reviews, genuine, in websites.
We can have websites which present new products and business, not as paid sponsorships.
We can search on our own initiative and go to their website.
We can have online catalogs.
And tons of other ways.
Making global business harder and forcing things more local actually sounds like a great benefit.
We could use fewer $1T companies and more companies at the few-billion or hundreds-of-millions level too. I miss the "focused on Mac and iPod" era Apple.
They follow industry conventions, visit registries of industry websites, have professional lists where companies submit their announcements (and not to the general public), and so on.
>Try your hand at starting a business and trying to sell goods or services using these methods and see how well it works.
If advertising is banned, it will work just as well as it does for any competitor.
Many don't think businesses should exist in the first place.
Suppose you sell insulation and replacing the insulation in an existing house could save $2 in heating and cooling for each $1 the insulation costs. Most people know that insulation exists, but what causes them to realize that they should be in the market for it when they "already have it"?
The insulation example can be solved by publication of data on average heating costs. When people learn that their neighbors are paying less they will be naturally incentivized to investigate why. Equivalent problems can be solved with the same general technique.
Now all of the "brought to you by America's <industry group>" ads are back in. So is every pharma ad and every other patented product because they don't have to tell you a brand when there is only one producer.
> The insulation example can be solved by publication of data on average heating costs.
Publication where? In the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying "Beware of the Leopard"? Also, who decides to publish it, decides what it will say or pays the costs of writing and distributing it?
No, but they can convince a disinterested party that people aren't aware of <fact about industry that industry wants people to know> because that's actually true.
> Minimum competition requirements can be imposed.
But that brings back the original problem. Company invents new patented invention, how does anybody find out about it?
> a solution being imperfect is not a good reason to leave the problem unaddressed.
This is the legislator's fallacy. Something must be done, this is something, therefore we must do this.
If a proposal is full of problems and holes, the alternative isn't necessarily to do nothing, but rather to find a different approach to the problem.
Proposals that are full of holes are often worse than nothing, because the costs are evaluated in comparison to the ostensible benefit, but then in practice you get only a fraction of the benefit because of the holes. And then people say "well a little is better than nothing" while not accounting for the fact that weighing all of the costs against only a fraction of the benefit has left you underwater.
But I acknowledge that there may be edge cases. My point is that the existence of edge cases does not mean we should permit the harm to continue. Those specific edge cases can be identified and patched. My suggestion is a hypothetical example of a potential such patch, one that might possibly be a net benefit. Maybe it would actually be a net harm, and the restriction should be absolute. The specifics don't matter, it's merely an example to illustrate how edge cases might be patched.
Your objections to this hypothetical example are nit-picking the edge cases of an edge case. They're so insignificant in comparison to the potential harm reduction of preventing advertising that they can be safely ignored.
The same legit things that can cause them to realize it today. Word of mouth, a product review, a personal search that landed them on a new company website, a curated catalog (as long as those things are not selling their placements).
An ad is the worst way to find out about such things - the huge majority range from misleading to criminally misleading to bullshit.
but you can help this by banning all forms of active tracking.
Static ads only, no click tracking, and complete ban on profiling clients and especially on adjusting prices based on client/possible client behavior patterns.
Websites can too.
If you know the kind of articles your readers like, you can find ads that your readers will like.
Pervasive surveillance to make a system that's practically worse than the alternative that doesn't require mass surveillance, and is much simpler and cheaper. Did I say amusing before? Depressing is probably a better fit.
To become a member of the EU, you have to first join the Council of Europe and its European Convention on Human Rights – article 10 of which guarantees the right to free expression. The EU also has its own Charter of Fundamental Rights which says the same thing. And the plan is for the EU to become a party to the Convention in its own right, although that's got bogged down in technical legal disputes and still hasn't happened, despite the 2009 Lisbon Treaty mandating it.
The US First Amendment has no exceptions as worded, but the US Supreme Court has read some into it. The Convention has exceptions listed in the text, although they are vaguely defined – but like the US, the European Court of Human Rights has developed extensive case law on the scope of those exceptions.
The big difference in practice is the US exceptions end up being significantly more narrow than those in Europe. However, given in both, the details of the exceptions are in case law – courts can and do change their mind, so this difference could potentially change (either by narrowing or broadening) in the decades to come.
> "Article 10 of the Human Rights Act: Freedom of expression
1. Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. This Article shall not prevent States from requiring the licensing of broadcasting, television or cinema enterprises.
2. The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary."
Seems to be about as strong as the Soviet Constitution's protections: https://www.departments.bucknell.edu/russian/const/77cons02....
In the 2015 case Perinçek v. Switzerland, the European Court of Human Rights applied Article 10 to find against a Swiss law making it a crime to deny the Armenian genocide. Can you imagine a Soviet court ever striking down a genocide denial law?
The decision is controversial because it introduces a double standard into the Court's case law – it had previously upheld laws criminalising Holocaust denial, now it sought to distinguish the Holocaust from the Armenian genocide in a way many find arbitrary and distasteful – the consistent thing would be to either allow denying both or disallow denying both.
But still, it just shows how mistaken your Soviet comparison is.
The most plausible way would be if the one you're paying to distribute it has some kind of exclusive control or market power over the distribution channel so that you're paying them a premium over competing distributors. But then wouldn't the best way to prevent them from extracting that premium to be to make it so nobody has exclusive control over distribution channels, e.g. by breaking up concentrated markets or requiring federated protocols?
That's a different model than paying a technical writer to do technical writing.
But now how are you distributing either of them?
Yes. You self host it as a company, and it can only be reproduced (if they wish) in outlets (say review sites) when there's no payment or compensation of any kind involved for that.
You have your own website and your copy on it. Don't start that "but if you pay some hosting provider to host that website that would be advertising", or the
"And how do you self-host distribution? You would have to run your own fiber to every customer's house or spin up your own postal service or you're paying someone to do that."
that borders on being obtuse on purpose.
Yes. You're still allowed to pay someone - for YOUR OWN corporate website. Still your copy is not on my fucking social media, news websites, forums, tv programming, and so on.
>and now you have the caravan of trucks going through the loophole because Facebook et al get into the hosting business and then their "spam filter" trusts the things on their own hosting service so using it becomes the way to get seen.
They can go into the hosting business all they want. If they show what they host (i.e. ads) on my social media feed, or links to it there, they're breaking the law. What they host should only be accessible when somebody consciously navigates to it in some hierarchical scheme or directly enters the address/handle.
They're already hosting everything in your feed, and if there were actually no ads then everyone on the site would be paying them to do it, at which point what do you expect to be in your feed?
In any case it's trivial to come up with such a definition that covers most cases. It doesn't matter if it doesn't cover some gray areas or 100% of them. Laws can be supplemented and amended.
We don't have an all-encompassing definition of porn either, but we have legal definitions, and we have legal frameworks regarding it.
That's exactly the thing that matters when you're dealing with something where every loophole is going to have a caravan of trucks driving through it.
> We don't have an all-encompassing definition of porn either, but we have legal definitions, and we have legal frameworks regarding it.
You're picking the thing which is a hopeless disaster as your exemplar?
Everything with profit "is going to have a caravan of trucks driving through it". We have laws anyway for those things, and for the most part, they're effective. I'd take a relative improvement, even if it's not 100%, over free rein.
>You're picking the thing which is a hopeless disaster as your exemplar?
I don't consider it a "hopeless disaster" (except in its effects on society). As a business it's regulated, and for the most part, stays within and follows those regulations. The existence of dark illegal versions of it, or exploitation in the industry, doesn't negate this.
For the most part they're trash. There is a narrow range of effectiveness where the cost of compliance is low and thereby can be exceeded by the expected cost of reasonable penalties imposed at something significantly less than 100% effective enforcement, e.g. essentially all gas stations stopped selling leaded gasoline because unleaded gasoline isn't that much more expensive.
The cost of complying with a ban on advertising is high, so the amount of effort that will be put into bypassing it will be high, which is the situation where that doesn't work.
> As a business it's regulated, and for the most part, stays within and follows those regulations.
It essentially bifurcated content creation and distribution into "this is 100% porn" and "this company will not produce or carry anything that would cause it to have to comply with those rules" which inhibits quality for anything that has to go in the "porn" box and pressures anything in the "not porn" box to be sufficiently nerfed that they don't have to hire more lawyers.
The combination of "most human communication now happens via social media" and "expressing your own sexuality is effectively banned on most major social media platforms" is probably a significant contributor to the fact that people are having less sex now and the fertility rate is continuing to decline.
The ambiguity in the definition frequently causes people to be harassed or subject to legal risk when doing sex education, anatomy, etc. when they're trying to operate openly with a physical presence in a relevant jurisdiction. Conversely, it's the internet and it's global so every terrible thing you'd want to protect anyone from is all still out there and most of the rules are imposing useless costs for no benefits, or worse, causing things to end up in places where there are no rules, not even the ones that have nothing to do with sex.
It's now being used as a guise to extract ID from everyone for surveillance purposes.
It's a solid example of bad regulations setting fire to the omnishambles.
Conflating advertising with free speech is like conflating sex work with reproductive rights.
If they just banned infinite scrolling someone would come up with something equivalent that works slightly differently. Now they need a whole new law. It’s just constant whack-a-mole.
So instead they seem to ban goals. Your thing accomplishes that goal? It’s banned.
It’s a pretty different way than how we seem to do things in the US. But I can see upsides.
You don't, but the EU doesn't need to care about American ideas of free speech. This is actually in some sense the biggest hurdle to all of this, the psychologically defensive posture that somehow assumes that on European territory this should even be a concern. Also as a sidenote this is even within America a kind of revisionist history, the 20th century had plenty of broadcasting and licensing rules. This unfettered, deregulated commercial environment is even in the US a creature of the last ~40-50 years, and those unchained companies, not unironically, then went on to convince everyone to defend that state of affairs given each opportunity.
At the core of the first amendment is the idea that people should not be punished for criticizing their government. I think that idea is worth preserving. But the idea that people are free to say anything they choose, in any context, regardless of its factual status, and also that their permission to do so is limited only by the resources they can muster to promulgate their speech, is an unwarranted extension of that concept.
The cold hard reality is that no matter how much you trust the people in the government today, eventually they will be replaced by people you consider to be the scum of the earth. And when that day comes, you will curse the day you allowed the government to punish speech, because you'll see speech you consider perfectly justified become illegal.
Make a lot of noise about privacy, force massive spend in the general direction of the EU, fund a new layer of bureaucracy, and actually do nothing to harm the toxic business models that were nominally the impetus for all this. Because someone’s gotta pay for all this new “privacy” infrastructure…
In a sense, I'm just agreeing with a fellow comment in the vicinity of this thread that said GDPR is already the EU's shot at banning (targeted) ads---it's just implemented piss-poorly. Personally formulated, my sentiment is that GDPR as it stands today is a step in the right direction towards scaling back advertisement overreach but we have a long way to go still.
Ofc it's impossible to blanket ban targeted ads, because at best you end up in a philosophical argument about what counts as "targeting"; at worst you either (a) indiscriminately kill a whole industry with a lot of collateral casualties or (b) just make internet advertising even worse for all of us.
My position here is that ads can be fine if they
1. are even somewhat relevant to me.
2. didn't harvest user data to target me.
3. are not annoyingly placed.
4. are not malware vectors/do not hijack your experience with dark patterns when you do click them.
To be super clear on the kind of guy talking from his soapbox here: I only browse YT on a browser with ad blockers, but I don't mind sponsor segments in the videos I watch. They're a small annoyance, but IMO trying to skip them is already a bigger annoyance, hence I don't even bother at all. That said, I've never converted from eyeball to customer via sponsor segments.
I'd call this the "pre-algorithmic" advertising approach. It's how your eyeballs crossed ads in the 90s and IMO if we can impose this approach/model in the internet, then we can strike a good balance of having corporations make money off the internet and keeping the internet healthy.
I want to be able to browse the internet for free, where the sites have a sustainable business model and can therefore make high-quality content, but I don't want to have to sign up to a subscription for everything.
I want to be able to host websites that get lots of views, but I don't want that popularity to cost me.
Can someone please come up with something that solves all of these dilemmas for me?
Let's be clear what we mean by "evil". My time is valuable. I have a finite number of heartbeats before I die. If I have to spend 30 seconds watching a damn soap commercial before I get to watch a Twitch stream, that's 36 heartbeats I will never get back. Sure, I could press mute and do something else for 30 seconds that seems more valuable, but that doesn't fit my schedule. Stealing heartbeats is evil.
I have so far optimized against wasting my heartbeats by paying subscriptions to remove ads. Spotify, Twitch, YouTube, Amazon Prime, Apple TV+, and a bunch of others I'm forgetting. Because it's worth $150/month or whatever to not waste my time with the most boring, uninteresting, irrelevant, nauseating crap that advertisers come up with.
And thank science for SponsorBlock, because sponsored segments in videos are the devil. Sponsored segments use the old non-tracking advertisement model. They pay publishers practically nothing because they aren't paying for conversions, but for an estimate based on impressions and track record woo. Bad for publishers, bad for advertisers, and bad for content consumers. Everybody loses. I'm well over my lifetime quota of BS from VPNs, MOBAs, and plots of land scams. So many heartbeats lost.
I’m totally fine with outlawing targeted advertising. But even classic broadcast stuff poses the dilemma for me.
I have absolutely noticed I miss out on some things. As an easy example, I don’t tend to know about new TV shows or movies that I might like the way I used to. There’s never that serendipity where you’re watching a show and all of a sudden a trailer for a movie comes on and you say “What is THAT? I’ve got to see that.”
Maybe some restaurant I like is moving into the area. Maybe some product I used to like is now back on the market. It really can be useful.
Sure the information is still out there and I could seek it out, but I don’t.
On the other hand I do not miss being assaulted with pharmaceutical ads, scam products, junk food ads, whatever the latest McDonald’s toy is, my local car dealerships yelling at me, and so much other trash.
I’ve never figured out how someone could draw a line to allow the useful parts of advertising without the bad parts.
“You’re only allowed to show a picture of your product, say its name, and a five word description of what it’s for”.
Nothing like that is gonna be workable.
Such a hard problem.
People would also be better off without 90% of the ad-driven internet.
This is not such an unusual thing in law, as much as us stem-brained people want legal systems to work like code. The most famous example is determining art vs pornography - "I know it when I see it" (https://en.wikipedia.org/wiki/I_know_it_when_I_see_it)
Not, at least, until our machine overlords arrive.
The issue is: if you write a precise wording of what you don't want, a lawyer will go through it word by word and the company will find a way to build something which violates the spirit, but not the exact wording. By being more generic in the wording, lawmakers can cover such cases and future developments with very little need for later corrections, and courts can interpret the intention and the current state of the art.
There are areas where law has to be precise (calculation of tax, criteria for criminal offenses, permissions for authorities, ...), but in many cases good laws are just as precise as needed and as flexible as possible.
When rules are vague enough you can pretty much always find a rule someone is 'breaking' depending on how you argue it.
It's why countries don't just have a single law that says "don't be evil".
<https://en.wikipedia.org/wiki/Precedent>
The equivalent doctrine under a civil legal system (most of mainland Europe) is jurisprudence constante, in which "if a court has adjudicated a consistent line of cases that arrive at the same holdings using sound reasoning, then the previous decisions are highly persuasive but not controlling on issues of law" (from above Wikipedia link). See:
<https://en.wikipedia.org/wiki/Jurisprudence_constante>
Interestingly, neither the principle of Judicial Review (in which laws may be voided by US courts) nor stare decisis is grounded in either the US Constitution or specific legislation. The first emerged from Marbury v. Madison (1803), heard by the US Supreme Court (<https://en.wikipedia.org/wiki/Marbury_v._Madison>), and the second is simply grounded in legal tradition, dating back to the British legal system. Both could be voided, possibly through legislation, definitely by Constitutional amendment. Or through further legal decisions by the courts themselves.
This is different, it is intentionally ambiguous precisely so bureaucrats get to choose winners and losers instead of consumers.
I'm not saying legislation is a good solution but you seem to be making a poetic plea that benefits the abusers.
Only if you believe everyone else has no agency of their own. I think most people outgrow these things once they have something more interesting in their lives. Or once they're just bored.
Back when this thing was new, everyone was posting pictures of every food item they try, every place they've been to etc.. that seems to slowly change to now where there are a lot more passive consumers compared to a few polished producers.
If you're calling people delivering the content "abusers", what would you call people creating the content for the same machine?
But I do believe we overestimate our own agency. Or more importantly, society is often structured on the assumption that we have more agency than we actually do.
And companies should not be allowed to prey on the vulnerable.
If a company chooses a design and it can be proved through a subpoena of their communications that the design was intended and chosen for its addictive traits, even if there has been no evidence collected for the addictiveness, then the company (or person) can be deemed to have created a design in bad faith to society and penalized for it.
(Well that's my attempt. I tried to apply "innocent until proven guilty" here.)
There is obviously a lot of detail to work out here -- which specific question do you ask users, who administers the survey, what function do you use to scale the fines, etc. But this would force the companies to pay for the addiction externality without prescribing any specific feature changes they'd need to make.
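Purely to make the shape concrete, here is one hypothetical fine-scaling function under made-up assumptions (the regret-survey framing, the 25% threshold and the 6% revenue cap are all invented for illustration, not a proposal):

```python
def addiction_fine(regret_rate: float, annual_revenue: float,
                   threshold: float = 0.25, max_share: float = 0.06) -> float:
    """Hypothetical: the fine grows with the share of surveyed users who say
    they regret their time on the platform, capped at a fixed share of revenue."""
    excess = max(0.0, regret_rate - threshold)
    return min(max_share, excess * 0.2) * annual_revenue

# 40% of surveyed users report regret, $10B revenue -> $300M fine
print(addiction_fine(regret_rate=0.40, annual_revenue=10e9))
```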
If the EU passes a law that seems general but starts giving out specific examples ahead of time, they’re outlawing those specific examples. That’s how they work, even if you read the law closely and comply with the letter of the law. And they’ll take a percentage of your global revenue while people shout “malicious compliance” in the virtual streets if they don’t get their way.
No more for profit nets. Time for civil digital infrastructure.
I'd make the algorithms transparent, then attack clearly unethical methods on a case by case basis. The big thing about Facebook in the 2010s was how we weren't aware of how deep its tracking was. When revealed and delved into, it led to GDPR.
I feel that's the only precision method of keeping things ethical.
Only allowing algorithmic feeds/recommendations on dedicated subpages to which the user has to navigate, and which are not allowed to integrate viewing the content would be an excellent start IMO.
That actually makes me think that any page containing addictive design elements should, similar to cigarette warning, carry a blinking, geocities style, header or footer with "WARNING: Ophthalmologist General and Narcologist General warn about dangers of addictive elements on this page".
Not necessarily. The consequences of a few bad micro-niche ones would be, well, micro.
[1] Eg: printables.com (for open source, 3D print files)
Although there is a special place in hell for those who put a website options for customer care at the bottom of an infinite scrolling page...
That's exactly why you don't write legislation to ban infinite scroll but 'addictive' design. Then it's ultimately up to the courts to decide, and they have the necessary leeway to judge that journalctl -f isn't addictive but TikTok is, even if they both use a version of infinite scroll.
I do get the idea though: abusive infinite scroll games/exploits, the compulsion to "finish" the feed.
These laws are harsh... but, as much as I hate to say it, the impact social media has had on the world has been worse.
Laws are supposed to be just that — predictable, enforceable, and obeyable rules, like the laws of physics or biology.
Bad laws are vague and subjective. It may be impossible to remove all ambiguity, but lawmakers should strive to create clear and consistent laws for their citizens.
Else it is not a nation of laws, but a domain of dictators.
Like most famous EU laws, this is not a law for people. Like the banking regulations, the DMA, the GDPR, the AI Act, this law cannot be used by individuals to achieve their rights against companies, and certainly not against EU states, who have repeatedly shown willingness to use AI against individuals, including face recognition (which gets a lot of negative attention and strict rules in the AI Act, and EU member states get to ignore both directly, and they get to allow companies to ignore the rules), and to violate the GDPR against their own citizens (e.g. use medical data in divorce cases, or even tax debt collection, and they let private companies ignore the rules for government purposes, e.g. hospitals can be forced to report if you paid for treatment rather than pay alimony, or rather than pay your back taxes). The first application of the GDPR was to remove links about Barroso's personal history from Google.
These laws can only be used by the EU commission against specific companies. Here's how the process works: someone "files a complaint", which is an email to the EU commission (not a complaint in the legal sense; no involvement of prosecutors, or judges, or any part of the justice system of any member state at all). Then an EU commissioner starts a negotiation process and rules on the case, usually imposing billions of euros in fines or providing publicly-backed loans (in the case of banks). The vast, vast, vast majority of these complaints are ignored or "settled in love" (French legal term: the idea is that some commission bureaucrat contacts the company and "arranges things", never involving any kind of enforcement mechanism). Then they become chairman of Goldman Sachs (oops, that just happened once, giving Goldman Sachs its first communist chairman, yes really. In case you're wondering: Barroso), or join Uber's and Salesforce's executive teams, paid through Panama paper companies.
In other words: these laws are not at all about addictive design, and saving you from it, they're about going after specific companies for political means. Google, Facebook, Goldman Sachs, ...
Ironically the EU is doing exactly what Trump did with tariffs. It's just that Trump is using a sawed-off shotgun where the EU commission is using a scalpel.
Addictive designs and social media have changed a lot in the last 10 years, for one. But more importantly, there's no statute of limitation on making laws.
Of course the GDPR gives individuals rights. Counterexample:
> The first application of the GDPR was to remove links about Barroso's personal history from Google.
The fact that all of these companies aren't European certainly doesn't help, but if you think this and GDPR, DMA etc. are purely schemes to milk foreign companies then you've been drinking way too much cynicism juice.
In the UK at least, the GDPR was incorporated into UK law (where it remains, essentially unmodified, even after Brexit). So it is certainly not necessary to get the EU commission involved to enforce the law. In the UK, the ICO is the relevant regulator. There are other national regulators that enforce the GDPR, such as the French CNIL.
The EU realized they can extort the US big tech. The EU will now just focus on laws and taxing (the war in Ukraine isn't their problem). And frankly, we should just ignore EU laws in the US.
Companies that try to do business in the EU have to follow EU laws because the EU has something that can be used as leverage to make them comply. But if a US company doesn't have any EU presence, there's no need to obey EU laws.
I think you are projecting values on entities that don't share those values. I don't think they'd have any problem destroying a pile of companies and not enabling replacements; they are not pro-business, and they have not shown a history of regulating in a fashion that's particularly designed to enable home-grown EU businesses. Predictability and consistency of enforcement are not their values, either. They don't seem to have any problem saying "act in what we think the spirit of the law is, and if you think you can just understand and follow the letter of it we'll hurt you until you stop".
Wiktionary (2026)
Noun
vibe (plural vibes)
1. (informal, originally New Age jargon, often in the plural) An atmosphere or aura felt to belong to a person, place or thing. [c. 1960s]
The number of paid shills opposing this is a good indicator that it's the right move.
I wonder if we'll get speakeasies where people can get endogenous dopamine kicks from experiencing dark patterns?
I'm not saying social media isn't cancerous and shouldn't be regulated, because it is and it should, I'm saying that in this specific case it's a symptom of a much bigger existing disease and not the root cause of it.
What I'm mostly afraid of now, is that the lesson governments took from this is not that social media should be regulated and defanged of data collection and addictiveness, but instead that governments should keep and seize control of said data collection and addictiveness so they can weaponize it themselves to advance their agendas over the population.
Case in point, the now US-controlled TikTok does more data harvesting than when it was Chinese-owned.[1] At least China couldn't send ICE to your house using that data.
[1] https://www.cbsnews.com/news/tiktok-new-terms-of-service-pri...
Actually both can be true.
So blaming TikTok is a convenient scapegoat for Romania's corrupt establishment to legitimize themselves and deflect their unpopularity, as if it's caused by Russian interference and not their own actions. No, Russian interference just weaponized the massive unpopularity they already had.
So here's a wild idea on how to protect your democracy: how about instead of banning social media, politicians actually get off their kiddie fiddling islands, stop stealing everything not nailed to the ground and do right by their people, so that the voters don't feel compelled to pour gasoline on their country and light it on fire out of spite just to watch the establishment burn with it.
Because when people are educated, healthy, financially well off and taken care of by their government, which acts in their best interest, then no amount of foreign social media propaganda can convince people to throw that all away on a dime. But if your people are at their wits' end and want to see you guillotined, then that negative capital can and will be exploited by foreign adversaries. Notice how you don't see the Swiss or Norwegians voting Russian puppets into power off TikTok, and it's not because they have more control over social media than Romania.
This isn't a Romanian problem BTW; many western countries see similar political disenfranchisement today, and it's why you see western leaders rushing to ban or seize control of social media and free speech, instead of actually fixing their countries according to the pains of the voters.
They use a two-round system to elect their President that works like this:
1. If a candidate gets more than 50% in the first round they are the winner, and there is no second round.
2. If there is no clear winner in the first round, the top two from the first round advance to the second round to determine the winner.
In that election there were 14 candidates. 6 from right-wing parties, 4 from left-wing parties, and 4 independents. The most anyone got in the first round was 22.94%, and the second most was 19.18%. Third was 19.15%. Fourth was 13.86%, then 8.79%.
With that many candidates, and with there being quite a lot of overlap in the positions of the candidates closer to the center, you can easily end up with the more extreme candidates finishing higher, because they have less overlap in positions with the others, and so the voters who find those issues most important don't get split.
You can easily end up with two candidates in the runoff that a large majority disagree with on all major issues.
They really need to be using something like ranked choice.
Firstly, there's many forms of elections, each with their own pros and cons, but I don't think the voting method is the core problem here.
Let's assume Norway had the exact same system and parties as Romania. Do you think Norwegians would have been swayed by an online ad campaign off TikTok to vote a Russian puppet into the last round?
Maybe the education level, standard of living of the population and being a high trust society, is actually what filters malicious candidates, and not some magic election method.
Secondly, what if that faulty election system is actually a feature and not a bug, inserted at the formation of modern Romania after the 1989 revolution? The people from the (former) commies and the Securitate (intelligence services and secret police), still running the country but under different org names and flags, had to patch up a new constitution virtually overnight, and they made sure to create one where they and their parties have an easier time gaming the system in their favor to always end up on top in the new democratic system. But now that backdoor is being exploited by foreign actors.
> Maybe the education level, standard of living of the population and being a high trust society, is actually what filters malicious candidates, and not some magic election method.
My point isn't about filtering malicious candidates. My point is that a "top two advance to runoff if no one wins the first round" system often does a poor job in the face of a plethora of candidates of picking a winner with majority support.
Yes, there are many forms of elections each with their own pros and cons, and that is one of the main cons of that system (and of one round systems where the winner is whoever gets the most votes even if it is not a majority).
Consider an election with 11 candidates and where there is one particular issue X that 80% of the voters go one way on and 20% the other way. The voters will only vote for a candidate that goes their way on X. 9 of the candidates go the same way as 80% of the voters, and the other 2 go the other way. All the candidates differ on many non-X issues but voters don't feel strongly on those. They will pick a candidate that agrees with them on as many of those as they can, but would be OK with a winner that disagrees with them on the non-X issues as long as they agree on X. This results in the vote being pretty evenly split among the candidates that agree on X.
The 9 candidates that agree with the 80% that go one way on X then end up with about 8.9% of the vote each, and the 2 that go the other way end up with 10% each. Those two make it to the runoff, and one of them wins.
Result: a winner that would lose 80-20 in a head to head matchup against any of the 9 who were eliminated in the first round.
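The arithmetic is easy to sanity-check in code (hypothetical candidates and shares matching the 80/20 scenario above, not the real Romanian numbers):

```python
# 80% of voters split evenly across 9 like-minded candidates (~8.89% each),
# 20% split across 2 candidates on the other side of issue X (10% each).
majority_side = [("A%d" % i, 80 / 9) for i in range(9)]
minority_side = [("B%d" % i, 20 / 2) for i in range(2)]

first_round = sorted(majority_side + minority_side,
                     key=lambda pair: pair[1], reverse=True)

runoff = first_round[:2]
print(runoff)  # [('B0', 10.0), ('B1', 10.0)] -- both finalists from the 20% side
```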
Note I didn't say that the 2 on the 20% side of issue X were malicious. They just held a position on that issue the 80% disagree with.
Such a system is also more vulnerable to manipulation like what happened with TikTok in Romania, because with a large field of candidates with roughly similar positions you might not need to persuade a large number of people to vote for an extreme candidate to get that candidate into the runoff.
You'd be technically right, but you're missing 99.9% of the point; you can't dilute these complex topics in such dumb ways and use that as an argument.
Or you could just shut the phone off and/or not install the app. It's a simple solution, really, and one that is available at your disposal today at no cost.
We know plenty of things are quite bad for us, and yet we find them difficult to stop. Somewhat famously difficult to stop.
I think telling people, "just don't..." trivializes how difficult that is.
The amount of people in here right now clamoring for legislation to keep them away from electronics which they themselves purchased is mind-bogglingly insane.
The world is complicated. People's lives are complicated (and often mediated by their phones). People's emotional and social wellbeing is complicated, and simply ghosting all your social groups on a random Tuesday is likely to cause significant problems.
If basically everyone who takes it for a while gets addicted and dies of course it should be forbidden.
So I would argue that cigarettes should not be allowed, but we could discuss cocaine.
Social media addiction is a mental illness worthy of public mockery. Imagine if alcoholism could be cured by putting your phone in a drawer.
Next time I see a guy in a doorway with a needle sticking out of his arm I'll be sure to tell him, "I know how you feel man, I can't stop scrolling through Instagram. Sometimes, if I'm lucky, a girl will DM me her boobs. It's tough, these addictions."
Enough with the melodrama. Grow up.
Sheesh, maybe we should start fining individual developers too if companies aren't able to do it themselves.
Laws are not created to be malleable about the population's trivial mental illnesses.
We don't need new laws on the books because some people are incapable of turning their phones off. They have addictive personalities and will fulfill this by other means, while everyone high-fives claiming success.
I'm proud of you that you are as disconnected as you are. I'm the same -- ditched my addictive social media accounts back in like 2011 -- but not everyone is like us.
There will never be anything close to uniformity, so we must decide if we cripple freedom to protect the weak while increasing bureaucracy and authoritarianism, or allow natural selection to take its course while improving treatment of symptoms.
I'm empathetic to the struggle of addiction, which is a real and terrible thing, but I don't think we should create vague nanny laws as a solution. Even if you're an addict, personal responsibility is still a thing.
I have a feeling natural selection will take its course at the level of nations, with nations that do protect their weak surviving and the ones that let profit extractors exploit and abuse theirs dying off.
This is an exaggeration intended to provoke.
>allow natural selection to take its course
This is hideous.
>I'm empathetic to the struggle of addiction
You are very strongly implying that this is untrue.
Well, we do want to protect the weak (that's a function of society, after all), and I'm totally okay with removing infinite scrolling from social media apps (or "crippling freedom" as you put it). I don't see any significant benefit it provides to individuals or society. Indeed, it has a negative impact on both. So it sounds like a win/win.
Dude, it's 2025.
A few years ago, I accidentally left my phone at home when I went to work, and when I arrived I found that because I no longer had my 2FA device, I couldn't do any work until I went home again and picked it up.
I'm fine without doomscrolling. I've gone from the minimum possible service with internet, to pure PAYG with no internet, and I'm fine with that. But society has moved on, and for a lot of people, phones are no longer an option.
And for a meaningful fraction of people, somehow, I don't get it either, TikTok is the news. Not metaphorically, it's actually where they get news from.
Actually, it's 2026 and has been for six weeks.
> A few years ago, I accidentally left my phone at home when I went to work, and when I arrived I found that because I no longer had my 2FA device, I couldn't do any work until I went home again and picked it up.
Sounds like a personal problem. There are many other 2FA authenticators available. Yubikey, TOTP tokens, smart cards, etc. Using a smartphone (which can lose power at any time) for critical authentication was a silly idea to begin with. I would refuse anything work-related on my personal phone.
D'oh. But fair.
> There are many other 2FA authenticators available.
Specified by job, so no choice in this matter.
> I would refuse anything work-related on my personal phone.
Quite reasonable as a general rule, though my then-employer only required the 2FA app and nothing else, and in this case it would've just meant "get an additional phone".
I suspect the next thing you're going to say is along the lines of "then just switch jobs", though.
I mean even that might not work out. We just switched to MS Teams last year and Microsoft uses a push-based app, not TOTP or other offline keys like we'd used before. And Teams just seems to be getting more popular...
Which of them are available depends on what your company has configured.
If the push version is configured, it's possible it has also installed an MDM profile on your device. Avoid that, or your phone will get wiped when you leave the company in the future.
What a wonderful privileged position you hold. If only everyone could afford to tell their employer to pound sand in the same heroic manner you have undertaken.
So brave.
We have been learning how to induce certain experiences, which correspond to certain substances, for a long time; we're getting more competent at it; this includes social media A/B testing itself to be so sticky that a lot of people find it hard to put down; this is bad, so something* is being done about it.
* The risk being "something should be done; this is something, therefore it should be done"
It's as idiotic a statement as saying "Just stop smoking" around the time when big tobacco was lobbying politicians and bribing scientists and doctors to straight up lie about the deleterious effects of tobacco. It's engineered in such a way as to make it basically impossible for a large swathe of the population to "just not use" the apps.
This learned (or lobbied) helplessness of never changing any laws and we are just stuck with this way of life is silly.
They are trying to block a seemingly harmless mechanism that has proven to be addictive, and that companies have willfully exploited for this very reason, proceeding to wreak havoc on various facets of society while concentrating never-before-seen levels of wealth in the process.
Wealth that in many cases makes them more powerful than the governments that should regulate them, which in many cases drank the kool-aid of self-policing that these companies have gleefully distributed and lobbied for for years. So, enough with these fine principled arguments about slippery slopes that don't exist. What is your comment good for, if not for maintaining a status quo that makes these companies even richer at the expense of everyone?
They're not alone in this by any means, America has also opened their doors for all forms of gambling like Kalshi which now even sponsors news networks of all things.
The EU has this disconnect with the things they push, which makes sense considering their size and the speed at which it moves. One example that comes to mind is how they're both pushing for more privacy online while also pushing for things such as chat control which is antithetical to privacy.
Does social media need regulating? Yeah. Is infinite scrolling where they should be focusing? Probably not, there's more important aspects that should be tackled and are seemingly ignored.
So it should be possible to regulate it.
I guess we don’t let people have hard drugs even if sometimes they just need to escape their painful life. And maybe this could fall under that logic. But we do let people drink, which serves the same purpose. And if I had to choose, I think doomscrolling is more at the level of Drinking, and less at the level of Heroin. So I would actually be fine with an age limit for doomscrolling, after which you have a hands-off approach.
Disclaimer: IANAL and this is not legal advice.
Basically, the law created enough fear among the lawyers that software developers are being advised to include the cookie banner in cases where it isn't strictly needed.
You'd have much better retention rates if you don't cover up the content the viewer is trying to view.
How would you like it if I shoved a banner in your face the moment you walked into a store and forced you to punch a hole in it in order to view items on the shelves?
So uh, don't do that.
You don't need to notify if you use cookies for required functionality like login sessions or remembering a functional setting.
If you're tracking whether they're returning or not your activity is exactly the kind of behaviour the rule is covering because, in legal terms, it's skeezy as fuck.
If your legal team genuinely suggests that, it's likely your company uses the login cookies for some additional purposes.
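To make that distinction concrete, here is a minimal sketch (assuming Flask; the route names and cookie names are made up for illustration) of the line being drawn: a cookie the requested feature needs versus one that exists to track the visitor.

```python
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    resp = make_response("logged in")
    # Strictly necessary: required for the service the user asked for,
    # so no consent banner is needed for it.
    resp.set_cookie("session_id", "opaque-token", httponly=True, samesite="Lax")
    return resp

@app.route("/article")
def article():
    resp = make_response("content")
    # Tracking: only set it if the user has actually consented.
    if request.cookies.get("analytics_consent") == "yes":
        resp.set_cookie("visitor_id", "random-id", max_age=60 * 60 * 24 * 365)
    return resp
```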
Nobody wants to be the EU test case on precisely how "required functionality" is defined. Regardless of what the plaintext of the law says, it should be self-evident that companies will be more conservative than that, especially when the cost is as low as adding one cookie banner and tracking one preference.
https://github.blog/news-insights/company-news/no-cookie-for...
Go to that link, these are the cookies it writes (at least for me):
* _ga
* _gcl_au
* octo
* ai_session
* cfz_adobe
* cfz_google-analytics_v4
* GHCC
* kndctr_*_AdobeOrg_identity
* MicrosoftApplicationsTelemtryDeviceId
* OptanonConsent
* zaraz-consent
Some are from github.blog, some are from the cloudflare.com hosting. Not sure how the laws apply to that. But obviously there are several analytics cookies.
I think in the past you still needed some info box in the corner with a link to the data policy. But I think that isn't needed anymore (to be clear, not a consent dialog, an informational-only thing). Also you can, without additional consent, store a same-site/domain cookie remembering you dismissing or clicking on it and not showing it again (btw. same for opting out of being tracked).
But there are some old pre-GDPR laws in some countries (not EU-wide AFAIK) which do require actual cookie banners (as opposed to GDPR consent dialogs or informational notices). The EU wants them removed, but politics moves slow AF, so I'm not sure what the state of this is.
So yes, without checking whether all the older misguided laws have been repealed, you probably should have a small banner at the bottom telling people "we don't track you but for ... reasons .. [link] [ok]" even if you don't track people :(. But if they haven't been repealed yet, they should be very soon.
Still, such a banner is non-obnoxious and only a little annoying (on PC or tablet; a bit more so on a phone). And it isn't that "harass people until they allow you to spy on them" nonsense we have everywhere.
Having the EU decide on a technical implementation is more of a last ditch effort, like what happened with more than a decade of the EU telling the industry to get its shit together and unify under a common charging port.
1. GDPR consent dialogs are not cookie popups, most things you see are GDPR consent dialogs
2. GDPR consent dialogs are only required if you share data, i.e. spy on the user
3. GDPR had from the get-go a bunch of exceptions, e.g. you don't need permission to store a same-site cookie indicating that you opted out of tracking _iff_ you don't use it for tracking. Same for a lot of other things where the data is needed for operation, as long as the data is only used for that thing and not given away. (E.g. DDOS protection, bot detection, etc.)
4. You still had to inform the user, but this doesn't need any user interaction or accepting anything, nor does it need to be a popup blocking the view. A small notice in the corner of the screen with a link to the data policy is good enough. But only if all that you do falls under 3. or is non-personal information. Furthermore, I think they recently updated it to not even require that; just having a privacy policy in a well-known place is good enough, but I'd have to double check. (And to be clear, this is for data you don't need permission to collect, but like any data you collect it's strictly use-case bound and you still have to list how it's used, how long it's stored, etc., even if you don't need permission.) Also, to be clear, if you accept the base premise of GDPR it's pretty intuitive to judge whether something is an exception or not.
5. In some countries, there are highly misguided "cookie popup" laws predating GDPR (they are actually about cookies, not data collection in general). These are national laws, and as such the EU would prefer to have them removed. Work on it is in progress but is taking way too long, and I'm not fully sure about the state of that. So in that context, yes, they should and want to kill "cookie popups". That just doesn't mean what most people think it does (as it has nothing to do with GDPR).
Sadly whenever this kind of discussion pops up it's usually a very unpopular take.
Most sites didn't need a banner. Even post-GDPR, many use-cases don't need one.
I'm interested to see what measures people will use to get around the increasingly bizarre restrictions. Perhaps an official browser extension for each platform that reimplements bureaucrat-banned features?
Whatever happened to freedom?
IMO it's a feature that's not valuable enough to justify the fact that it contributes to poor quality of life for people who can't put it down.
Because it is a dangerous addiction [1] with recognised adverse effects on human health. Like sugar, tobacco, or drugs.
You reduce sugar intake, not eliminate it.
You eliminate cocaine intake, not just reduce it.
Treating social media design as equal to something that can kill people in excess unnerves me.
As it should, because there's a really obvious "slippery slope" argument right there.
But… it can kill people.
There is a certain fraction of the population who, for whatever reason, can be manipulated, to the point of becoming killers or of causing injury to themselves. Social media… actually, worse than that, all A/B testing everywhere, can stumble upon this even when it isn't trying to (I would like to believe that OpenAI's experience with 4o-induced psychosis was unintentional).
When we know which tools can be used for manipulation, it's bad to keep allowing it to run unchecked. Unchecked, they are the tool of propagandists.
But… I see that slippery slope, I know that any government which successfully argues itself the power to regulate this, even for good, is one bad election away from a dictatorship that will abuse the same reasoning and powers to evil ends.
There's also a very good TED talk on this topic from 8 years ago: https://www.youtube.com/watch?v=iFTWM7HV2UI
Why would you not be willing to include "scrolling" as another form of addiction? Just because it gets the same label doesn't mean it gets the same treatment; you yourself are demonstrating that we handle different addictions in different ways.
Social Media is being treated as "sugar" in this instance instead of as "cocaine".
(As I get older, unironically. I want my productive worker bees to be drug free, addiction free, enjoying simple pleasures that do not put me at risk. They pay Social Security. Everything is nice and safe. Freedom? Yeah no thanks, get to work and pay your taxes.)
You think that attacking these horrible companies is bad for our freedoms, we think our freedoms are fine with it.
Freedom from, or freedom to?
‘Freedom does not consist in doing what we want, but in overcoming what we have for an open future; the existence of others defines my situation and is the condition of my freedom. They oppress me if they take me to prison, but they are not oppressing me if they prevent me from taking my neighbour to prison.’ -- Simone de Beauvoir
Why would anyone publicly express any negative opinion about the effects of doomscrolling? I don't think I'm uncharitably paraphrasing, right?
Sarcasm now, but maybe what the near future will look like...
More to the point: this is indeed a massive overreach with the Commission being the police, judge, jury, and executioner... what could go wrong? Exactly what we are seeing is taking shape, precedent by precedent.
Turns out it was a big lie you've told yourself so you can let the rich and powerful get away with atrocities.
Hey, we all have free speech, it's just that I can buy a whole lot more of it than you can.
They avoid mentioning the rest of the social media platforms, which happen to be US-based. It seems they chose a single quick and easy China-based target, more like an experiment to decide for the rest. The key question is when: either the current kids will experience it, or those not yet born will.
ISO 12100 (Safety of Machinery): Sets general, fundamental principles for design, risk assessment, and reduction (Type A standard).
ISO 13849-1 (Safety-Related Parts of Control Systems): Defines performance levels and categories for safety-related components (Type B standard).
ISO 13850 (Emergency Stop Function): Principles for design of the emergency stop function for machinery.
And that's just some of them.
Genuinely curious about the actual data on this.
Does anyone have a link to a reputable, sizable study?
I'm curious how they plan to pretend to enforce this. Will you need a loisence to implement infinite scroll?
Though if it applies to YouTube, it seems annoying when trying to find a video to watch. I usually trigger a few infinite-scrolling loads to look for videos.
And I assume they'd have to specify a maximum number of items per page, or else devs could just load a huge number of items up front which would technically not be infinite scrolling but enough content to keep someone occupied for a long time.
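To make that concrete, here's a rough sketch (hypothetical names; the 50-item cap is invented, nothing like it appears in the ruling) of what a server-side limit on items per response would look like, and why it doesn't really stop a client from reassembling an endless feed.

```typescript
// Hypothetical sketch: a hard server-side clamp on page size, regardless of
// what the client asks for.

interface Page<T> {
  items: T[];
  nextCursor: number | null;
}

const MAX_PAGE_SIZE = 50; // assumed cap, for illustration only

function getPage<T>(all: T[], cursor = 0, requested = MAX_PAGE_SIZE): Page<T> {
  const size = Math.min(Math.max(requested, 1), MAX_PAGE_SIZE); // clamp client request
  const items = all.slice(cursor, cursor + size);
  const next = cursor + size < all.length ? cursor + size : null;
  return { items, nextCursor: next };
}

// A client can still call getPage() in a loop from a scroll handler, which is
// exactly the workaround described above.
```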
Wondering about a technical solution, I couldn't find anything besides fold-out explanations and links to explain jargon. Neither would really bridge the gap.
One obvious theory was to keep track of what the user knows and hide things they don't need, or unhide things they do. This, of course, was not acceptable from a privacy perspective.
Today, however, you could forge a curriculum for countless topics and [artificially] promote a great diversity of entry-level videos. If the user is into something, they can be made to watch more entry-level videos until they are ready for slightly more advanced things. You can reward creators for filling gaps between novice and expert level regardless of view count.
Almost like Khan academy but much slower, more playful and less linear.
Imagine programming videos that assume the viewer knows everything about each and every tool involved. The algorithm could seek out the missing parts and feed them directly into your addiction, or put bounties on the scope.
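A rough sketch of the idea, just to make it concrete (the data model and all names are my own assumptions, not anything a platform actually exposes): track which concepts a viewer already knows and rank candidate videos by how few prerequisites they are missing.

```typescript
// Sketch of a "curriculum gap" recommender (hypothetical model).

interface Video {
  id: string;
  teaches: string[];   // concepts this video introduces
  requires: string[];  // concepts assumed as prior knowledge
}

function nextVideos(catalog: Video[], known: Set<string>, limit = 5): Video[] {
  return catalog
    .filter((v) => !v.teaches.every((c) => known.has(c)))       // still something new to learn
    .map((v) => ({ v, gaps: v.requires.filter((c) => !known.has(c)).length }))
    .sort((a, b) => a.gaps - b.gaps)                            // fewest missing prerequisites first
    .slice(0, limit)
    .map((x) => x.v);
}
```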
You can have a ranked paginated UI. You can also have an "infinite" (until you run out of items, but this is not different for ranked) chronological UI.
It’s of no value to point out both can technically be implemented independently. That isn’t what happened, and even if it did it would still be user hostile.
The sorting algorithm that they choose isn't what makes a UI infinite scrolling or not, they're completely orthogonal. In MVC architecture terms they're model and view respectively...
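To illustrate that orthogonality with a sketch (hypothetical types, not any real platform's code): the ordering lives in the model, the chunking lives in the view, and either ordering can be fed into either presentation.

```typescript
// Model vs. view: ordering and presentation are independent choices.

interface Item { id: string; postedAt: number; score: number; }

// Model: two interchangeable orderings
const chronological = (items: Item[]) => [...items].sort((a, b) => b.postedAt - a.postedAt);
const ranked        = (items: Item[]) => [...items].sort((a, b) => b.score - a.score);

// View: one presentation that works over any ordering
function* paginate(items: Item[], pageSize: number): Generator<Item[]> {
  for (let i = 0; i < items.length; i += pageSize) yield items.slice(i, i + pageSize);
}

// "Infinite scroll" is just pulling the next chunk on demand from the same
// iterator; pagination renders the chunks as discrete pages instead.
```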
Trackers have much more effective techniques than "cookies", kids trivially bypass verification, and designers will make a joke of "tell me you have infinite scrolling without telling me you have infinite scrolling". When you are facing trillions of dollars lined up against your law, what do you think is going to happen?
Maybe if there was an independent commission that had the authority to rapidly investigate and punish (i.e. within weeks) big tech for attempting engagement engineering practices it might actually have some effect. But trying to mandate end user interfaces is wasting everyone's time putting lipstick on a pig.
All just to remove navigation clicks no one minded and reduce server load, in exchange for users suffering laggy lazy loading (oh, what a hate-inducing pattern!) and the inability to preload, print, search, or link.
> We use cookies and other technologies to store and access personal data on your device
Evidently you don't value privacy.
Feeds should be heavily regulated, effectively they are a (personalized!) broadcast, and maybe the same strictures should apply. Definitely they should be transparent (e.g. chronological from subscribed topics), and things like veering more extreme in order to drive engagement should be outlawed.
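As a sketch of what "transparent" could mean in code (the names are mine, nothing here comes from the ruling): only posts from explicitly subscribed topics, strictly newest-first, with no engagement-based re-ranking step anywhere in the pipeline.

```typescript
// Transparent feed sketch: subscriptions in, reverse-chronological order out.

interface Post { id: string; topic: string; postedAt: number; }

function transparentFeed(posts: Post[], subscriptions: Set<string>): Post[] {
  return posts
    .filter((p) => subscriptions.has(p.topic))   // nothing the user didn't ask for
    .sort((a, b) => b.postedAt - a.postedAt);    // newest first, nothing else
}
```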
>"Social media app TikTok has been accused of purposefully designing its app to be “addictive” by the European Commission, citing its infinite scroll, autoplay, push notification, and recommendation features."
All of these have immediate and easy replacements or workarounds. Nothing will substantially change for the better; if anything, it may change for the worse.
Moreover, "purposefully designing something to be addictive" (and cheap to make) is the fundamental basis of late stage capitalism.
hopefully AI will wake them up and save us from all this nonsense
You can buy as much freedom as you want there.
Von der Leyen, who illegally deleted her SMS and is being investigated for corruption, conflict of interest and destruction of evidence, must be glad she can count on you to defend spying on every citizen via "Chat Control" and forcing browser developers to accept any state-mandated root certificates via eIDAS.
This isn’t about addiction, it’s about censorship. If you limit the amount of time someone can spend getting information, and make it inconvenient with UI changes, it’s much harder to have embarrassing information spread to the masses.
Amazingly, the public will generally nod along anyway when they read governmental press releases and say “yes, yes, it’s for my safety.”
They talk about how great Europe is, how they like their 1-2 hour coffee/smoke breaks... These kinds of moves give me that same vibe.
But why are so many Europeans trying to move to the US? Why isn't the opposite happening?
My hypothesis is that these kinds of popular policies are short-sighted. They are super popular because they run on intuition and feeling, but maybe something is missing: unadulterated freedom is what has led people to enjoy these platforms. Obviously it affects the economy, so much so that even the US military has moved from Europe to Asia.
I don't typically like fiction, but it seems "I, Robot" was spot on about Europe. (Maybe mistaking new Africa for Asia)
Curious where you got your statistics?
If anything it’s probably the opposite, with more Americans wanting to move to Europe than the reverse.
Citation needed.
I took some minutes to try and find statistics, and also ChatGPT claims that the EU simply doesn't collect or publish that kind of data, so I'm wondering how you think you know.
All I see in my circle is people refusing to even go on vacation in the US, let alone move there.
I do know people who've gone, but only on vacation, and they were exactly the sort of unthreatening rich white folks you'd expect to have the least trouble. Oh, and some US citizens who went "home" to see family at Xmas but work here.
Like, a significant fraction of the country level of usage. You don't need to worry about the EU coming and taking away your HN client APK. You do need to be worried about Google doing that, though.