> Every time you open LinkedIn in a Chrome-based browser, LinkedIn’s JavaScript executes a silent scan of your installed browser extensions. The scan probes for thousands of specific extensions by ID, collects the results, encrypts them, and transmits them to LinkedIn’s servers.
This does seem invasive. It also seems like what I’d expect to find in modern browser fingerprinting code. I’m not deeply familiar with what APIs are available for detecting extensions, but the fact that it scans for specific extensions sounds more like a product of an API limitation (i.e. no available getAllExtensions() or somesuch) vs. something inherently sinister (e.g. “they’re checking to see if you’re a Muslim”).
I’m certainly not endorsing it; I do think it’s pretty problematic, and I’m glad it’s getting some visibility. But I do take some issue with the alarmist framing of what’s going on.
I’ve come to mostly expect this behavior from most websites that run advertising code and this is why I run ad blockers.
Calling the title misleading because they didn't breach the browser sandbox is wrong when this is clearly a scenario most people didn't think was possible. Chrome added extensionId randomization with the change to V3, so it's clearly not an intended scenario.
> vs. something inherently sinister (e.g. “they’re checking to see if you’re a Muslim”)
They chose to put that particular extension in their target list, how is it not sinister? If the list had only extensions to affect LinkedIn page directly (a good chunk seem to be LinkedIn productivity tools) they would have some plausible deniability, but that's not the case. You're just "nothing ever happens"ing this.
I think most people would interpret “scanning your computer” as breaking out of the confines of the browser and gathering information from the computer itself. If this were happening, the magnitude of the scandal would be hard to overstate.
But this is not happening. What actually is happening is still a problem. But the hyperbole undermines what they’re trying to communicate and this is why I objected to the title.
> They chose to put that particular extension in their target list, how is it not sinister?
Alongside thousands of other extensions. If they were scanning for a dozen things and this was one of them, I’d tend to agree with you. But this sounds more like they enumerated known extension IDs for a large number of extensions because getting all installed extensions isn’t possible.
If we step back for a moment and ask the question: “I’ve been tasked with building a unique fingerprint capability to combat (bots/scrapers/known bad actors, etc), how would I leverage installed extensions as part of that fingerprint?”
What the article describes sounds like what many devs would land on given the browser APIs available.
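For what it’s worth, the “enumerate known IDs” approach is roughly sketchable. A page can’t call a hypothetical getAllExtensions(); the classic workaround was to probe each known extension ID for a web-accessible resource and see whether it loads. This is a sketch only: the IDs and paths below are made-up placeholders, not anything from LinkedIn’s actual list, and the probe is injected so the enumeration logic is visible outside a real browser.

```javascript
// Sketch of extension detection by ID enumeration (IDs below are invented).
// In a real page the probe would attempt to load a web-accessible resource:
//   (id, path) => fetch(`chrome-extension://${id}/${path}`).then(r => r.ok, () => false)
const KNOWN_EXTENSIONS = [
  // [hypothetical extension ID, resource path believed to be web-accessible]
  ["aaaabbbbccccddddeeeeffffgggghhhh", "icons/icon128.png"],
  ["ppppqqqqrrrrssssttttuuuuvvvvwwww", "content.js"],
];

// Returns the subset of catalog IDs the probe reports as present.
function detectInstalled(catalog, probe) {
  return catalog.filter(([id, path]) => probe(id, path)).map(([id]) => id);
}
```

Run a few thousand IDs through a loop like this and, from the page’s side, it’s functionally a getAllExtensions() over the catalog — which is consistent with the “API limitation, not targeting” reading.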
To reiterate, at no point am I saying this is good or acceptable. I think there’s a massive privacy problem in the tech industry that needs to be addressed.
But the authors have chosen to frame this in language that is hyperbolic and alarmist, and in doing so I think they’re making people focus on the wrong things and actually obscuring the severity of the problem, which is certainly not limited to LinkedIn.
> To reiterate, at no point am I saying this is good or acceptable. I think there’s a massive privacy problem in the tech industry that needs to be addressed.
These two sentences highlight the underlying problem: Developers without an ethical backbone, or who are powerless to push back on unethical projects. What the article describes should not be "what many devs would land on" naturally. What many devs should land on is "scanning the user's browser in order to try to fingerprint him without consent is wrong and we cannot do it."
To put it more starkly: if a developer's boss said "We need to build software for a drone that will autonomously fly around and kill infants," the developer's natural reaction should not be: "OK, interesting problem. First we'll need a source of map data, and a vision algorithm that identifies infants..." Yet our industry is full of this "OK, interesting technology!" attitude.
Unfortunately, for every developer who is willing to draw the line on ethical grounds, there's another developer waiting in the recruiting pipeline more than willing to throw away "doing the right thing" if it lands him a six figure salary.
One reason your boss is eager to replace everyone with language models, they won’t have any “ethical backbone” :’)
Same with LLMs. This is a race. Competent people are in demand.
Fighting against these kinds of directives was a large factor in my own major burnout and ultimately quitting big tech. I was successful for a while, but it takes a serious toll if you’re an IC constantly fighting against directors and VPs just concerned about solving some perceived business problem regardless of the technical barriers.
Part of the problem is that these projects often address a legitimate issue that has no “good” solution, and that makes pushing back/saying no very difficult if you don’t have enough standing within the company or aren’t willing to put your career on the line.
I’d be willing to bet good money that this LinkedIn thing was framed as an anti-bot/anti-abuse initiative. And those are real issues.
But too many people fail to consider the broader implications of the requested technical implementation.
Edit: typos
I think using LinkedIn is pretty much agreeing to participate in “fingerprinting” (essentially identifying yourself) to that system. There might be a blurry line somewhere around “I was just visiting a page hosted on LinkedIn.com and was not myself browsing anyone else’s personal information”, but otherwise LinkedIn exists as a social network/credit bureau-type system. I’m not sure how we navigate this need to have our privacy while simultaneously needing to establish our priors to others, which requires sharing information about ourselves. The ethics here is not black and white.
Does something like running the duckduckgo extension not help?
No, yes
Yes, giving these people short (or long, mēh) prison sentences is the only thing that will stop this.
No, the obvious grassroots response is to not use LinkedIn or Chrome. (You mean developers, not consumers, I think. The developers in the trenches should obey if they need their jobs; they are not to blame. It is the evil swine getting the big money and writing the big cheques...)
That involves integrating with tracking providers to best recognize whether a purchase is being made by a bot or not, whether it matches "Normal" signals for that kind of order, and importantly, whether the credit card is being used by the normal tracking identity that uses it.
Even the GDPR gives us enormous leeway to do literally this, but it requires participating in tracking networks that have what amounts to a total knowledge of purchases and browsing you do on the internet. That's the only way they work at all. And they work very well.
Is it Ethical?
It is a huge portion of the reason why ecommerce is possible, and significantly reduces credit card fraud, and in our specific case, drastically limits the ability of a criminal to profit off of stolen credit cards.
Are people better off from my work? If you do not visit our platforms, you are not tracked by us specifically, but the providers we work with are tracking you all over the web, and definitely not just on ecommerce.
Should this be allowed?
Based on their privacy policy, it looks like Sift (major anti-fraud vendor) collects only "number of plugins" and "plugins hash". No one can accuse them of collecting the plugins for some dual-use purpose beyond fingerprinting, but LinkedIn has opened themselves up to this based on the specific implementation details described.
This includes things like the motion of your mouse pointer, typing events including dwell times, fingerprints. If our providers are scanning the list of extensions you have installed, they aren't sharing that with us. That seems overkill IMO for what they are selling, but their business is spyware so...
On the backend, we generally get the results and some signals. We do not get the massive pack of data they have collected on you. That is the tracking company's prime asset. They sell you conclusions using that data, though most sell you vague signals and you get to make your own conclusions.
Frankly, most of these providers work extremely well.
Sometimes, one of our tracking vendors gets default blackholed by Firefox's anti-tracking policy. I don't know how they manage to "Fix" that but sometimes they do.
Again, to make that clear, I don't care what you think Firefox's incentives are, they objectively are doing things that reduce how tracked you are, and making it harder for these companies to operate and sell their services. Use Firefox.
In terms of "Is there a way to do this while preserving privacy?", it requires very strict regulation about who is allowed to collect what. Lots of data should be collected and forwarded to the payment network, who would have sole legal right to collect and use such data, and would be strictly regulated in how they can use such data, and the way payment networks handle fraud might change. That's the only way to maintain strong credit card fraud prevention in ecommerce, privacy, status quo of use for customers, and generally easy to use ecommerce. It would have the added benefit of essentially banning Google's tracking. It would ban "Fraud prevention as a service" though, except as sold by payment networks.
Is this good? I don't know.
That data sounds like it would be very valuable.
But I think if I sell widgets and a prospective customer browses my site, telling my competitors (via a data broker) that customer is in the market for widgets is not a smart move.
How do such tracking networks get the cooperation of retailers, when it’s against the retailers interests to have their customers tracked?
In short, you are not going to solve this problem by blaming developer ethics. You need regulation. To get the right regulation we need to get rid of PACs and lobbying.
Will you do the same for your kids? Would you let the government decide for you what's right and what's wrong?
That is the deal in a state based society. There are alternatives, but are you ready for Council Communism and its ilk?
> Would you let the government decide for you what's right and what's wrong?
Yes, in a state based society
In a state based society, fight for democracy and civil rights. Freedom must be defended.
That is exactly how I interpreted it, and that is why I clicked the link. When I skimmed the article and realized that wasn't the case, I immediately thought "Ugh, clickbait" and came to the HN comments section.
> To reiterate, at no point am I saying this is good or acceptable. I think there’s a massive privacy problem in the tech industry that needs to be addressed.
100% Agree.
So, in summary: what they are doing is awful. Yes, they are collecting a ton of data about you. But, when you post with a headline that makes me think they are scouring my hard drive for data about me... and I realize that's not the case... your credibility suffers.
Also, I think the article would be better served by pointing out that LinkedIn is BY FAR not the only company doing this...
I don't care about how much spying is going on in ESPN. I can ditch it at the shadow of a suspicion. Not so with LinkedIn.
This is very alarming, and pretending it's not because everyone else does it sounds disingenuous to me.
Like everyone else on this thread, I’m not condoning it or saying it’s a good thing, but this post is an exaggeration.
For myself, I agree with you: one should quit (and I will)
Yes, but I also think that most people would interpret "getting a full list of all the Chrome extensions you have installed" as a meaningful escape/violation of the browser's privacy sandbox. The fact that there's no getAllExtensions API is deliberate. The fact that you can work around this by scanning for extension IDs is not something most people know about, and the Chrome developers patched it when it became common. So I don't think it's correct to describe this as something everybody would expect browsers to allow.
I don't think so, because most people understand that extensions necessarily work inside of the sandbox. Accessing your filesystem is a meaningful escape. Accessing extensions means they have identification mechanisms unfortunately exposed inside the sandbox. No escape needed.
It's extremely unfortunate that the sandbox exposes this in some way.
Microsoft should be sued, but browsers should also figure out how to mitigate revealing installed extensions.
In my experience, most people - even most tech people - are unaware of just how much information a bit of script on a website can snag without triggering so much as a mild warning in the browser UI. And tend toward shock and horror on those occasions where they encounter evidence of reality.
The widespread "Facebook is listening to me" belief is my favorite proxy for this ... Because, it sorta is - just... Not in the way folks think. Don't need ears if you see everything!
I think that’s a far more reasonable framing of the issue.
> I don't think describing it as something everybody would expect is totally fine and normal for browsers to allow is correct.
I agree that most people would not expect their extensions to be visible. I agree that browsers shouldn’t allow this. I, and most privacy/security focused people I know, have been sounding the alarm about Chrome itself as unsafe if you care about privacy for a while now.
This is still a drastically different thing than what the title implies.
To take a step back further: what you're saying here is that gathering more data makes it less sinister. The gathering not being targeted is not an excuse for gathering the data in the first place.
It's likely that the 'naive developer tasked with fingerprinting' scenario is close to the reality of how this happened. But that doesn't change the fact that sensitive data -- associated with real identities -- is now in the hands of MS and a slew of other companies, likely illegally.
> But the authors have chosen to frame this in language that is hyperbolic and alarmist, and in doing so I thing they’re making people focus on the wrong things and actually obscuring the severity of the problem, which is certainly not limited to LinkedIn.
The article is not hyperbolizing by exploring the ramifications of this. It's true that this sort of tracking is going on everywhere, but neither is it alarmist to draw attention to a particularly egregious case. What wrong things does it focus on?
I’m not saying it is. My point is that they appear to be trying to accomplish something like getInstalledExtensions(), which is meaningfully different from a small and targeted list like isInstalled([“Indeed.com”, “DailyBibleVerse”, “ADHD Helper”]).
One could be reasonably interpreted as targeting specific kinds of users. What they’re actually doing to your point looks more like a naive implementation of a fingerprinting strategy that uses installed extensions as one set of indicators.
Both are problematic. I’m not arguing in favor of invasive fingerprinting. But what one might infer about the intent of one vs. the other is quite different, and I think that matters.
Here are two paragraphs that illustrate my point:
> “Microsoft reduces malicious traffic to their websites by employing an anti-bot/anti-abuse system that builds a browser fingerprint consisting of <n> categories of identifiers, including Browser/OS version, installed fonts, screen resolution, installed extensions, etc. and using that fingerprint to ban known offenders. While this approach is effective, it raises major privacy concerns due to the amount of information collected during the fingerprinting process and the risk that this data could be misused to profile users”.
vs.
> “Microsoft secretly scans every user’s computer software to determine if they’re a Christian or Muslim, have learning disabilities, are looking for jobs, are working for a competitor, etc.”
The second paragraph is what the article is effectively communicating, when in reality the first paragraph is almost certainly closer to the truth.
The implications inherent to the first paragraph are still critical and a discussion should be had about them. Collecting that much data is still a major privacy issue and makes it possible for bad things to happen.
But I would maintain that it is hyperbole and alarmism to present the information in the form of the second paragraph. And by calling this alarmism I’m not saying there isn’t a valid alarm to raise. But it’s important not to pull the fire alarm when there’s a tornado inbound.
As I’ve stated clearly throughout this thread, the fingerprinting they’re doing is a problem.
Calling it “searching your computer” is also a problem.
> Defending that action is
Nowhere have I defended what LinkedIn is doing.
But at the end of the day, the browser is likely where your most sensitive data is.
Which they would, if they could.
They are scanning users' computers to the maximum extent possible.
No, LinkedIn has much more sensitive data already. Combined with the voracious fingerprinting, this stands out as a particularly dystopian instance of surveillance capitalism.
If that's all it takes to fool you, then it's a pretty trivial way to hide your true intentions.
>Every time any of LinkedIn’s one billion users visits linkedin.com, hidden code searches their computer for installed software, collects the results, and transmits them to LinkedIn’s servers and to third-party companies including an American-Israeli cybersecurity firm.
When I read that, I think they have escaped the browser and are checking which applications I have installed on my computer, not which plugins the browser has in it. Just my 2 cents.
If it has the ability to scan your bookmarks, or visited site history, that would lend more credence to using the term "computer".
The title ought to have said "LinkedIn illegally scans your browser", and that would make clear what is being done without being sensationalist.
I’m not defending the act of scanning for these extensions, and I’m of the opinion that such an API shouldn’t even exist, but just pointing out that there are perfectly legitimate APIs that reveal information that could be framed as “files installed on your computer” that are clearly not “searching your computer” like the title implies.
Having sensationalist titles should be called out at every opportunity.
How'd that work? If it's in memory, the extensions would vanish every time I shut down Chrome? I'll have to reinstall all my extensions again every time I restart Chrome?
Have you seen any browser that keeps extensions in memory? Where they ask the user to reinstall their extensions every time they start the browser?
But the language of "your computer" also implies software on your computer including but not limited to Chrome extensions.
Eg, someone could use the phrase "Won't someone think of the children?" to describe a legitimately bad thing like bank fraud, but the solutions that flow from the problem that "children are in danger" are significantly different from the solutions that flow from "phishing attacks are rampant".
The two issues in this case aren't quite as different as child-endangerment and bank fraud. But if the problem was as the original title describes, the solution is quite different (better sandboxing) than what the actual solution is. Which I don't know, but better sandboxing ain't it.
'ignore the facts! ENEMY!!!' generally doesn't end well for anybody
Like OP, I don't consider behavior confined to the browser to be my computer. "Scans your browser" is both technically correct and less misleading. "Scans your computer" was chosen instead, to get more clicks.
All of the browser extensions I'm aware of are on planet Earth, so I guess you'd have it that LinkedIn is searching the planet for your browser extensions?
By this logic we could also say that LinkedIn scans your home network.
The same way taking a photo of a house from the street is not the same as investigating the contents of your pantry.
While "scanning your browser" would be more accurate and would exclude the interpretation that it scans your files.
The reason the latter is not used is that, even though more precise and more communicative, it would get less clicks.
Checking for extensions is barely anything when you consider the amount of system data a browser exposes in various APIs, and you can identify someone just by checking what's supported by their hardware, their screen res, what quirks the rendering pipeline has, etc. It's borderline trivial and impossible to avoid if you want a working browser, and if you don't the likes of Anubis will block you from every site cause they'll think you're a VM running scraper bot.
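As a rough illustration of how little it takes (the signal names and values here are invented, and the hash is a toy 32-bit FNV-1a, not anything a real fingerprinting library uses), combining a few ordinary browser-observable properties already yields a fairly stable identifier:

```javascript
// Toy 32-bit FNV-1a hash. Real fingerprinting stacks draw on far richer
// entropy sources (canvas rendering, WebGL, audio, font probing) than the
// handful of illustrative signals shown below.
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // 32-bit multiply, keep unsigned
  }
  return h.toString(16);
}

// signals: plain object of properties a page can read without permission,
// e.g. { screen: "2560x1440", tz: "Europe/Berlin", langs: "en-US,de" }
function fingerprint(signals) {
  return fnv1a(
    Object.entries(signals)
      .map(([k, v]) => `${k}=${v}`)
      .join(";")
  );
}
```

The same inputs always hash the same way, and changing any one property changes the identifier — which is why even "boring" data like screen resolution contributes meaningfully to tracking.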
Your browser is a subset of your computer and lives inside a sandbox. Breaching that sandbox is certainly a much more interesting topic than breaking GDPR by browser fingerprinting.
Those profiling tools don't really care which features are going to be used for predictions. It's just machine learning, and it's indiscriminate. So if you have an extension that correlates with you being Muslim, it will be used for whatever ML predictions they give to other companies, and the worst case will be another "oh we didn't do this intentionally".
Of course, that's not the first time this ever happened in human history, so even if it's not "something inherently sinister", it's just "criminal negligence".
Expecting and accepting this kind of thing is why everyone feels the need to run an ad-blocker.
An ad-blocker also isn’t full protection. It’s a cat and mouse game. Novel ideas on how to extract information about you, and influence behavior, will never be handled by ad-blockers until it becomes known. And even then, it’s a question of if it’s worth the dev time for the maker of the ad-blocker you happen to be using and if that filter list gets enabled… and how much of the web enabling it breaks.
The point was more that the headline frames this as some major revelation about LinkedIn, while the reality is that we’re getting probed and profiled by far more sites than most people realize.
They're scanning your extensions to make sure you aren't using third party tools to scrape LinkedIn.
It's stupid, but they're trying to stop people from making money on LinkedIn when they feel like they're the only ones that should be able to do that.
It's pretty wild that we live in a world where the actual FBI has recommended we use ad blockers to protect ourselves, and if everyone actually listened, much of the Internet (and economy) as we know it would disappear. The FBI is like "you should protect yourself from the way that the third largest company in the world does business", and the average person's response is "nah, that would take at least a couple of minutes of my time, I'll just go ahead and continue to suffer with invasive ads and make sure $GOOG keeps going up".
As a data point: I, a technical person who tweaks his computer a lot, was against adblocking for moral reasons (as part of a perceived social contract, where the internet is free because of ads). Only later did I change my mind on this, because I became more privacy-aware.
But ads are all of those things now, so I feel no obligation. I only got an ad blocker around the time ads were becoming excessively irritating.
When I first started out on the internet, ads were banners. Literally just images and a link that you could click on to go see some product. That was just fine.
However, that wasn't good enough for advertisers. They needed animations, they needed sounds, they needed popups, they needed some way to stop the user from just skimming past and ignoring the ad. They wanted an assurance that the user was staring at their ad for a minimum amount of time.
And, to get all those awful annoying capabilities, they needed the ability to run code in the browser. And that is what has opened the floodgate of malware in advertisement.
Take away the ability for ads to be bundled with some executable and they become fine again. Turn them back into just images, even GIFs, and all of a sudden I'd be much more amenable to leaving my ad blocker off.
Most people, including folks on here, think cookie banners are a problem, but they are just an annoying attempt to phish your agreement. As long as these privacy loopholes exist, we will keep hearing such stories, even from large corporations with much to lose, which means the current privacy regulations do not go far enough.
Even back in the 1990s the internet was awash with popups, popunders and animated punch-the-monkey banner ads. And with the speed of dial-up, hefty images slowed down page loads too.
You must be a true Internet veteran if you remember a time ads weren’t annoying!
I'm not trying to be mean I'm just trying to historically parse your sentence/belief.
Because for me this is a simplified analogy of what happened on the internet:
a) we opened a club house called the internet in the early 1990s, just after the time of BBSs
b) a few years later a new guy called commercial business turned up and started using our club house and fucking around with our stuff
c) commercial business started going around our club house rearranging the furniture and putting graffiti everywhere saying the internet is here and free because of it. We're pretty sure it might have even pissed in the hallway rather than use the toilet and the whole place is smelling awful.
d) the rest of us started breaking out the scrubbing brushes and mops (ad blockers, extensions, VPNs, etc) trying to clean up after it
e) some of its friends turned up and started repeating something about social contracts and how business and ads built this internet place
f) the rest of us keep crying into our hands just trying to meet up, break out the slop buckets to clean up the vomit in the kitchen and some of us now have to wear gloves and condoms just to share things with our friends and stop the whole place collapsing
Quantity is a quality in itself. Your BBS was never going to support a million users. Once people figured out the network effect it was over for the masses. They went where the people are, and we've all suffered since.
"we" is doing a lot of work here. No clubhouse got optical switching working and all that fiber in the ground for example. Beyond POC, the Internet was all commercial interests.
This is sort of like arguing cutlery is a military enterprise. Like yes, that’s where knives came from. But that’s disconnected enough from modern design, governance and other fundamental concerns as to be irrelevant. The internet—and less ambiguously, the World Wide Web—are more commercial than military.
Source? Not doubting. But I have a friend who was buying airline tickets through CompuServe in the late 80s/early 90s.
Such as news and magazine sites, many of which are actively dying due to a lack of revenue.
I personally wish these sites could all switch to paid models, because I also don’t like ads.
But absent that, I’d like to support the sites I use so that they don’t go out of business.
Most publishers of content online are ad supported and struggling, and I want to make sure I’m contributing to their revenue somehow.
I don’t feel bad about blocking ads on sites I pay for though.
PayPal, Spotify, Stripe, LinkedIn, Airbnb, Facebook, ResearchGate, Flexport, Nubank, Rippling, Asana, Lyft, Tesla, Microsoft, Apple, SpaceX
You can’t trust anything these days!
If all Android users did this, something would change.
Firefox runs great 99.99% of the time. It’s easy to add extensions. So we should be pushing people to adopt it.
If you're on Android, Firefox supports many full desktop extensions, including uBlock Origin.
Last time I tried Firefox on the iPhone it was rubbish compared with Safari. Same with some ad blocking app I had back in the day.
There's also been other adblock apps for a long while, though (adguard comes to mind).
I think people think ads give way, way more money than they actually do. If you're visiting a website with mostly static ads then you're generating fractions of a cent in revenue for that website. Even on YouTube, you're generating mere cents of revenue across all your watch time for the month.
Why does YouTube premium cost, like, 19 dollars a month then? I don't know, your guess is as good as mine.
Point is, you wouldn't be paying 5.99. You could probably pay a dollar or two across ALL the websites you visit and you'd actually be giving them more money than you do today.
I don't want to defend ads, but whatever replaces them is going to be very disruptive. Maybe better, but very different.
- do these people understand the principles of making good products?
- is anyone clearly working towards a microtransaction system that could replace advertising and subscription models?
After attending two conferences, hundreds of conversations and hours spent researching, my conclusion to both questions was no. The community felt more like an ouroboros. It was disappointing.
I don't want to pay NYT a subscription fee, I want to pay them some fraction of a cent per paragraph of article that I load in. Same goes for seconds of video on YouTube, etc.
Apparently I'm alone in this vision, or at least very rare...
I looked at cryptocurrency because it seems like the obvious naive solution. It doesn't work: the cost of the transaction itself far outweighs the value of the transaction when dealing with fractions of a cent. You want an entire network to be updating ledgers with ~millions of records per ~$1000 moved. The fundamental tech of crypto leans towards slower, higher-value transactions, not high-volume, small ones. Lots of efforts have been made with some coins to bring down the bar of "high value, low volume" to meet everyday consumer usage rates and values, but a transaction history at the scale of every ad impression for every person is a tough ask and would perpetually be in an uphill battle against energy costs.
Ultimately, the conclusion I came to is that the service would need to be centralized, and likely treated as cash by not keeping track of history. Centralized company creates "web credits", user spends $5 for 10,000 credits, these credits are consumed when they visit websites. Websites collect a few credits from each user, and cash out with the centralized company. The issue is that since it would cost more to track and store all the transactions than the value of the transactions themselves, you have to fully trust the company to properly manage the balances.
I started building it and since I would be handling, exchanging, and storing real currency - it seemed subject to a lot of regulations. It is like a combination bank and casino.
I've thought about finishing the project and using disclaimers that buying credits legally owes the user nothing, and collecting credits legally owes the websites nothing, and operating on a trust system, but any smart person would see the potential for a rug pull on that and I figured there would not be much interest.
The alternative route of adhering to all the banking regulations to get the proper insurance needed to make the commitments necessary to users and websites to guarantee exchange between credits and dollars seemed like too much for one person to take on as a side project for free.
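A minimal sketch of the balance-only design described above (all names and amounts invented): the operator only needs current balances per user and per site, not a per-transaction history — which is exactly what makes it cash-like, and exactly why users have to trust the operator's accounting.

```javascript
// Toy centralized "web credits" ledger: users buy credits, sites collect
// them per visit, the operator settles. Only balances are stored; no
// transaction log, so correctness rests entirely on trusting the operator.
class CreditPool {
  constructor() {
    this.balances = new Map(); // user or site id -> credit balance
  }
  buy(userId, credits) {
    this.balances.set(userId, (this.balances.get(userId) ?? 0) + credits);
  }
  spend(userId, siteId, credits) {
    const have = this.balances.get(userId) ?? 0;
    if (have < credits) return false; // insufficient balance
    this.balances.set(userId, have - credits);
    this.balances.set(siteId, (this.balances.get(siteId) ?? 0) + credits);
    return true;
  }
  cashOut(siteId) {
    const credits = this.balances.get(siteId) ?? 0;
    this.balances.set(siteId, 0);
    return credits; // operator owes the site real money for these
  }
}
```

Note there is nothing here a user could audit after the fact, which is the rug-pull risk mentioned above.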
A typical credit is getting paid in, transacted once, and cashed out. And a transaction with a user ID, destination ID, and timestamp only needs 16 bytes to store. So if you want to track every hundredth of a penny individually, then processing a million dollars generates 0.16 terabytes of data. You want to keep that around for five years? Okay, that's around $100 in cost. If you're taking a 1% fee then the storage cost is 1% of your fee.
If your credits are worth 1/20th of a penny, and you store history for 18 months, then that drops the amount of data 17x.
(And any criticisms of these numbers based on database overhead get countered by the fact that you would not store a 10 credit transaction as 10 separate database entries.)
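The arithmetic above checks out; a quick back-of-envelope sketch (the $100-for-five-years storage price is the parent comment's assumption, not verified here):

```typescript
// Back-of-envelope check of the storage estimates above.
const BYTES_PER_TXN = 16; // user ID + destination ID + timestamp

// $1M moved as credits worth 1/100 of a penny each:
const txns = 1_000_000 / 0.0001;                 // ~10 billion transactions
const terabytes = (txns * BYTES_PER_TXN) / 1e12; // ~0.16 TB

// Credits worth 1/20 of a penny, retained 18 months instead of 60:
const reduction = (0.0005 / 0.0001) * (60 / 18); // ~16.7x, i.e. the "17x" above

console.log(terabytes.toFixed(2), reduction.toFixed(1));
```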
Ads were the path of least resistance, and once entrenched, they effectively prevented any alternative from emerging. Now that we've seen how advertising scales, and how it's ruined our mediascape, we're finally looking at alternatives. Not dissimilar to how we reacted to pollution, once we saw it at scale.
And has roughly 2.7 billion monthly active users. This means the average YouTube user brings in around $1.23 per month. When you consider that CPMs can easily swing by 20x based on how wealthy the user demographic is, and that willingness to pay for a subscription is a strong signal of purchasing power, I would not be at all surprised if a YouTube Premium subscription were revenue-neutral for Google.
There would need to be a way for ISPs to know which websites are getting my traffic in order to know who to distribute the money to, which I'm not a fan of. But I think something along those lines, with anonymized traffic data, would work a treat.
> distribute to sites I visit, if it meant zero tracking
How would your ISP know to which sites to distribute the money, if there were no tracking? You would have to either self-host your own VPN server somewhere (maybe on a public cloud provider) or, if you are truly paranoid, use something like Tor.
The problem is that both the ISP and the websites would then go "Cool, we're getting $10 a month from them!" for about a minute before they started trying to come up with ways to start showing you ads anyways. With the level of customer appreciation ISPs tend to show, I'm sure they'd have no problem ignoring your complaints and would happily revoke your service if you stopped paying the now $10-higher price per month.
people with something to share, people with something to say, who share and say it because they want to
that's how pamphleteers worked, that's how the Internet worked
at scale, static (CMS-managed) information sites cost effectively nothing even for arbitrary amounts of traffic, and smoothed across a range of people sharing stuff, it approaches zero per person
publishing used to be free with your ISP, and edge CDN used to be (and still is) free to a point (an incredibly high volume point) as well
having people pay something nominal to say things instead of pay far too much in attention-distraction or money to consume things, would put this all back the right way round
Also, I agree that the platforms and paradigms we have are fucked up, but do believe that people who put work into making something deserve to charge for it if there are folks who’d pay.
The closest we've come is something like Apple News, which allows me to pay for a selected (by them, not me) subset of features on a selected (by them, not me) subset of news sites. Can't somebody do this right?
Apple News remained fantastic until the agreements came up for renewal and publishers demanded the right to insert additional ads.
Apple can't not have premium sources in there, so...
If websites could charge $5.99/month, they would.
If a website were charging $5.99/month, it would not stop spying on you.
Ads are a weird game. People say you're ripping off the website if you adblock, but aren't you ripping off the advertiser if you don't buy the product? If I leave YouTube music playing on a muted PC, someone is losing.
Ads won't go away. They'll just move from infesting websites to infesting AI chatbots.
Something Awful is a one time fee of ten bucks (a few bucks more to get rid of ads).
I wouldn’t really mind a one-time fee for a lot of sites if it meant that they didn’t have to do a bunch of advertising bullshit,
Still, I would be willing to pay a bit more for a website that I actually like if it's a one-time fee; I actually paid for the "Platinum" membership for Something Awful so that I would have access to search, and a custom icon, so I think the total damage was around $30.
Dunno, I guess I just feel like people will pay for things if those things don't suck. I think the fact that the only way that companies can really compete for people's time is giving it away for free [1] is a testament that most stuff on the internet is actually kind of shit.
[1] yeah I know something something you are the product something something.
ETA: I hate self-promotion but a friend of mine told me I should mention that I did write a blog post talking about this very specific example: https://blog.tombert.com/Posts/Personal/2026/02-February/Peo...
Please explain this term. Google was not useful.
But the gist of it is: companies run free-to-play systems that support themselves through a very small portion of their user base spending a very large amount of money. The free/low-paying users find themselves with poor or no service as the companies do anything to attract more whales.
K-shaped economies are somewhat related: you see a very small portion of the participants in an economy make a huge amount of money while everyone else gets poorer.
This is highly debatable. I wouldn't mind paying a bit for the websites I use, as there are just a few platforms and some blogs that I would be happy to pay a small amount for.
Not sure on that. It was far, far better before what drives ads today. I've gotten more value from random people's static HTML pages in 1999, than I ever have from something in the last 25 years.
This just led me to think of news sites, and how they've turned mostly into click-bait farms over the last 10 to 15 years.
Gives me pause. Didn't the king of "doing it online" buy a newspaper, and yet the end result wasn't an improvement on its fate? If there were any way to make cash from news, shouldn't Bezos have been able to do it?
I would pay money for that.
Such content would also suck with flashy ads too.
It's pretty easy tech I think, it's just never hit a flash point. But it could.
We literally had all of this. We had regular, affordable, high quality printed media for every hobby and interest and industry, that you could get delivered to your home address and collect in your own archive if you want, and your local library could do the same.
Those pieces of paper could not track anything about you. They tried, selling their subscriber lists, but that was the best tracking they could provide! You could easily ignore ads, and in return they had to make ads interesting enough in various ways that you might look at them anyway, or they had to make their ads directed at people who went looking for whatever you were selling.
It was an objectively better system in every way.
The Sears catalog was worlds better than Amazon. For one thing, you weren't going to buy a fraudulent item.
Tech is a failure. It has made so many things worse. It has only served to let businesses cut costs while extracting money from every single local community that used to keep that cash circulating locally.
We should ban all internet advertising.
What if we limited advertising to images which don't set tracking cookies, so you would get something sort of like banner headlines. Maybe say the image had to be served from the same place as the rest of the content so you don't get to track readers with image trackers
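Browsers can already enforce roughly the second half of that idea today: a site could send a Content-Security-Policy response header restricting images to its own origin, so no third-party image beacons load at all (this wouldn't stop cookieless fingerprinting by the site itself, just third-party image trackers). A minimal example header:

```http
Content-Security-Policy: img-src 'self'
```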
News only made money when the newspapers could leverage their circulation numbers to run their own ads network. The classifieds section was a money machine. I remember full-page ads in the Washington Post from local car dealerships showing every model they were selling. They likely ran different ads for distribution in other regions, probably 10Xing their money. Google and Facebook killed that.
What Bezos bought was the corpse of a business, but one with strong journalistic credibility, known for historic investigative work such as exposing the Watergate cover-up, that earned public goodwill. He was buying that goodwill and slowly asphyxiating it to align with his own interests.
There is a story about the PlentyOfFish founder (who exited to Match.com for $500M cash) that in the beginning he got $3-4 per click.
Ads are a symptom of the problem that people want human generated content for free; they either do not value the content enough to pay for it, or cannot afford it. Ads do not solve for those problems.
No disagreement there, except the early web was not about scale. The sites you visited may have been created by someone as a hobby, a university professor outlining their courses or research, a government funded organization opening up their resources to the public, a non-profit organization providing information to the public or other professionals, or companies providing information and support for their products (in the way they rarely do today).
> people need to eat, pay for rent
Those people were either creating small sites in their spare time, or were paid to work on larger sites by their employer.
There were undoubtedly gaps in the non-commercial web. On the other hand, I'm not sure that commercializing the web filled those gaps. If anything, it is so "loud" that the web of today feels smaller and less diverse than the web of the 1990's.
How does HN exist? Wealthy benefactors. Do I appreciate it any less? I do not, I am very grateful. But solutions are needed where a wealthy benefactor has not stepped in or does not exist, a commercial business model is untenable, the government does not or will not fund it, and the scale is beyond a single person spending a few hours a week on it for free.
In the 19th century, the economist William Stanley Jevons found that, as coal became cheaper and easier to use, demand for it went up. This was counter to the theories of others, and the principle became known as Jevons Paradox.
Jevons Paradox (a concept that is widely misunderstood, especially when it comes to tech and finance bros talking about AI) holds that as a resource becomes more abundant and easily accessible, demand for that resource rises. As the web took off, people hungered more and more for digital content -- especially as internet access became faster and cheaper.
To keep up -- and to pay for being able to keep up -- increasingly sophisticated monetization models were introduced.
In any case, ad models are one thing. But it's the data brokering that's even more insidious.
The irony is that if internet content were harder to access, the population on the whole wouldn't want it as much.
Now, the culmination of Jevons Paradox has spun itself around a bit in this case. We now live in a world where those profiting off of ad models and data brokering actively try to get people to demand internet content more. (Look no further than the recent social-media-addiction lawsuits.)
I do not think that this is a workable model. Firstly, because it leads inevitably to monopolization, because you don't want to pay 50,000 people for content, you want to pay 10 people for content. Secondly, because most content is bad and a waste of time and you don't find out until after you've bought it. Thirdly, and most importantly, is that there's no actual, clear separation between "news" and "advertising."
Content is generated because people who want that content generated sponsor it beforehand, and dictate the conditions under which the delivery of that content will be accepted as a fulfillment of that sponsorship. The people sponsoring that content can have any number of reasons for doing it; it can make them money directly (i.e. I have articles about cats, people who like cats subscribe to my cat website), which if you're a linear thinker you think is the only way, or it can make them money indirectly, maybe by leading consumers to particular products or political stances that they have a stake in.
This is simply the truth. Your preferences don't matter, and it's not a moral question. If you pay for content, you're more valuable to advertise to, not less. A lot of work is put into producing trash that you regret having read or watched, and that was really intended to make you support Uganda's intervention in a Zambian election (or whatever). If you "value" reading it, you've failed an intelligence test. Its value lies elsewhere, for the people who paid for it to be written.
What's recently shown itself to scale is small groups of people sponsoring journalists and outlets who put out tons of content for free. The motivation of those sponsors is usually to spread the points of view of the journalists they sponsor widely, because they believe them to be good.
There was never a pay model that supported things that people didn't feel passionate about or entertained by. Newspapers cost less than the paper they were printed on. Television news was always a huge money loser that was invested in to raise the social status and respectability of the network. If you feel passionately about anything, you're far better off paying people to listen, to give you a chance, than to lock away content. Journalism as a luxury good can work, but only for Bloomberg terminals and Stratfor, when it is used to make other lucrative decisions by its buyers.
> orgs like Wikipedia, the Internet Archive, and others who have an endowment behind them
This is simply sponsorships by governments and billionaires. Never ever been any significant shortage of that (the patron saint of this is King Alfonso X.*) All of those people have wide interests that can often be served by paying for media to be produced or distributed. It's where we got our first public libraries from.
For me, the fact that Substack and Patreon almost work is more important, and is something that wouldn't have been as easy without the benefits that the internet brings for the collaboration of distant strangers.
-----
[*] https://en.wikipedia.org/wiki/Alfonso_X_of_Castile#Court_cul...
I'm not signing up for a subscription for that journal, but paying a small amount for access to that one article is a no brainer. I don't subscribe to a newspaper either, but I'll happily buy one.
The New European did this a decade ago using "agate" (named after the smallest font you'd get in a newspaper): top up with a few quid, then pay per article.
Sadly didn't catch on. TNE dropped it in 2019[0]. Agate still exists, having been renamed to "axate", but consumers aren't willing to pay with anything other than their time.
[0] https://pressgazette.co.uk/news/new-european-drops-micro-pay...
> Sadly you are atypical and the vast majority are freeloaders
Citation needed.
> who even without ads or tracking will try and find another way not to pay
Why is this relevant? People try to get free stuff all over the place and I don't find it makes my life difficult.
> Citation needed.
I think we need to agree upon a definition of freeloader before citing sources to support the claim. I've found that many people who use the word have a much more transactional view of the world than I do.
No, I won't. I'll just stop using them. So will almost everyone. I don't think there's a single ad-supported product that would survive by converting to a paid subscription, because they're all so profoundly unnecessary.
Sure, we had that in print times too, but we also had a lot more "slow" content that you could sit with and contemplate over a day, week, or month.
One of my favorite uses of AI is to ask it, "what are today's headlines?" You completely bypass all of the sensational nonsense.
We used to have "static" banners on sites, that would just loop through a predefined list on every refresh, same for every user, and it worked. Not for millions of revenue, but enough to pay for that phpbb hosting.
The advertisers started with intrusive tracking, and the sites started with putting 50 ads around a maybe paragraph of usable text. They started with the enshittification, and now they have to deal with the consequences.
There was a time when Boing Boing was a decent little print magazine. And the web site went a decade before turning into... whatever the heck it is now.
And Reality Hackers and Mondo 2000 were "guaranteed unreadable," but they were on the bleeding edge of desktop publishing style and technology.
I'm old enough to remember typing BASIC games from COMPUTE! into my C64 and reading about the latest Star Trek film in Starlog.
I sing the praises of Omni, even though it was clear they were probably snorting a lot of cocaine in their offices.
I can't be the only one who remembers Computer Shopper, but I have to admit it was years before I realized they had a bit of content and were more than just an ad sheet for Micro Center.
PC World wasn't my jam, but I respected the role it played. UnixWorld and Info World were more my thing.
And I even read the stories and articles in Playboy in the 70s. Believe it or not, they had some amazing authors publish stories there.
Hands-up... it was still pretty sexy.
This isn’t Nielsen ratings informing cable networks where to throw up which commercials in certain regions. This is far more dangerous and intense. So the conversation needs to be framed differently than the implied bar of “intrusive/annoying/incessant ads.”
I get why Chrome doesn't, and that's why you should not use it. But Netscape? Edge? What is stopping them?
Browsing the web without an ad blocker is a miserable experience. Users who have never tried one, or don't know how to set one up, would be delighted.
And I don't think Google would lightly give up being the default search engine on the dominant mobile platform in the USA, and significantly more dominant among upper-income users.
The real reason is that the average person neither suffers with ads nor finds ads invasive, despite what a vocal online minority would have you believe. We just ignore them and get on with life. ::shrug::
The suffering isn't acute, it's death by a thousand cuts as your mind erodes into a twitchy mess. Look at the comment section of a nice youtube video and see people outraged at getting blasted with an ad at the wrong moment.
Most people don't like ads, but we love the stimulation of the screen more so we suffer them, regardless of the damage done.
Would it really? It seems to me that most normal users spend most of their time and attention on apps, not in browsers.
They need to be protected by the state because they can't think for themselves.
The problem is in most countries and especially America the state is a corrupt cesspool.
Exactly, because no one in their right mind is going to work for the "state". So the "state" is more like 95% "fucking idiots", as you put it, and that is self-reinforcing.
“uBlock Origin (uBO) is a CPU and memory-efficient wide-spectrum content blocker for Chromium and Firefox. It blocks ads, trackers, coin miners, popups, annoying anti-blockers, malware sites, etc., by default using EasyList, EasyPrivacy, Peter Lowe's Blocklist, Online Malicious URL Blocklist, and uBO filter lists. There are many other lists available to block even more [...]
Ads, "unintrusive" or not, are just the visible portion of the privacy-invading means entering your browser when you visit most sites. uBO's primary goal is to help users neutralize these privacy-invading methods in a way that welcomes those users who do not wish to use more technical means.”
[1] https://github.com/gorhill/uBlock?tab=readme-ov-file#ublock-...
What a silly complaint. How is an ad blocker supposed to work if it can't read and change the data on a website?
You might as well complain that your Camera app wants access to your camera.
> I currently use no extensions to keep my security posture high.
Ironically, skipping uBlock Origin because of the security concern is lessening your security posture. Are you familiar with the term "malvertising"?
uBlock is great, but I am finding fingerprinting that gets past it, and that's what I'm referring to.
But turning on privacy.resistFingerprinting in about:config (or was it privacy.fingerprintingProtection?) would break things randomly (like 3D maps on Google for me; maybe it's related to canvas API stuff?) and made it hard to remember why things weren't working.
Not really sure how to strike a balance of broad convenience vs effectiveness these days. Every additional hoop is more attrition.
I thought uBlock Origin was now dead in Chrome?
I remember a few hacks to keep it going but have now migrated to Firefox (or sometimes Edge…) to keep using it.
--disable-features=ExtensionManifestV2Unsupported

Which is concerning. Until you realise I do the same thing a few days later and I'm still unique.
It is not telling you that the test site has never seen you before, because the eff isn't storing your fingerprint for later analysis and tracking
It could actually tell you about which real tracking vendors are showing you as "Seen and tracked" so it's pretty annoying they don't do that.
If that site shows you as having a unique fingerprint, I guarantee you are being tracked across the web. I've seen the actual systems in usage, not the sales pitch. I've seen how effective these tools are, and I haven't even gotten a look at what Google or Facebook have internally. Even no name vendors that don't own the internet can easily track you across any site that integrates with them.
The fingerprint is just a set of signals that tracking providers are using to follow you across the internet. It's per machine for the most part, but if you have ever purchased something on the internet, some of the providers involved will have information like your name.
Here is what Google asks ecommerce platforms to send them as part of a Fraud Prevention integration using Recaptcha:
https://docs.cloud.google.com/recaptcha/docs/reference/rest/...
If it doesn't store the fingerprints, then how does it tell the difference between:
- 5 identical-looking browsers connecting from 5 different IPs
- 1 browser connecting 5 times from 5 different IPs
We’ve known for a long time that advertisers/“security” vendors use as many detectable characteristics as possible to construct unique fingerprints. This seems like a major enabler of even more invasive fingerprinting, and that seems like the bigger issue here.
But this is about a major corporation sneakily abusing this mechanism to illegally extract specific, sensitive data.
If a company leaks my sensitive data, I get some nice junk mail offering me some period of credit monitoring or whatever. So what are browsers doing to prevent this?
The issue should never be "we want entities to have this data but only to use it in some constrained and arbitrary manner whose definition we can't even agree on." It should be "this data shouldn't be made available to X at all."
The fact that the website is doing this is a bigger problem than the browser not preventing it. If someone breaks into a house, it's the burglar who is prosecuted, not the company that made the door.
If you scanned LinkedIn's private network, you'd be criminally charged. Why are they allowed to scan yours with impunity? And why is this being normalized?
The best solution is a layered defense: laws that prohibit this behavior by the website and browsers that protect you against bad actors who ignore the law.
First, I think it’s a major issue that Chrome is allowing websites to check for installed extensions.
With that said, scanning LinkedIn’s private network is not analogous to what is going on here. As problematic as it is, they’re getting information isolated to the browser itself and are not crossing the boundary to the rest of the OS much less the rest of the internal network.
Problematic for privacy? Yes. Should be locked down? Yes. But also surprisingly similar to other APIs that provide information like screen resolution, installed fonts, etc. Calling those APIs is not illegal. I’m curious to know what the technical legal ramifications are of calling these extension APIs.
This is blatant misinformation. Firefox (and all of its derivatives) also does this.
That can only happen if the extension itself leaks it to the web page and if that happens, scanning isn't necessary since it already leaked what it is to the webpage. It also doesn't tell you what extension it is, unless again, the extension leaks it to the webpage.
The attack on Chrome is far more useful for attackers as web pages can scan using the chrome store's extension ID instead.
Point being: Google will 100% give your info to the police, regardless of whether the police have the legal right to it or not, and regardless of whether you actually committed a crime or not.
Bonus points: the federal court that ruled on the case said that it likely violated the fourth amendment, but they allowed the police to admit the evidence anyway because of the "good faith" clause, which is a new one for me. Time to add it to the list of horribly abusable exceptions (qualified immunity, civil asset forfeiture, and eminent domain coming to mind).
The bad guy here is google. And the people that champion data collection by private companies because of free market == good.
1. Do a request to `chrome-extension://<extension_id>/<file>`. It's unclear to me why this is allowed.
2. Scan the DOM, look for nodes containing "chrome-extension://" within them (for instance because they link to an internal resource)
It's pretty obvious why the second one works, and that "feels alright" - if an extension modifies the DOM, then it's going to leave traces behind that the page might be able to pick up on.
The first one is super problematic to me though, as it means that even extensions that don't interact with the page at all can be detected. It's unclear to me whether an extension can protect itself against it.
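Technique #1 can be sketched roughly like this. The extension IDs and resource paths below are invented for illustration; a real scan loops over thousands of known store IDs, and the probe in an actual page would be a fetch() against the chrome-extension:// URL that only succeeds when the extension is installed and exposes that resource.

```typescript
// A probe answers: does a request to this URL succeed?
// In a real page this would be roughly:
//   (url) => fetch(url).then(() => true, () => false)
type Probe = (url: string) => Promise<boolean>;

// Hypothetical sketch of scanning for extensions by their known
// web-accessible resources (technique #1 above).
async function detectExtensions(
  targets: Map<string, string>, // extension ID -> known resource path
  probe: Probe,
): Promise<string[]> {
  const found: string[] = [];
  for (const [id, path] of targets) {
    // The URL is only reachable if the extension with that ID is installed.
    if (await probe(`chrome-extension://${id}/${path}`)) found.push(id);
  }
  return found;
}
```

This is why static, store-wide extension IDs are the enabling ingredient: with per-install randomized IDs (as Firefox uses), the page has no stable URL to probe.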
Big +1 to that.
The charitable interpretation is that this behavior is simply an oversight by Google, a pretty massive one at that, which they have been slow to correct.
The less-charitable interpretation is that it has served Google's interests to maintain this (mis)feature of its browser. Likely, Google or its partners use techniques similar to what LinkedIn/Microsoft use.
This would be in the same vein as Google Chrome replacing ManifestV2 with ManifestV3, ostensibly for performance- and security-related purposes, when it just so happens that ManifestV3 limits the ability to block ads in Chrome… the major source of revenue for Google.
The more-fully-open-source Mozilla Firefox browser seems to have had no difficulty in recognizing the issues with static extension IDs and randomizing them since forever (https://harshityadav.in/posts/Linkedins-Fingerprinting), just as Firefox continues to support ManifestV2 and more effective ad-blocking, with no issues.
uBlock Origin Lite (compatible w/ ManifestV3) works quite well for me, I do not see any ads wherever I browse.
This is better than forcing the extension to announce its presence on every web site.
For other capabilities, like the Bluetooth API, rather than letting the site query the browser, assume the browser can do it, and then have the browser inform the user that the site is attempting to use an unsupported API.
I think Android’s ‘permissions’ early on (maybe it’s improved?) and Microsoft’s blanket ‘this program wants to do things’ authorisation pop-ups set a standard here that we shouldn’t still be following.
Of course Google is going to back door their browser.
> Of course Google is going to back door their browser.
Aside from the fact that other browsers exist, this makes no sense because Google would stand to gain more by being the only entity that can surveil the user this way, vs. allowing others to collect data on the user without having to go through Google's services (and pay them).
My point isn’t that this is acceptable or that we shouldn’t push back against it. We should.
My point is that this doesn’t sound particularly surprising or unique to LinkedIn, and that the framing of the article seems a bit misleading as a result.
Your point of "I think we’d find that many websites we use are doing this" doesn't make LinkedIn's behavior ok!
By your logic, if our privacy rights are invaded, which is illegal in most jurisdictions, then it becomes OK because many companies do illegal things?
I’m saying that the framing of the article makes this sound like LinkedIn is the Big Bad when the reality is far worse - they’re just one in a sea of entities doing this kind of thing.
If anything, the article undersells the scale of the issue.
The list of extensions they scan for has been extracted from the code. It was all extensions related to spamming and scraping LinkedIn last time this was posted: Extensions to scrape your LinkedIn session and extract contact info for lead lists, extensions to generate AI message spam.
That seems like fair game for their business.
Not according to the website which says:
The scan doesn’t just look for LinkedIn-related tools. It identifies whether you use an Islamic content filter (PordaAI — “Blur Haram objects, real-time AI for Islamic values”), whether you’ve installed an anti-Zionist political tagger (Anti-Zionist Tag), or a tool designed for neurodivergent users (simplify). Under GDPR Article 9, processing data that reveals religious beliefs, political opinions, or health conditions requires explicit consent. LinkedIn obtains none.
It also scans for every major competitor to Microsoft’s own products — Salesforce, HubSpot, Pipedrive — building company-level intelligence on which businesses use which software. Because LinkedIn knows your name, employer, and role, each scan aggregates into a corporate technology profile assembled without anyone’s knowledge.
I think it’s kind of funny that HN has gone so reactionary at tech companies that the comments here have become twisted against the anti-spam measures instituted on a website that will never trigger on any of their PCs, because HN users aren’t installing LinkedIn scrape and spam extensions.
It's unfortunate to see folks here who don't support that – interoperability is at the heart of the Hacker Ethic. LinkedIn (along with any other big tech companies locking down and crippling their APIs) is wrong to even try to block it.
Is it an issue of the resources scrapers consume? No: Even ordinary users trying to get API access on a registered persistent account linked to their name are stymied in accessing their own data. LinkedIn simply doesn't want you to access your own data via API, or in any manner that isn't blessed by them. That ain't right.
Accessing other users' LinkedIn data via the API requires their OAuth consent, as it should be. But you are welcome to access your own data via the API.
Indeed, so I gather all of you have canceled your LI account over this?
I never made one in the first place because it was pretty clear to me that this company - even before the acquisition - had nothing good in mind.
When you're literally the company that invented Kafka for your clickstreams, "everything looks like a nail."
(More likely, though, this is an anti-scraping initiative, since headless browsers are unlikely to randomize their use of extensions, and they can use this to identify potential scrapers.)
They also logically don’t need to fingerprint these users because those people are literally logging in to an account with their credentials.
By all appearances they’re just trying to detect people who are using spam automation and scraping extensions, which honestly I’m not too upset about.
If you never install a LinkedIn scraper or post generator extension you wouldn’t hit any of the extensions in the list they check for, last time I looked.
It’s common for malware extensions to disguise themselves as something simple and useful to try to trick a large audience into installing them.
That’s why the list includes things like an “Islamic content filter” and “anti-Zionist tagger” as well as “neurodivergent” tools. They look for trending topics and repackage the scraper with a new name. Most people only install extensions but never remove them if they don’t work.
Also, having a PQC-enabled extension doesn't seem like a good "large user base capture" tactic.
the source code is as usual obfuscated react but that doesnt mean its malicious...
EDIT: i debuged the extension quickly and it doesnt seem to do anything malicious. it only sends https://pqc-extension.vercel.app/?hostname=[domain] request to this backend to which it has permissions. it doesnt seem to exfiltrate anything else. it might get triggered later but it has very limited permissions anyway so it doesnt seem to be a malicious extension. (but im no expert)
We had a browser extension for our product. A couple times a month someone would clone it, add some data scraping or other malware to it, and re-upload it with the same or similar name.
We set up automated searches to find them. After reporting, it could take weeks to get them removed, sometimes longer. That's for extensions with clear copyright problems!
The extensions may not be breaking any rules of the extension stores if they’re just scraping a website. Many of the extensions on the list are literally designed to do that as their headline feature.
If you think sending data from a page to a server would disqualify an extension from an extension store then think again. Many of the plugins listed even have semi-plausible reasons for uploading the scraped data, like the “anti-Zionist tagger” extension on the list or the ones that claim to blur things that are anti-Islam. Manufacturing a reason to send data to their servers gives them cover.
But that doesn't really matter. For the sake of argument, assume the extensions are not malicious (as evidenced e.g. by the PQC one with ?16 users?): does that change the situation?
You'll have to do better than "Probably."
What is it about the tech bubble that compels people to proactively apologize for and excuse the bad behavior of trillion-dollar companies?
I run a site which attracts a lot of unsavoury people who need to be banned from our services, and tracking them to re-ban them when they come back is a big part of what makes our product better than others in the industry. I don't care at all about tracking good users, and I'm not reselling this data or doing anything malicious; its entire purpose is literally to make the website more enjoyable for the good users.
There are people who actually enjoy using LinkedIn?
It's also heavily scraped by businesses for lead generation for sales and recruiting. Either before their API became available or to not pay them or to get around the restrictions of their API.
No. Don't need extensions for that. See how Cloudflare Turnstile does it, recently popped up at https://news.ycombinator.com/item?id=47566865 cause ChatGPT uses it now:
Layer 1: Browser Fingerprint
- WebGL (8 properties): UNMASKED_VENDOR_WEBGL, UNMASKED_RENDERER_WEBGL, WEBGL_debug_renderer_info, getExtension, getParameter, getContext, canvas, webgl
- Screen (8): colorDepth, pixelDepth, width, height, availWidth, availHeight, availLeft, availTop
- Hardware (5): hardwareConcurrency, deviceMemory, maxTouchPoints, platform, vendor
- Font measurement (4): fontFamily, fontSize, getBoundingClientRect, innerText. Creates a hidden div, sets a font, measures rendered text dimensions, removes the element.
- DOM probing (8): createElement, appendChild, removeChild, div, style, position, visibility, ariaHidden
- Storage (5): storage, quota, estimate, setItem, usage. Also writes the fingerprint to localStorage under key 6f376b6560133c2c for persistence across page loads.
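For illustration, here's a minimal sketch of how a script can fold properties like the ones above into a single identifier. Nothing here is Turnstile's actual code; the FNV-1a hash and the hardcoded property set are assumptions for the example.

```javascript
// Hypothetical sketch: fold browser properties into one fingerprint value.
// The property names mirror the layers listed above; the FNV-1a hash is
// an illustrative choice, not what Turnstile actually uses.
function fingerprint(props) {
  // Serialize in sorted key order so the same inputs always hash the same.
  const canonical = Object.keys(props)
    .sort()
    .map((k) => `${k}=${props[k]}`)
    .join(";");
  // FNV-1a, 32-bit. Math.imul keeps the multiply in 32-bit integer space.
  let h = 0x811c9dc5;
  for (let i = 0; i < canonical.length; i++) {
    h ^= canonical.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

// In a real page these would be read from screen.* and navigator.*;
// hardcoded here so the sketch runs anywhere.
const sample = {
  width: 1920,
  height: 1080,
  colorDepth: 24,
  hardwareConcurrency: 8,
  deviceMemory: 8,
  platform: "Linux x86_64",
};
console.log(fingerprint(sample));
```

The point is that none of these properties is identifying on its own; it's the stable concatenation of dozens of them that becomes near-unique per browser.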
Scanning for 6000 extensions is anti-competitive, surveillant and immoral.
Here is what the article says:
Method 1
async function c() {
  const e = [],
    t = r.map(({id: t, file: n}) => {
      return fetch(`chrome-extension://${t}/${n}`)
    });
  (await Promise.allSettled(t)).forEach((t, n) => {
    if ("fulfilled" === t.status && void 0 !== t.value) {
      const t = r[n];
      t && e.push(t.id);
    }
  });
  return e;
}
Method 2

async function(e) {
  const t = [];
  for (const {id: n, file: i} of r) {
    try {
      await fetch(`chrome-extension://${n}/${i}`) && t.push(n);
    } catch(e) {}
    e > 0 && await new Promise(t => setTimeout(t, e));
  }
  return t;
}
The API call is an HTTP request to chrome-extension://${store_id}/${file_name}.
There is then a second stage where they walk the DOM looking for text signatures and element attributes indicative of the store_id values.
It looks like the user has the freedom to manage this by launching Chrome with this flag: --disable-extensions
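The second "residue" stage can be sketched roughly like this. The signature table and the element shape are invented for illustration; a real scanner would walk document.querySelectorAll("*") against a much larger table.

```javascript
// Hypothetical sketch of "residue detection": look for class names or
// attributes that known extensions inject into the page. The signature
// table here is invented; a real list would be far larger.
const signatures = [
  { extension: "example-highlighter", attr: "data-ext-highlight" },
  { extension: "example-blocker", cls: "ext-blocked-overlay" },
];

function detectResidue(elements, sigs = signatures) {
  const found = new Set();
  for (const el of elements) {
    for (const sig of sigs) {
      if (sig.attr && sig.attr in el.attributes) found.add(sig.extension);
      if (sig.cls && el.classList.includes(sig.cls)) found.add(sig.extension);
    }
  }
  return [...found];
}

// In a browser, `elements` would come from document.querySelectorAll("*");
// plain objects stand in for DOM nodes so the sketch runs anywhere.
const page = [
  { attributes: { "data-ext-highlight": "1" }, classList: [] },
  { attributes: {}, classList: ["nav"] },
];
console.log(detectResidue(page)); // → ["example-highlighter"]
```

This is why disabling the probe alone isn't enough: any extension that visibly modifies the page leaves evidence a site can look for.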
It also seems there is an extension for extension management to deny extension availability by web site: https://superuser.com/questions/1546186/enable-disable-chrom...
Exactly what I think it is. It's all for tracking and ultimately for advertisement. Linkedin can get exactly who you are and then they share that data with ad companies to better target you.
Really gross behavior.
This seems like a really weird argument to make. The fact that the platform doesn't provide a privacy-violating API is not an extenuating circumstance. LinkedIn needed to work around this limitation, so they knew they were doing something sketchy.
For the record, I don't think they're being evil here, but the explanation is different: they don't seem to be trying to fingerprint users so much as trying to detect specific "evil" extensions that do things LinkedIn doesn't want them to do on linkedin.com. I guess that's their prerogative (and it's the prerogative of browsers to take that away).
If LinkedIn really wanted to profile your religious beliefs, they would presumably go after the most popular religion-related extensions, not some "real-time AI for Islamic values" thing with 6k users.
Why exactly does Chrome even allow this in the first place!? This is the most surprising takeaway for me here, given browser vendors' focus on hardening against fingerprinting.
Much better than static global IDs, but still not ideal.
Just run everything in a safe environment that it can't look out of.
Since the extensions are running on the same page as LinkedIn (some of them explicitly modify the LinkedIn website), it's impossible to sandbox them so that LinkedIn can't see evidence of them. And yes, this is how a site knows you have an ad blocker installed.
However, there are other proofs of concept of another attack vector that bypasses this by using timing differences when fetching those resources.
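Roughly, the timing variant looks like this. `fetchFn` is injected so the sketch runs outside a browser; in a real probe it would be fetch against a chrome-extension:// URL, and the interesting signal is that even a failed request can settle faster or slower depending on whether the extension exists.

```javascript
// Hedged sketch of a timing probe: we don't care whether the fetch
// succeeds, only how long it takes to settle. The stub fetchers and the
// idea of classifying on elapsed time are illustrative assumptions.
async function timedProbe(fetchFn, url) {
  const start = Date.now();
  try {
    await fetchFn(url);
  } catch {
    // A rejection is still informative; only the elapsed time matters here.
  }
  return Date.now() - start;
}

// Stub fetchers standing in for "resource exists" vs "instant block".
const slowFetch = () => new Promise((resolve) => setTimeout(resolve, 30));
const fastFetch = () => Promise.reject(new Error("blocked"));

(async () => {
  const t1 = await timedProbe(slowFetch, "chrome-extension://aaaa/x.css");
  const t2 = await timedProbe(fastFetch, "chrome-extension://bbbb/x.css");
  console.log(t1 > t2); // the timing gap is the side channel
})();
```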
I help maintain uBO's lists and I've seen one real-world case doing this. It's a trash shortener site, and they use the `web_accessible_resources` method as one of their anti-adblock measures. Since it's a trash site, I didn't look into it much further.
On the contrary, your framing is quite defeatist IMO. The fact that stores get robbed frequently does not mean we should just normalize that and accept it as a fact of life.
The browser security model right now is more like those completely ineffective "gun free zone" signs cities tack up in public parks.
Then why search for PordaAI or Deen Shield? Or more specifically, since getAllExtensions() would return them, why would they be on the "scan list", instead of just ignored?
Speaking as someone who shares the same lack of surprise, perhaps some alarm is warranted. Just because it's ubiquitous doesn't mean it's OK. This feels very much like the frog in boiling water to me.
Why do you think the alarmist framing is unwarranted?
But it’s critical to sound the correct alarm.
To me, it seems like the authors pulled the fire alarm for a single building when in reality there’s a tornado bearing down.
And by doing so, everyone is scrambling about a fire instead of the response a tornado siren would cause.
They’re both dangerous and worthy of an immediate reaction, but the confusion and misdirection this causes seems deeply problematic.
When people realize the fire wasn’t real, they start to question the validity of the alarm. The tornado is still out there.
I realize this analogy is a bit stretched.
As someone who has spent quite a lot of time steeped in security/privacy research, the stuff described in the article has been happening pervasively across the industry.
People absolutely should be alarmed. Many of us have been alarmed for quite some time. Raising the alarm by saying “LinkedIn is searching your computer” isn’t it.
How many phone apps do you think are trying to detect what else is installed on your phone? I was part of an acquisition of a company with a very large mobile user base and our new parent was shocked we weren't trying to passively collect device information like this. They for sure were.
And on the flip side, as others have done well to point out, there are a LOT of legitimate reasons to fingerprint users for anti-fraud/abuse and I am 100% convinced that we're all better off for this.
Maybe thats all this story is about, maybe not, but this article leaves out an incredible amount of complexity.
Time to figure out if I can make FireFox pretend to be Chrome, and return random browser extensions every time I visit any website to screw up browser fingerprinting...
Your computer is your private domain. Your house is your private domain. You don't make a "getAllKeysOnPorch()" API, and certainly don't make "getAllBankAccounts()" API. And if you do, you certainly don't make it available to anyone who asks.
It absolutely is sinister.
We should not normalise nor accept this behaviour in the first place.
Well, great, there is no available 'getAllFiles()' or such either, because otherwise they'd be scanning your files for "fingerprinting" as well.
> alarmist framing
Well, they are literally searching your computer for applications/extensions that you have installed (and to an extent you can infer some of the desktop applications you have based on that, too).
It's important to note that this isn't fixed by ad blockers. To avoid this kind of fingerprinting, you need to disable JavaScript or use a browser like Firefox which randomizes extension UUIDs.
The people behind this URL are trying to hold Microsoft accountable. More power to them.
But I bet they could reliably guess your religious affiliation based on the presence of some specific browser extensions.
God forbid they make an educated guess based on your actual LinkedIn connections, name, interests, etc.
What's been really obnoxious lately is the number of sites I try to do things on that are straight up broken without turning off my ad-blocker.
Why is this even possible in the first place? It's nobody's business what extensions I have installed.
Yes. I was expecting that LinkedIn was connecting to extensions that use their enhanced privileges to scan your computer, per the "LinkedIn Is Illegally Searching Your Computer" headline.
Instead, LinkedIn is scanning for extensions.
I’ve come to mostly expect this behavior from most websites that run advertising code
We should be alarmed that websites we go to are fingerprinting us and tracking our behavior. This is problematic, full stop. The fact that most websites are doing this doesn't change that.
I would put it more like: it sounds bad, and it's no different from what others do, so they're all that bad.
The fact that they're working around an API limitation doesn't make this better, it just proves that they're up to no good. The whole reason there isn't an API for this is to prevent exactly this sort of enumeration.
It's clear that companies will do as much bad stuff as they can to make money. The fact that you can do this to work around extension enumeration limits should be treated as a security bug in Chrome, and fixed. And, while it doesn't really make a difference, LinkedIn should be considered to be exploiting a security vulnerability with this code.
My understanding is the rules and laws are to prevent the outcome, by any means, if it's happening.
This could be easily inferred from the depth, breadth, and interconnectedness of data in the website.
By downplaying it, it's allowing it to exist and do the very thing.
The issue here is that this stuff likely works despite ad blockers.
Fingerprinting technology can do a lot more than just what can be learned from ads.
From the site:
"The scan doesn’t just look for LinkedIn-related tools. It identifies whether you use an Islamic content filter (PordaAI — “Blur Haram objects, real-time AI for Islamic values”), whether you’ve installed an anti-Zionist political tagger (Anti-Zionist Tag), or a tool designed for neurodivergent users (simplify). Under GDPR Article 9, processing data that reveals religious beliefs, political opinions, or health conditions requires explicit consent. LinkedIn obtains none." https://browsergate.eu/extensions/
And probably also vibe-coded therefore 2 tabs of LinkedIn take up 1GB of RAM (was on the front page a few days back).
Anyway, what they're calling "spectroscopy", is a combination of extension probing and doing residue detection (looking for what extensions might leave behind in the DOM).
An ad blocker is not necessarily equipped to help, since the script is embedded with the application code. Since they're targeting Chrome, switching browsers will help with the probing but not the detection part, and you'll still be fingerprinted.
The only way forward is for browser vendors to offer a real privacy or incognito mode where sites are sandboxed by default. When the default profile is identical across millions of users there won't be anything unique to fingerprint.
So this is just a heads up that even if you don't have a LinkedIn account, they will create one on your behalf, so you'd better check (assuming you neither have nor want one).
Are companies now commonly uploading lists of employees to LinkedIn? Is this happening automatically because you got an e-mail account from the company and the company runs on MS Office and you're identified as an employee within it? What triggered it?
This seems like somewhat of a scandal that deserves its own post, but it also needs a lot more details to be trustworthy and for people to understand what exactly is happening.
Also, was there some way for you to take ownership of the profile? Did it depend on verifying a certain e-mail address? Does it require you to get the company to remove it, or could you take ownership and then delete the LinkedIn account/profile yourself?
Again, there's no real reporting on the internet of LinkedIn creating profiles for people without their consent. If you have any documentation and details, this is the kind of thing worth posting here in full detail and/or contacting a journalist about. Of course, if it was in the past you might not have any of that info anymore.
If anyone else has any more info on the why, please share.
> The scan doesn’t just look for LinkedIn-related tools. It identifies whether you use an Islamic content filter (PordaAI — “Blur Haram objects, real-time AI for Islamic values”), whether you’ve installed an anti-Zionist political tagger (Anti-Zionist Tag), or a tool designed for neurodivergent users (simplify).
If I had to guess: I doubt that the automatic content blurrer, neurodivergent website simplifier, or anti-Zionist tagger actually work. They're all just piggybacking on trending topics to get users to install them and then forget about them, then they exfiltrate the data when you visit LinkedIn.
It's no different from when you visit an Islamist or anti-Zionist website that has analytics/trackers/ads on it.
It's bad, but this "massive violation of trust" is happening everywhere and has been for decades. There's nothing that's unique to Microsoft here.
Do people really not remember scandals like Cambridge Analytica, and realise that these ads combined with social media feeds can be used to literally control and manipulate people's decisions and behavior?
There's a reason Facebook and YouTube just got sued for being intentionally addictive attention machines.
Facebook was a party, but not the protagonist.
- a Cambridge researcher (Aleks Kogan) created a personality quiz FB app advertised as academic research
- users had to consent to download the app
- the app nefariously scraped users' friends' data (300k users unlocked 87 million users' data)
- the information was sold to Cambridge Analytica
- who then used the information to profile American voters
LinkedIn already has all of this information from the information you feed it. Scanning for more information provides more refined views, but LinkedIn already has your graph.
> if they do a better job at showing me an ad that might be relevant to me, how is that disgusting?
To me that signalled that the author of the comment doesn't really care what is going on behind the scenes if the result is a better and more relevant ad.
I see this attitude often from people who don't seem to understand the severity and seriousness of online tracking, which leads to psychological profiling, which leads to manipulation.
> who then used the information to profile American voters
You seem to have missed off the most serious bit at the end. Cambridge Analytica then used the data to profile millions of voters, and purposefully target divisive and flammable political material to specific suggestible people in order to manipulate outcomes.
This same thing is done all the time by all tracking and ad companies. I think this thread has gone beyond just LinkedIn scanning your browser extensions.
My point is that LinkedIn already has enough information (We've willingly given them!) to manipulate outcomes and if they're doing something nefarious, then it's already too late.
Whereas Cambridge Analytica involved bad actors (not Facebook) duping customers and re-selling their data. I don't think those elements are necessarily in play here.
You'd say that's a ridiculous and illegal thing to do without your explicit consent, right?
Maybe you personally don't mind and would be happy to offer that consent. But they're doing it without your consent, regardless of whether you want it or not.
This is not. To violate trust, there should have been some.
https://www.eeoc.gov/prohibited-employment-policiespractices
LinkedIn's scanning for browser extensions used by protected groups allows them to provide illegal services to US-based recruiters. I have no idea if they actually do it or not, and am not a lawyer, but common sense suggests there's enough here for a class action suit to move into discovery.
If you mean the _browser_, then I agree in principle, but it is a browser offered to you by Alphabet, and they are known for mass surveillance and use of personal information for all sorts of purposes, including passing copies to US intelligence agencies.
But of course, this is what's promoted and suggested to people and installed by default on their phones, so even if it's Google/Alphabet, they should be pressured/coerced into respecting your privacy.
[0]https://chromewebstore.google.com/detail/anti-zionist-tag/ek...
I will work on an improvement to that extension so that it can block these scans if they attempt them in firefox.
Why should a website be able to scan for extensions at all?
Or if there's a legitimate need (like linkedin.com wants to see if you installed the linkedin extension), leave it up to the extension to decide if it wants to reveal itself. The extension can register a list of URL patterns it will reveal itself to. So the linkedin extension might reveal itself only to *.linkedin.com, a language translation extension might reveal itself to everyone, and an adblocker extension might not choose to reveal itself to anyone.
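Sketched concretely (this API is entirely hypothetical; no browser implements it, and the `reveal_to` key and matching rule are invented for the sketch):

```javascript
// Entirely hypothetical: a "reveal_to" manifest key plus the matching
// logic a browser could apply before answering a probe. The key name and
// pattern rules are invented for this sketch.
const manifest = {
  name: "Example LinkedIn Helper",
  reveal_to: ["linkedin.com", "*.linkedin.com"],
};

function shouldReveal(manifest, hostname) {
  return (manifest.reveal_to || []).some((pattern) =>
    pattern.startsWith("*.")
      ? hostname.endsWith(pattern.slice(1)) // "*.linkedin.com" matches any subdomain
      : hostname === pattern
  );
}

console.log(shouldReveal(manifest, "www.linkedin.com")); // true
console.log(shouldReveal(manifest, "evil.example.com")); // false
```

Under this scheme an ad blocker would simply ship an empty `reveal_to` list, and probes against it would look identical to the extension not being installed.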
Extensions choose which sites they're active on and whether they expose any assets. For example, some extensions modify a website's CSS by injecting their own stylesheet; that asset is public, so any website where the extension is active can call fetch("chrome-extension://<extension_id>/whatever/file/needed.css") if it knows the extension ID (fixed for each extension) and the file path to the asset. If the fetch result is a 404, it can assume the extension is not installed; if the result is 200, it can assume the extension is installed.
This is what LinkedIn is doing: they have their own database of extension IDs and a known working file path, and they just run these fetches. They have been doing it for years. I noticed it a few years back when I was developing a Chrome extension which also worked with LinkedIn, but back then it was fewer than 100 extensions scanned, so I just assumed they wanted to detect specific extensions which break their site or their terms of use. Now it's apparently 6000+ extensions...
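The technique described above reduces to a few lines. `fetchFn` is injected here so the sketch is runnable anywhere; in LinkedIn's code it's the real fetch against chrome-extension:// URLs, and the ID/file list is their database of 6000+ entries.

```javascript
// Minimal runnable sketch of the probe described above. `fetchFn` stands
// in for the browser's fetch; `targets` for LinkedIn's ID/file database.
async function probeExtensions(fetchFn, targets) {
  const installed = [];
  await Promise.allSettled(
    targets.map(async ({ id, file }) => {
      try {
        // A successful fetch of a web_accessible_resource means installed.
        const res = await fetchFn(`chrome-extension://${id}/${file}`);
        if (res && res.ok) installed.push(id);
      } catch {
        // Rejection => not installed (or the asset isn't web-accessible).
      }
    })
  );
  return installed;
}

// Demo with a stub fetch that "knows" one extension is installed.
const fakeFetch = (url) =>
  url.includes("aaaa")
    ? Promise.resolve({ ok: true })
    : Promise.reject(new Error("net::ERR_BLOCKED_BY_CLIENT"));

probeExtensions(fakeFetch, [
  { id: "aaaa", file: "inject.css" },
  { id: "bbbb", file: "inject.css" },
]).then((found) => console.log(found)); // → ["aaaa"]
```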
Sure, this can be solved at the legal layer, but in this case, there seems to be a much simpler and more effective technical solution, so why not pursue that instead?
I set up the cgroups hack so I could route traffic from a dev profile into a VPS VPN, so it may not be that useful for everyone.
But I think this is a reminder that you may want to have at least two profiles: one public and the other private. Do you really want Microsoft to know you installed the "Otaku Neko StarBlazers Tru-Fen Extendomatic" package to change every picture of a current political figure to an image from the cast of Space Battleship Yamato?
You may be interested in Qubes OS. My daily driver. Can't recommend it enough.
PS: I guess given that recruiter accounts are paid, LinkedIn is technically selling access to the data in a way
All one has to do is measure employees' LinkedIn activity. I mean, truthfully, people don't use the site at all unless they are actively looking for work; it is corporate dystopia otherwise. It is trivial to find these signals.
https://epic.org/documents/linkedin-corp-v-hiq-labs-inc/
> HiQ has created two specific data products targeted at employers: (1) “Keeper,” which informs employers which of their employees are at “risk” of being recruited by competitors; and...
My hunch is that HiQ simply looked for spikes in activity on LinkedIn as a signal for a job hunt: https://news.ycombinator.com/item?id=47566893
In any case, this lawsuit was discussed a few times on HN at the time, and IIRC there was a fair bit of support for allowing free scraping of "public information." Interesting how the sentiment here has turned these days...
The simpler explanation is that they aren't doing that.
So this probably depends on the country.
It seems to not scan for Privacy Badger and uBlock Origin, two extensions I rely on. That's...surprising.
one of the culprits is https://li.protechts.net taking 2GB ram and 8% cpu.
DDG searches say this is something for LinkedIn. I had two tabs for LinkedIn open but left behind as I opened other tabs to research.
So I had not reopened these tabs in over 9 hours and they were still just humming along, sucking down almost 10% of CPU and a couple gigs of RAM, for what?
This is Firefox with uBlock Origin. Quick searches show Malwarebytes Browser Guard considered it (protechts.net) malware for a bit and then took it off the list of things it blocked/warned about.
Not sure this is related to the scan mentioned, but it may be related to the overall concerns about data and unknown usage of resources.
I'm considering blocking this at the dns hosts level at this point.
I am a little surprised something like CORS doesn't apply to it, though.
This is fair from LinkedIn IMO, as I've seen loads of different extensions actually scraping LinkedIn session tokens or content on LinkedIn.
It's not clear though: either they only tested against Chrome-based browsers, or Firefox isn't enabling them to do so.
edit: I answered before I had fully gone through the article, but it does say it's Chrome-only.
> The extension scan runs only in Chrome-based browsers. The isUserAgentChrome() function checks for “Chrome” in the user agent string. The isBrowser() function excludes server-side rendering environments. If either check fails, the scan does not execute.
> This means every user visiting LinkedIn with Chrome, Edge, Brave, Opera, Arc, or any other Chromium-based browser is subject to the scan.
I feel like this is obvious and you know that this is the exact mistake being made, but rather than drop an actual correction, you take the insufferable approach of pretending you don't know what's happening and forming the correction as a question.
This seems to be a case where the poison seeps through the cracks, from Google and Chrome to other Chromium-based browsers. In a very literal sense, in this case, they are Chrome-based.
function a() {
  return "undefined" != typeof window && window && "node" !== window.appEnvironment;
}

function s() {
  return window?.navigator?.userAgent?.indexOf("Chrome") > -1;
}

if (!a() || !s()) return;
I'm happy to see that this doesn't hit firefox. I wonder if safari is impacted.
Is that enough blocking, I wonder?
The code filters out non-Chrome browsers:
> The extension scan runs only in Chrome-based browsers. The isUserAgentChrome() function checks for “Chrome” in the user agent string. The isBrowser() function excludes server-side rendering environments. If either check fails, the scan does not execute.
> Microsoft has 33,000 employees and a $15 billion legal budget
Microsoft has more than 220k employees (it's hard to keep track with all the layoffs), and the G&A bucket that bankrolls legal expenses (though not only that; it also contains basically every employee who's not in engineering or sales) was only $7B in 2025, so the legal budget is much lower than that.
> Every time any of LinkedIn’s one billion users visits linkedin.com, hidden code searches their computer for installed software, collects the results, and transmits them to LinkedIn’s servers
And thought, "no way in hell this gets by Safari."
And then, under "The Attack: How it Works":
> Every time you open LinkedIn in a Chrome-based browser
Shocker. If you use a Chromium-based browser, you should expect to be trading away your privacy, IME.
Here's a quick look at only the static things a website can fingerprint https://www.browserscan.net/.
> 'the term “exceeds authorized access” means to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter;'
The problem, of course, is that by clicking on a LinkedIn link, you agree to a non-negotiated contract that can change at any time, and that you have never seen. If that weren't allowed, then this sort of crap would correctly be considered "unauthorized access":
Considering the goal is to identify people, this is undeniably PII. As the article demonstrates, it also covers sensitive information.
⇒ which Chrome allows sites to do.
Essentially, they are labelling you, like most do, but against some interesting profiles given the kinds of extensions they are scanning for
It does suggest that’s what they’re collecting. That is per se a violation in many jurisdictions. It should trigger investigations in most others to ensure it wasn’t mis-used.
I wasn’t contesting that they query extensions that can be used for that purpose, or that they use query results for that purpose, but indicated that the fact that they make such queries doesn’t necessarily imply that they try to do such profiling.
>Political opinions
>LinkedIn scans for Anti-woke (“The anti-wokeness extension. Shows warnings about woke companies”), Anti-Zionist Tag (“Adds a tag to the LinkedIn profiles of Anti-Zionists”), Vote With Your Money (“showing political contributions from executives and employees”), No more Musk (“Hides digital noise related to Elon Musk,” 19 users), Political Circus (“Politician to Clown AI Filter,” 7 users), LinkedIn Political Content Blocker, and NoPolitiLinked.
>Each of these extensions reveals a political position. If LinkedIn detects any of them, it has collected data revealing that person’s political opinions. Article 9 prohibits this.
I ask because it seems like every job I apply to asks for a linkedin profile, and I've heard floating around that if it's not filled in enough most employers assume you're a bot. Heck, one of the forms from the "who's hiring" thread yesterday straight up said if you have < 100 connections they'd throw out your application. So, in order to get my foot in the door, I need to hand over vast and intricate data about my personal life to a third party?
For the broader issue of not wanting to give even the information you'd need to choose to share to LinkedIn? Network the good ol' fashioned way: talking to random strangers in San Francisco bars.
Uh what.
Everyone from the suit that made the ultimate calls down to the lowest code monkey who bugfixed such features are responsible for their choice to target the good, common user of the internet. I'm not asking for altruism, I just think people shouldn't choose to do evil, and that those who do anyway should be recognized as such.
Seconding not having a ton of extensions. Extensions can do fishy things.
This is Chrome’s broken model. Before installing an extension, one should be able to see all the domains an extension talks to.
The domains should be listed in manifest. But that’s not how it works.
In Android, every app you open needs a gazillion default permissions.
There's a reason I continue to use Firefox (with uBlock Origin) and will never switch.
Also, when I got laid off from a previous job, I made a LinkedIn profile to help find a new job. Once I found a new job, I haven't logged into LinkedIn since - that was almost 2 years ago.
https://git.gay/SiteRelEnby/browsergate-list
https://git.gay/SiteRelEnby/browsergate-list/src/branch/main...
* Anti-Zionist Tag (directly inferring political opinion)
* PordaAI (Islamic content filter)
* simplify (browsergate.eu specifically called out as a neurodivergent accessibility tool. Job search autofill that markets itself as particularly useful for people who struggle with forms)
* No more Musk ("Hides digital noise related to Elon Musk")
* Political Circus ("Politician -> Clown AI Filter")
* Job application trackers and utils ("Job Follow-Up Tracker" etc)
* Various "Distraction Blocker" type addons
LinkedIn scanning for tools that scrape LinkedIn:
* LinkedIn Cookie Sync for Headhunting Agent
* LinkedIn Cookie importer for Derrick (lol "for Derrick")
* MailMatics Cookie Grabber
* LinkedIn Fake Job Post Detector. Yes, they're detecting an addon that exposes fake job postings on their own platform.
*NOT* in the list, if you were wondering:
* Shinigami Eyes
* Dark Reader
* Adblockers
* Password managers
* FoxyProxy
* User-Agent spoofers, request modification tools, etc
* Most privacy/security tools (no uBO, no Privacy Badger, no FoxyProxy, no NoScript, etc.)
For the latter category, the most interesting things we found that *were* searched for are BuiltWith Technology Profiler, and some browser addons bundled from scanners (e.g. "Malwarebytes Browser Guard Beta").
A lot of Zionists claim -- incorrectly -- that all Jews are Zionists. But certainly the major groups of Zionists are Christian Zionists and Jewish Zionists. I would say there is a very, very high chance that if you use the Anti-Zionist Tag Chrome extension, you are Jewish.
So it seems quite likely that Linkedin is actually tracking Jews with this.
I'm not convinced by their page explaining "Why it's illegal and potentially criminal" [0]. It's written by security researchers and non-attorneys.
For example, this characterization seems overly broad:
> The Court of Justice of the European Union has ruled, in three separate cases, that data which allows someone to infer or deduce protected characteristics is covered by this prohibition, regardless of whether the company intended to collect sensitive data.
How much is that currently? $600M?
I hope browsers in the future will need to ask for permission before doing any of that.
It's nothing to do with the specific house you live in, and everything to do with the activity being grouped together with all the other activity you have done, which they know from fingerprinting and IP addresses.
They don't need to know where you live to have a very accurate personal and psychological profile on you, and switching browsers is not going to help with that in the slightest, I'm afraid.
Realistically you're probably exposed and identified. But if you're meticulous and careful, you might not be, or at least not as completely as someone who is unaware or not careful. But it's not at all the same as if, say, a state actor was motivated to spy on you specifically.
2020 - LinkedIn Sued For Spying on Clipboard Data After iOS 14 Exposes Its App:
https://wccftech.com/linkedin-sued-for-spying-on-clipboard-d...
2013 - LinkedIn MITM attacks your iPhone to read your mail:
https://www.troyhunt.com/disassembling-privacy-implications-...
2012/2016 - Data breach of 164.6 million accounts:
https://haveibeenpwned.com/breach/LinkedIn
According to haveibeenpwned.com, my email & password were leaked in both the 'May 2012' and 'April 2021' LinkedIn incidents.
LinkedIn is getting nothing.
No it isn't. Performing fingerprinting on users' devices, to ultimately profit from it financially or worse, is misleading. Especially doing this while knowing the user isn't aware what this really means, and just deciding it for them.
The headline is just an exaggerated way of saying what is really happening.
That seems like the most obvious use case? Or maybe I missed something in the write up.
Firefox with a non-default profile can be created like this:

  ./firefox -CreateProfile "profile-name /home/user/.mozilla/firefox/profile-dir/"
  # For LinkedIn that would be:
  ./firefox -CreateProfile "linkedin /home/user/.mozilla/firefox/linkedin/"

And you can launch it like this:

  ./firefox -profile "/home/user/.mozilla/firefox/profile-dir/"
  # For LinkedIn that would be:
  ./firefox -profile "/home/user/.mozilla/firefox/linkedin/"
So, given that /usr/bin/firefox is just a shell script, you can:
- create a copy of it, say, /usr/bin/firefox-linkedin
- adjust the relevant line, adding the -profile argument
If you use an icon to run Firefox (say, /usr/share/applications/firefox.desktop), you'll need to copy and adjust the line for the icon as well. Of course, "./firefox" in the examples above should be replaced with the actual path to the executable; for a default installation of Firefox that would be the /usr/bin/firefox script.
So, you can have separate profiles for anything sensitive/invasive (LinkedIn, shops, etc.) and then a separate profile for everything else.
And each profile can have its own set of extensions.
> Microsoft has 33,000 employees
this should probably be LinkedIn, not Microsoft.
I really don't think they're "illegally" searching your computer; they're checking for sloppy extensions that let LinkedIn know they're there because of bad design.
This feels very similar, except now it's taking a swing at Microsoft. It's apparently paid for by some mysterious "trade association and advocacy group for commercial LinkedIn users" that runs out of a private PO box in a small German town - uh huh. I'm not going to feel bad for Microsoft, but I would love to read some investigative reporting down the line.
https://www.linkedin.com/pulse/how-linkedin-knows-which-chro...
As an end user I could not find an option to open the side panel
With that said, the Chrome Web Store ecosystem has bigger problems in front of it. For example, loads of extensions outright send every URL you visit (including query params) over to their servers. Things like this just shouldn't happen; imagine you installed an extension a few years back and forgot about it. That's what happened to me with WhatRuns, which also scraped my AI chats.
I'm working on a tool to let people scan their extensions (https://amibeingpwned.com/) and I've found some utterly outrageous vulnerabilities, widespread affiliate fraud and widespread tracking.
It's either the extension's choice to become detectable ("externally_connectable" is off by default) or it makes unique changes to websites that allow for its detection.
All of these are opt-in by the extensions, and MV3 actually forces you to specify which domains can access your extension. So, again, each extension must explicitly allow the web to find it.
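As a sketch of the second mechanism: an extension that lists a file under web_accessible_resources can be probed from any page simply by attempting to load that resource from its chrome-extension:// URL. The extension ID and resource path below are placeholders, not any real extension:

```javascript
// Build the probe URL for a given extension ID and web-accessible resource.
function probeUrl(extensionId, resource) {
  return `chrome-extension://${extensionId}/${resource}`;
}

// Try to load the resource from a page; a successful load means the
// extension is installed and exposes it. Browser-only: Image does not
// exist outside the DOM.
function probeExtension(extensionId, resource) {
  return new Promise((resolve) => {
    const img = new Image();
    img.onload = () => resolve(true);   // resource loaded: extension present
    img.onerror = () => resolve(false); // blocked or absent
    img.src = probeUrl(extensionId, resource);
  });
}

// Usage in a page (placeholder ID, not a real extension):
if (typeof Image !== 'undefined') {
  probeExtension('aaaabbbbccccddddeeeeffffgggghhhh', 'icon.png')
    .then((installed) => console.log('installed?', installed));
}
```

This is also why MV3's use_dynamic_url option for web_accessible_resources exists: it randomizes the resource URL per session, defeating exactly this kind of probing for extensions that opt into it.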
And not letting you read your messages on your mobile phone unless you use their app is particularly mean, considering again where they are sending all the information they scrape.
Use Safari or Firefox, and Chrome only for incognito web app testing.
I know there has been other LinkedIn hate on HN this week. I know they have some good tools for job searching and hiring. I still wish we as a society could move on and leave this one with MySpace.
My guess: LinkedIn has been used for years as a source of valuable information for phishing/spear-phishing.
Maybe their motive is really spying. But more important for them is to fight against people botting LinkedIn.
Imho, browser fingerprinting should be banned and the EU should require browser companies to actively fight against it, not help it (Fu Google).
> Every time any of LinkedIn’s one billion users visits linkedin.com, hidden code searches their computer for installed software
and then proceeds not to explain how it’s doing that to me, a Safari user.
Because, spoiler: it isn’t. Or, it might try to search, and fail, and nothing will be collected.
Literally 2 days ago, I submitted a post, LinkedIn "final decision", restricting my account and making me feel unheard[0], explaining some of the worst customer support I have seen.
I wish I could give a TL;DR, but essentially LinkedIn will simply reject your account or give you an immense headache if your IDs aren't detected by Persona (Persona is a really shady company in and of itself, with really not the best security practices). I actually lost count of how many times their customer support just responded with a bland message and didn't even read mine.
This is why, frustrated by all of this, I actually sent LinkedIn customer support a message saying that I don't feel heard and that I want to be heard by a human. Especially when they were asking ME to go to a public notary to sign an affidavit to get back a 1-day-old restricted LinkedIn account (oh, btw, it's also illegal for a minor to sign before a public notary in my country the way they described; I mentioned that as many times as I could, and that I am willing to share my ID, like Aadhaar, with them, but they genuinely don't read your messages).
Honestly, my experience just says that there is no human customer support at LinkedIn; it's really a customer-support nightmare worse than even some of the telecom horror stories. Perhaps I should contact browsergate.eu to see whether my incident within my country could also be a matter of legality; essentially I was cooperating with them to provide any document I could reasonably provide, but the LinkedIn forms and everything redirect to 404 as well. You can read my experience in depth, but it really shows LinkedIn customer support being so unhelpful that you question how a company can be so bad. I wish for more ethical alternatives to LinkedIn and its nightmare to appear in this space.
(I also had a minor idea of testing whether LinkedIn support reads my messages: as I told them that I feel unheard and would like them to show they are reading my messages, I asked that, if they are actually reading, they respond with the value of 351/13, and I asked the person who joined the chat why they joined LinkedIn. Just one line would have sufficed to know if I was talking to a human. They did not respond to any of this and, as far as I can tell, pasted another pre-generated response without hearing me.)
[0]: https://news.ycombinator.com/item?id=47586760 (https://smileplease.mataroa.blog/blog/linkedin/)
This reminds me of the slop bug reports plaguing the curl project.
Different browsers have various settings available, but do we have a Little Snitch for a web browser?
I am not a lawyer, but site stability seems like a GDPR "Legitimate Interest" in my book anyway.
HOLD EXECS LEGALLY ACCOUNTABLE, CRIMINALLY AND CIVILLY, FOR THE CRIMES OF THEIR CORPORATIONS.
OMG, is literally every article written with LLMs these days? I just can't anymore. It's all so tiring.
Would you like me to suggest some AI summarizer tools you could use to more efficiently read AI generated content in the meantime?
Yes. Resistance puts the possibility of hugs on the stool, so to speak.
I get it... I'm not a good writer. It just sucks that now people are going to assume the stuff I said isn't even me.
I guess I always scored pretty low on the Turing test and never even knew it.
The language is natural. Normal. Human. Who could question its authenticity?
The original example isn't the worst offender, but even small offenders stick out when you can't escape seeing this kind of thing everywhere.
I am exhausted by so many people calling writing out as AI without sufficient proof other than writing style. Some things are more obvious, sure... maybe I'm just too stupid to see a lot of the rest of it? But so much of what gets called out seems incredibly familiar to me compared with traditional print media I've been reading my entire life.
I'm starting to wonder if a lot of people just have poor literacy skills and are knee-jerk labeling anything that looks well written as AI.
I don't think I've personally seen a single false positive on HN. If anything, too much slop goes through uncontested.
It's actually insane opening up /r/webdev and similar subreddits and seeing dozens of AI authored posts with 50+ comments and maybe a single person calling it out. Makes me feel crazy. It's not as much of a problem here, but there is absolutely a writing style that suddenly 50% of submissions are using. It's always to promote something and watching people fall for it over and over again is upsetting.
I find myself doing this a lot, and I’m sure even more slips without my notice.
What matters is the content!
What's next? "There's punctuation in the sentence, must be AI" ?
These aren't good people, but if you make the fine for the organisation much more expensive than the expected return, lock up the whole board, and leave their families without a pot to piss in, we will see this become the exception instead of the norm.