Full version here: https://eprint.iacr.org/2025/794.pdf
We didn't review the entire source code, only the cryptographic core. That said, the main issue we found was that the WhatsApp servers ultimately decide who is and isn't in a particular chat. Dan Goodin wrote about it here: https://arstechnica.com/security/2025/05/whatsapp-provides-n...
https://cybersecuritynews.com/track-android-users-covertly/
An audit of "they can't read it cryptographically", but the app itself can read it, and the app sends data in all directions. Push notifications can be used to read messages.
Are you trying to imply that WhatsApp is bypassing e2e messaging through Push notifications?
Unless something has changed, this table highlights that both Signal and WhatsApp are using a "Push-to-Sync" technique to notify about new messages.
https://crysp.petsymposium.org/popets/2024/popets-2024-0151....
It is reproducibly loaded in each chat, and a MitM firewall can also confirm that. I don't know why the focus of audits like these is always on a specific part of the app, or only on the cryptography, and not on the overall behavior: what is leaked and transferred over the wire, and potential side-channel or bypass attacks.
Transport encryption is useless if the client copies the plaintext of the messages afterwards to another server, or say an online service for translation, you know.
thank you for your work.
I’ve been looking for this everywhere the past few days but I couldn’t find any official information relating the use of https://signal.org/docs/specifications/pqxdh/ in the signal protocol version that WhatsApp is currently using.
Do you have any information if the protocol version they currently use provides post-quantum forward secrecy and SPQR or are the current e2ee chats vulnerable to harvest now, decrypt later attacks?
Thanks for your time.
Private keys, probably not. WhatsApp is E2EE, meaning your device generates the private key with the OS's CSPRNG. Like I also said above, exfiltration of signing keys might allow a MITM, but that's still possible to detect, e.g. if you RE the client and spot the code that does it.
Then it's not fully investigated. That should put any assessments to rest.
Because if you had, you would realize how ridiculous it is to state that app security can't be assessed until you have read 100% of the code.
That's like saying "well, we don't know how many other houses in the city might be on fire, so we should let this one burn until we know for sure"
This must mean that you have been paid not to understand these things. Or perhaps you would be punished at work if you internalized reality and spoke up. In either case, I don't think your personal emotional landscape should take precedence over things that have been proven and are trivial to demonstrate.
I'd much rather not have blind faith in WhatsApp doing the right thing, and instead just use Signal so I can verify myself that its key management is doing only what it should.
Speculating over the correctness of the E2EE implementation isn't productive; the metadata leakage we know Meta takes full advantage of is already enough reason to stick to proper platforms like Signal.
Not quite true with Signal's double ratchet though, right? Because keys are routinely getting rolled, you have to continuously exfiltrate the new keys.
Last time I checked, WhatsApp features no fingerprint-change warnings by default, so users will not even notice if you MITM them. The attack I described is for situations where the two users enable non-blocking key-change warnings and try to compare the fingerprints.
Not saying this attack happens by any means. Just that this is theoretically possible, and leaves the smallest trail. Which is why it helps that you can verify on Signal it's not exfiltrating your identity keys.
This DCL could be fetching some forward_to_NSA() function from a server and registering it to be called on every outgoing message. It would be trivial to hide in tcpdumps, best approach would be tracing with Frida and looking at syscalls to attempt to isolate what is actually being loaded, but it is also trivial for apps to detect they are being debugged and conditionally avoid loading the incriminating code in this instance. This code would only run in environments where the interested parties are sure there is no chance of detection, which is enough of the endpoints that even if you personally can set off the anti-tracing conditions without falling foul of whatever attestation Meta likely have going on, everyone you text will be participating unknowingly in the dragnet anyway.
https://developer.android.com/privacy-and-security/risks/dyn...
I wonder if that would deter Meta.
Yeah, I'd imagine it would have been found by now. Then again, who knows when they'd add it, and whether some future update removes it. Google isn't scanning every line of every version. I prefer to eliminate this kind of 5D guesswork categorically, and just use FOSS messaging apps.
So the point other commenters are making is that you can verify all you want that the encryption is robust and secure, but that doesn't mean the app can't just send a copy of the info to a server somewhere after it has been decoded.
Detecting backdoors is only truly feasible with open source software, and even then it can be difficult.
A backdoor can be a subtle remote code execution "vulnerability" that can only be exploited by the server. If used carefully, and if it exfiltrates data within expected client-server communications, it can be all but impossible to detect. This approach also makes it more likely that almost no insider will even be aware of it: it could be a small patch applied during the build process, or to the binary itself (for example, to a bounds-check branch). This is also another reason why reproducible builds are a good idea for open source software.
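To give a feel for how small such a bugdoor can be, here is a toy Heartbleed-style over-read where the entire "backdoor" is one patched-out branch. Everything here (the simulated memory, the function, the flag) is invented for illustration:

```python
# Simulated process memory: the request buffer sits next to unrelated secrets.
memory = b"PING" + b"\x00" * 4 + b"SECRET_SESSION_KEY"

def heartbeat(request: bytes, claimed_len: int, bounds_check: bool) -> bytes:
    # The request is "written" at the start of memory and the reply echoes
    # claimed_len bytes back. The only difference between the honest build
    # and the backdoored build is whether the single length check runs.
    buf = request + memory[len(request):]
    if bounds_check and claimed_len > len(request):
        raise ValueError("length exceeds payload")
    return buf[:claimed_len]

assert heartbeat(b"PING", 4, True) == b"PING"   # honest build
leak = heartbeat(b"PING", 26, False)            # branch patched out
assert b"SECRET_SESSION_KEY" in leak            # adjacent memory leaks
```

In a binary, flipping that branch is a one- or two-byte patch, which is exactly why verifying that the shipped binary matches the audited source matters.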
This is absurd. Detecting backdoors is only truly feasible on binaries, there's no way you can understand compiler behavior well enough to be able to spot hidden backdoors in source code.
The claim Stallman would make (after punishing you for an hour for saying Open Source instead of Free Software) is that Closed Software (Proprietary Software) is unjust. But in the context of security, the claim would be limited to Free Software being capable of being secure too.
You may be able to argue that Open Source reduces risk in threat models where the manufacturer is the attacker, but in any other threat model, security is an advantage of closed source. It's automatic obfuscation.
There's a lot of advantages to Free Software, you don't need to make up some.
It was a pretty much settled argument 10 years ago, even before the era of LLVM lifters; post-LLM, the standard of care is often full recompilation and execution.
I think there's a lot of historical evidence that doesn't support this position. For instance, Internet Explorer was generally agreed by all to be a much weaker product from a security perspective than its open source competitors (Gecko, WebKit, etc).
Nobody was defending IE from a security perspective because it was closed source.
The difference between obscurity and a secret (password, key, etc.) is the difference between less than a year to figure it out and a year or more to figure it out.
There is a surprising amount of software out there with obscurity preventing some kind of "abuse" and in my experience these features are not that strong, but it takes someone like me hours to reverse engineer these things, and in many cases I am the first person to do that after years of nobody else bothering.
I love the Rob Joyce quote that explained why TAO was so successful: "In many cases we know networks better than the people who designed and run them."
"Is an unbreakable security mechanism" vs. "Improves security"
Anything that complicates an attacker's job improves security, at least to a first approximation. That said, there might be counter-effects that make it a net loss or net neutral.
It could be interleaved in any of the many analytics tools in there too.
You have to trust the client in E2E encryption. There's literally no way around that. You need to trust the client's OS (and in some cases, other processes) too.
Vastly easier than spotting a clever bugdoor in the source code of said app.
Not having the source code hasn't stopped people from finding exploits in Windows (or even hardware attacks like Spectre or Meltdown). Having source code didn't protect against Heartbleed or log4j
I'd conclude it comes down to security culture (look how things changed after the Trustworthy Computing initiative, or OpenSSL vs LibreSSL) and "how many people are looking" -- in that sense, maybe "many eyes [do] make bugs shallow" but it doesn't seem like "source code availability" is the deciding factor. Rather, "what are the incentives" -- both on the internal development side and the external attacker side
But it's still possible. And analyzing source code is still hard.
Does this rewording reflect its meaning?
"You don't actually need code to evaluate security, you can analyze a binary just as well."
Because that doesn't sound correct?
But that's just my first pass, at a high level. Don't wanna overinterpret until I'm on surer ground about what the dispute is. (i.e. don't want to mind read :) )
Steelman for my current understanding is limited to "you can check if it writes files/accesses network, and if it doesn't, then by definition the chats are private and it's secure", which sounds facile. (Presumably something is being written to somewhere for the whole chat thing to work; it can't be pure P2P because someone's app might not be open when you send.)
Whether the original comment knows it or not, Stallman greatly influenced the very definition of Source Code, and the claim being made here is very close to Stallman's freedom to study.
>"You don't actually need code to evaluate security, you can analyze a binary"
Correct
>"just as well"
No, of course analyzing source code is easier and analyzing binaries is harder. But it's still possible (feasible is the word used by the original comment)
>Steelman for my current understanding is limited to "you can check if it writes files/accesses network, and if it doesn't, then by definition the chats are private and its secure",
I didn't say anything about that? I mean those are valid tactics as part of a wider toolset, but I specifically said binaries, because the binary maps one to one with the source code. If you can find something in the source code, you can find it in the binary, and vice versa. Analyzing file accesses and networks, or runtime analysis of any kind, is going to be mostly orthogonal to source code/binary static analysis, the only difference being whether you have a debug map to the source code or to the machine code.
This is a very central conflict of Free Software. What I want to make clear is that Free Software refuses to study closed source software not because it is impossible, but because it is unjustly hard. Free Software never claims it is impossible to study closed source software; it claims that source code access is a right, and its adherents prefer to reject closed source software outright, and thus never need to perform binary analysis.
The way they analyze binaries now is by using the textual interfaces of command-line tools, and the tools used are mostly the ones supported by foundation models at training time; mostly you can't teach a model new tools at inference, they must be supported at training. So most providers are focused on the same tools and benchmark against them, and binary analysis is not in the zeitgeist right now; it's about production more than understanding.
Also MCP is not an Agent protocol, it's used in a different category. MCP is used when the user has a chatbot, sends a message, gets a response. Here we are talking about the category of products we would describe as Code Agents, including Claude Code, ChatGPT Codex, and the specific models that are trained for use in such contexts.
The idea is that of course you can tell it about certain tools in inference, but in code production tasks the LLM is trained to use string based tools such as grep, and not language specific tools like Go To Definition.
My source on this is Dax who is developing an Open Source clone of Claude Code called OpenCode
You say they're better with the tools they're trained on. Maybe? But if so, not much. And maybe not, because custom tools are passed as part of the prompt, and prompts go a long way to override training.
LLMs reason in text. (Except for the ones that reason in latent space.) But they can work with data in any file format as long as they’re given tools to do so.
Like, it's assuredly harder for an agent than having access to the code, if only because there's a theoretical opportunity to misunderstand the decompile.
Alternatively, it's assuredly easier for an agent because given execution time approaches infinity, they can try all possible interpretations.
From a business standpoint they don't have to read these messages, since the WhatsApp Business API provides the necessary funding for the org as a whole.
Thanks.
WhatsApp uses key transparency. Anyone can check what the current published keys for a user are, and be sure they get the same value as any other user. Specifically, your WhatsApp client checks that these keys are the right keys.
Whatsapp has a blog post with more details available.
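The core intuition of key transparency can be sketched in a few lines. This is only the consistency-check idea, not WhatsApp's actual auditable-key-directory protocol (which uses a verifiable log, not a bare hash):

```python
import hashlib

def fingerprint(identity_key: bytes) -> str:
    # If two clients (or an outside auditor) compute the same digest of a
    # user's published identity key, the server cannot have served them
    # different keys (a split-view MITM) without detection.
    return hashlib.sha256(identity_key).hexdigest()[:16]

alice_sees = fingerprint(b"bob-identity-key-v1")
bob_publishes = fingerprint(b"bob-identity-key-v1")
assert alice_sees == bob_publishes  # consistent views across observers
```

The real system makes this check automatic and auditable over time, so the server can't quietly swap keys between lookups either.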
None of this makes the point you want to make. Being a former engineer. The team making "so much effort". You "knowing for sure". As with many things in security, a single hole is all it takes for your privacy to pour out of your metaphorical bag of sand.
Besides, I get the feeling we're so cooked these days from marketing that when I get freaked out that an advert matches what I was thinking about, it's probably because they made me think about it in the first place.
Or maybe I need to update my meds?
Another comment above mentions that you can recover conversation histories with just your phone number--if that's true then yup. The E2EE is all smoke and mirrors.
Nobody would ever create a SendPlainTextToZuck() function that had to be called on every message.
It would be as simple as using a built-in PRNG for client-side key generation and then surreptitiously leaking the initial state (dozens of bytes) once, in a nonce or a signature or something, when authenticating with the server.
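A toy version of that leak, assuming a non-cryptographic PRNG whose whole state fits in the seed (everything here, including the nonce layout, is invented for illustration):

```python
import random

# Kleptographic sketch: the client derives its "private" key from a weak
# PRNG and smuggles the seed inside an innocuous-looking nonce; the server
# re-derives the key offline. No key material crosses the wire directly.
seed = 0xDEADBEEF
client_key = random.Random(seed).randbytes(32)

nonce = seed.to_bytes(8, "big") + b"\x00" * 8  # seed hidden in a "nonce"

recovered_seed = int.from_bytes(nonce[:8], "big")
assert random.Random(recovered_seed).randbytes(32) == client_key
```

On the wire this is indistinguishable from an honest random nonce, which is exactly why "the traffic looks clean" is weak evidence on its own.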
Here it might be: This analytics package is dynamically loaded at runtime, because reasons. This abuse-flagging and review system is bundled with analytics, because reasons. This add-on reconfigures how the analytics package behaves at runtime and has a bunch of switches nobody remembers the purpose of, but don't touch them, they're fragile.
> There’s a lawsuit against WhatsApp making the rounds today, claiming that Meta has access to plaintext. I see nothing in there that’s compelling; the whole thing sounds like a fishing expedition.
https://bsky.app/profile/matthewdgreen.bsky.social/post/3mdg...
At this point, I won’t trust anything short of this on the front page of an SEC filing, signed by zuck and the relevant management chain:
“The following statement is material to earnings: Facebook has never (since E2EE was rolled out) and will never access messages sent through whatsapp via any means including the encryption protocol, application backdoor moderation access or backup mechanisms. Similarly, it does not provide third parties with access to the methods, and does not have the technical capability to do so under any circumstances.”
1. as others have said, they could be collecting the encrypted messages and then trying to decrypt them later using quantum computing; the Chinese have reportedly been trying to do this for many years now.
2. with metadata and all the information from other sources, they could infer what the conversation is about without needing to decrypt it: if I visit a page (Facebook cookies, so they know), then I send a message to my friend John, and then John visits the same page (again, cookies), they can be pretty certain that the content of the message was me sharing the link.
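That inference needs no decryption at all; a toy timestamp correlation (the window and event times are made up) is enough to make the point:

```python
def likely_shared(sender_visit_t, message_t, recipient_visit_t, window=600):
    # If the sender viewed a page shortly before messaging, and the
    # recipient viewed the same page shortly after, infer the link was
    # shared in the (still-encrypted) message.
    return (0 <= message_t - sender_visit_t <= window
            and 0 <= recipient_visit_t - message_t <= window)

assert likely_shared(100, 105, 110)         # visit -> message -> visit
assert not likely_shared(100, 105, 10_000)  # recipient visit too late
```

With cookies on both ends plus message timing, the ciphertext in the middle barely matters for this class of inference.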
I no longer work at Meta, but in my mind a more likely scenario than (1) is: a senior engineer proposes a 'Decryption at Scale' framework solely to secure their E6 promo, and writes a 40-page Google Doc to farm 'direction' points for PSC. Five PMs repost this on Workplace to celebrate the "alignment" so they can also include it in their PSCs.
The TL and PMs immediately abandon the project after ratings are locked because they already farmed the credit for it. The actual implementation gets assigned to an E4 bootcamp grad who is told by a non-technical EM to pivot 3 months in because it doesn't look like 'measurable impact' in a perf packet. The E4 gets fired to fill the layoff quota and everyone else sails off into the sunset.
They can analyze receivers' data, or receivers' contact-tree data, which is easier to access.
The number of free or paid data sources is daunting.
If you never give WhatsApp a cryptographic identity then what key is it using? How are your messages seamlessly showing up on another device when you authenticate? It’s not magic, and these convenience features always weaken the crypto in some way.
WhatsApp has a feature to verify the fingerprint of another party. How many people do you think use this feature, versus how many people just assume they're safe because they read that WhatsApp has E2EE?
But you never know.
The reality is that most users do not seem to care. For many, WhatsApp is simply “free SMS,” tied to a phone number, so it feels familiar and easy to understand, and the broader implications are ignored.
The government is pretty harsh when they find out you lied under oath. Corporate officers do not lie to the government frequently.
Not two months ago I sent a friend a single photo of some random MacGyver kitchen contraption I made. Never described it, just a photo with a "lol". He replied "lol". He never reshared it nor discussed it with anyone else. We never spoke about this before or after. Two days later he starts seeing ads on Facebook for a proper version of the same thing. There's virtually no other explanation except for Meta vacuuming up and analyzing the photo. None.
But I don't think WhatsApp takes many steps to protect media, and in many cases the user really prioritizes being able to back up media or share it in other apps, etc., over security for media.
Compromise of the client side application or OS shouldn't break the security model.
This should be possible with current API's, since each message could if needed simply be a single frame DRM'ed video if no better approach exists (or until a better approach is built).
I don't really see how it's possible to mitigate client compromise. You can decrypt stuff on a secure enclave but at some point the client has to pull it out and render it.
Easy: pass laws requiring chat providers to implement interoperability standards so that users can bring their own trusted clients. You're still at risk if your recipient is using a compromised client, but that's a problem that you have the power to solve, and it's much easier to convince someone to switch a secure client if they don't have to worry about losing their contacts.
In Europe that's called the Digital Markets Act.
But in a way, I feel like sometimes it makes sense to not completely open everything. Say a messaging app, it makes sense to not just make it free for all. As a company, if I let you interoperate with my servers that I pay and maintain, I guess it makes sense that I may want to check who you are before. I think?
- Facebook can still control the identity, but there needs to be a legal recourse for getting banned, and their policies can't discriminate against viewpoints, for example
- The client specs should be open so that an alternate client can be implemented (sort of like how Telegram is currently)
Agreed.
> The client specs should be open so that an alternate client can be implemented
An example that comes to mind is Signal, where they don't want that. They get a lot of criticism for it of course, but I think the reasoning actually makes sense: in terms of security, allowing third-party clients is a security risk. If your threat model is "people who risk their life using it", it makes sense, right?
Under the EU's Digital Markets Act, WhatsApp is considered a gatekeeper (Signal is not) and has to be open to interoperability. It seems like they do audit the implementations in order to make sure that the security is not too bad. Which makes sense again, but has a cost. For Meta, that's fine. For Signal... I don't know.
Also WhatsApp will - if I understand correctly - make it very clear that you are talking to someone on a third-party client (and again they get a lot of criticism for that). But I think it makes sense... If WhatsApp was so open that every second client was pretty much a spyware, that would defeat the purpose of E2EE messaging.
Not that I strongly disagree, but just saying that it seems... complicated.
E2EE is about secure transport between the endpoints. What happens to the message after the endpoint is not something an app can feasibly enforce. Having control of the clients can at most do things like enforcing deletes, which IMO is not a good idea anyway.
> every second client was pretty much a spyware
Very few people will actually use one since the official app won't be outwardly too hostile, and those who do should be sufficiently discerning.
Methinks you put far too much faith in the government, at least from my understanding of the history of cybersecurity :)
You could of course offload plaintext input and output along with cryptographic operations and key management to separate devices that interact with the networked device unidirectionally over hardware data diodes, that prevent malware from getting in or getting the keys out.
Throw in some v3 Onion Services for p2p ciphertext routing, and decent ciphersuite and you've successfully made it to at least three watch lists just by reading this. Anyway, here's one I made earlier https://github.com/maqp/tfc
Think of the way DRM'ed video is played. If the media player application is compromised, the video data is still secure. That's because the GPU does both the decryption and rendering, and will not let the application read it back.
You could put the entire app within TrustZone, but then you're not trusting the app vendor any less than you were before.
You don't build defense-in-depth by assuming something can't be compromised.
This was 2025. I'm excited for what 2026 will bring. Things are moving fast indeed.
If you are sophisticated enough to understand, and want, these things (and I believe that you are) …
… then why would you want to use WhatsApp in the first place?
And the network effect of whatsapp (3 billion users) seems currently the best route to that.
In the universe where they are the same entity (walled-gardens) there is only the middleman.
In such cases you either trust them or you don’t, anything more is not required because they can compromise their own endpoints in a way you can not detect.
Quick, someone set up a Kalshi or Polymarket or whatever claiming that WhatsApp isn't E2EE.
I'll gladly bet against the total volume of people that believe it isn't E2EE -- it'll be an easy 2x for you or me.
The client is open source. It's trivial to verify this is 100% factually happening. They have access to every group message. Every desktop message. Every message by default. If you enable secret chats for 1:1 mobile chats, you are now disclosing to Telegram you're actively trying to hide something from them, and if there ever was metadata worth it for Keith Alexander to kill someone over, it's that.
>they seem less cooperative and I never got the notion that they ever read private messages until the Macron incident
We have no way to verify Telegram isn't a Russian OP. I'd love to say Pavel Durov fled for his life into exile https://www.nytimes.com/2014/12/03/technology/once-celebrate...
But the "fugitive" has since visited Russia over SIXTY times https://kyivindependent.com/kremlingram-investigation-durov/
Thus, I wouldn't be as much concerned about what they're handing EUROPOL, but what they're handing FSB/SVR.
Even if Telegram never cooperated with Russian intelligence, who here thinks the Telegram team, which can't pull off the basic thing of "make everything E2EE" that nearly all of its competition has successfully done, can harden their servers against Russian state-sponsored hackers like Fancy Bear, who obviously would never make noise about a successful breach and data exfiltration?
>How come they are able to be this exception despite not having end to end encryption by default?
They've pushed out a lie about storing cloud chats across different servers in different jurisdictions. Maybe that scared some prosecutors off. Or maybe FVEY is inside TG's servers too, and they don't like the idea of going after users, as that would incentivize deployment of usable E2EE.
Who knows. Just use Signal.
My money is on the chats being end to end encrypted and separately uploaded to Facebook.
That's a cute loophole you thought up, but whatsapp's marketing is pretty unequivocal that they can't read your messages.
>With end-to-end encryption on WhatsApp, your personal messages and calls are secured with a lock. Only you and the person you're talking to can read or listen to them, and no one else, not even WhatsApp
That's not to say it's impossible that they are secretly uploading your messages, but the implication that they could be secretly doing so while not running afoul of their own claims because of cute word games, is outright false.
well that's alright then
facebook's marketing and executives have always been completely above board and completely honest
>That's not to say it's impossible that they are secretly uploading your messages, but the implication that they could be secretly doing so while not running afoul of their own claims because of cute word games, is outright false.
And humans aren't great at keeping secrets.
So, if the claim is that there's a bunch of data, but everyone who is using it to great gain is completely and totally mum about it, and no one else has ever thought to question where certain inferences were coming from, and no employee ever questioned any API calls or database usage or traffic graph.
Well, that's just about the best damn kept secret in town and I hope my messages are as safe!
And I'm no fan of Meta...
AFAIK that was a separate app, and it was pretty clear that it was MITMing your connections. It's not any different than say, complaining about how there weren't any whistleblowers for fortinet (who sell enterprise firewalls).
>scanning other installed mobile applications
Source?
They most likely mean their service or their employees, but this appears to be marketing fluff and not an enforceable statement.
There's the conspiracy theory about mentioning a product near the phone and then getting ads for it (which I don't believe), but I feel like I've mentioned products on WhatsApp chats with friends and then got an ad for them on Instagram sometime after.
Also claiming "no one else can read it" is a bit brave, what if the user's phone has spyware that takes screenshots of WhatsApp... (Technically of course it's outside of their scope to protect against this, but try explaining that to a judge who sees their claim and the reality)
You mention something so you're thinking about it, you're thinking about it probably because you've seen it lately (or it's in the group of things local events are making you think about), and then later you notice an ad for that thing and because you were thinking about it actually notice the ad.
It works with anything in any media form. Like I've had it where I hear a new thing and suddenly it turns up in a book I'm reading as well. Of course people discount that because they don't suspect books of being intelligent agents.
EDIT: Baader-Meinhof phenomenon. I think anyone can be forgiven for not remembering that name.
Well you sure as hell should. Both Google and Apple are making class action settlement payments right now for this very thing.
https://www.bbc.com/news/articles/c4g38jv8zzwo
https://www.nbcchicago.com/news/local/payments-begin-in-95m-...
https://www.404media.co/heres-the-pitch-deck-for-active-list...
>https://www.nbcchicago.com/news/local/payments-begin-in-95m-...
Both are for voice assistants that inadvertently got activated. Extending it to imply that they're intentionally deceiving their users is a stretch.
>https://www.404media.co/heres-the-pitch-deck-for-active-list...
It's a pitch deck. For how skeptical HN is about AI startups or whatever, it seems pretty strange to take this at face value.
If you say so. They directly profited from it.
I wonder what the eula says.
If Facebook says it, then... Sorted!
"What encryption do you use?"
"DES."
That's not how encryption works at all. At least not any encryption used in the last 100 years.
You'd probably have to go all the way back to the encryption methods of the Roman empire for that statement to make sense
I'd wager most of these comments are from nontechnical people, or technical people that are very far removed from security.
WhatsApp is constantly RE'd because it'd be incredibly valuable to discover gaps in its security posture, the community would find any exfil here.
Mobile applications are outside my domain so I am surprised platform security (SEL, attestation, etc.) has been so easily defeated.
When you get a new phone, all you need is your phone number to retrieve the past chats from backup; nothing else. That proves, regardless of specifics, that Meta can read your chats - they can send them to any new phone.
So it doesn’t really matter that it is E2EE in transit - they just have to wait for the daily backup, and they can read it then.
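One caveat: by default WhatsApp backups are held by Google Drive/iCloud, with end-to-end encrypted backups as an opt-in. But if restore really required only the phone number, the logic would be as simple as this deliberately naive sketch (not WhatsApp's actual scheme; the derivation is invented):

```python
import hashlib

def backup_key(phone_number: str) -> bytes:
    # If the key protecting a backup is derivable from data the server
    # already holds (here, just the phone number), then the server can
    # derive it too: E2EE for the transport, but not for the backup.
    return hashlib.sha256(phone_number.encode()).digest()

assert backup_key("+15551234567") == backup_key("+15551234567")
```

Whatever secret unlocks the backup, anyone who can compute or store it can read the chats; that's the whole argument in one line of key derivation.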
I am not familiar with the state of app RE. But between code obfuscators and the difficulty of distinguishing between 'normal' phone home data and user chats when doing static analysis... I'd say it's not out of the question.
The difference between source code in a high-level language, and AArch64 machine language, is surmountable. The effort is made easier if you can focus on calls to the crypto and networking libraries.
Understanding program flow is very different from understanding the composition of data passing through the program.
As GP alludes, you would be looking for a secondary pathway for message transmission. This would be difficult to hide in AArch64 code (from a skilled practitioner), and extra difficult in decompiled Java.
It would be "easy" enough, and an enormous prize, for anyone in the field.
> a secondary pathway for message transmission
That's certainly the only way messages could be uploaded to Facebook!
I've done this work on other mobile apps (not WhatsApp), and the work is not out of the ordinary.
It's difficult to hide subtleties in decompiled code. And anything that looks hairbally gets special attention, if the calling sites or side effects are interesting.
(edit for edit)
> That's certainly the only way messages could be uploaded to Facebook!
Well, there's a primary pathway which should be very obvious. And if there's a secondary pathway, it's probably for telemetry etc. If there are others, or if it isn't telemetry, you dig deeper.
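A first pass of that digging can even be automated: pull URL-like strings out of the binary and flag any host outside the expected first-party domains. The domains and the sample blob below are illustrative, and real analysis would also need to handle obfuscated or runtime-constructed strings, which this deliberately misses:

```python
import re

def suspicious_endpoints(binary: bytes):
    # Crude static triage: extract URL-like strings and keep only those
    # pointing at hosts outside the expected first-party set.
    expected = ("whatsapp.net", "whatsapp.com", "facebook.com")
    urls = re.findall(rb"https?://[\w.\-/]+", binary)
    return [u.decode() for u in urls
            if not any(d in u.decode() for d in expected)]

blob = b"\x00https://g.whatsapp.net/upload\x00https://evil.example/exfil\x00"
assert suspicious_endpoints(blob) == ["https://evil.example/exfil"]
```

Anything this surfaces is where you'd aim the heavier tooling (Frida traces, decompilation of the call sites).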
All secrets are out in the open at that point. There are no black boxes in mobile app code.
Seems like a good channel upon which to piggyback user data. Now all you have to do is obfuscate the serialization.
> It's difficult to hide subtleties in decompiled code.
Stripped, obfuscated code? Really? Are we assuming debug ability here?
> All secrets are out in the open at that point. There are no black boxes in mobile app code.
What about a loader with an encrypted binary that does a device attestation check?
Obfuscated code is more difficult to unravel in its original form than in the decompiled form. Decompiled code is a mess with no guideposts, but that's just a matter of time and patience to fix. It's genuinely tricky to write code that decompiles into deceptive appearances.
Original position is that it'd be difficult to hide side channel leakage of chat messages in the WhatsApp mobile app. I have not worked on the WhatsApp app, but if it's anything like the mobile apps I have analyzed, I think this is the correct position.
If the WhatsApp mobile apps are hairballs of obfuscation and misdirection, I would be a) very surprised, and b) highly suspicious. Since I don't do this work every day any more, I haven't thought much about it. But there are so many people who do this work every day, and WhatsApp is so popular, I'd be genuinely shocked if there were fewer than hundreds of people who have lightly scanned the apps for anything hairbally that would be worth further digging. Maybe I'm wrong and WhatsApp is special though. Happy to be informed if so.
If governments of various countries have compelled Meta to provide a backdoor and also required non-disclosure (e.g. a TCN secretly issued to Meta under Australia's Assistance and Access Act), this is how I imagined they would do it. It technically doesn't break encryption as the receiving device receives the encrypted message.
This is what I've suspected for a long time. I bet that's it. They can already read both ends, no need to b0rk the encryption. It's just them doing their job to protect you from fourth parties, not from themselves.
So it would be trivial to encrypt to the NSA key also, as done on Windows.
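The "encrypt to an extra key" idea is cheap to implement precisely because of how hybrid encryption works: the message is encrypted once under a random content key, and only that content key is wrapped per recipient, so adding a silent extra reader changes nothing about the ciphertext itself. A toy stdlib-only sketch (the XOR "cipher" is a stand-in, not real cryptography, and all the names are illustrative):

```python
import os, hmac, hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data against an HMAC-SHA256 keystream.
    NOT real cryptography -- a stand-in for AES-GCM or similar."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def encrypt_for(recipient_keys: dict, plaintext: bytes):
    """Encrypt once, then wrap the content key once per recipient."""
    content_key = os.urandom(32)
    ciphertext = keystream_xor(content_key, plaintext)
    wrapped = {name: keystream_xor(k, content_key)
               for name, k in recipient_keys.items()}
    return ciphertext, wrapped

def decrypt_as(name: str, my_key: bytes, ciphertext: bytes, wrapped: dict):
    content_key = keystream_xor(my_key, wrapped[name])
    return keystream_xor(content_key, ciphertext)

alice, bob, escrow = os.urandom(32), os.urandom(32), os.urandom(32)
ct, wraps = encrypt_for({"alice": alice, "bob": bob, "escrow": escrow}, b"hi bob")
assert decrypt_as("bob", bob, ct, wraps) == b"hi bob"
assert decrypt_as("escrow", escrow, ct, wraps) == b"hi bob"  # silent third reader
```

The giveaway in a real client would be that third wrapped-key entry going over the wire, which is exactly the sort of thing the reverse-engineering discussion above says would be hard to hide.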
My pet conspiracy theory is that the "backup code" which "restores" encrypted messages is there to annoy you into installing the app instead of chatting on the web.
https://support.apple.com/en-us/122234
In fact on this page they still claim iMessage is end-to-end encrypted.
...assuming you have icloud backups enabled, which is... totally expected? What's next, complaining about bitlocker being backdoored because microsoft can read your onedrive files?
What does that even mean? Suppose icloud backups doesn't exist, but you could still take screenshots and save them to icloud drive. Is that also "Apple has always been able to read encrypted iMessage messages"? Same goes for "other people having icloud backups enabled". People can also snitch on you, or get their phones seized. I feel like people like you and the article author are just redefining the threat model of E2EE apps just so they can smugly go "well ackshually..."
It's not hard to understand why Apple uploading every message to themselves to read by default is different from somebody intentionally taking a screenshot of their own phone.
Is icloud backups opt in or opt out? If it's opt in then would your objection still hold?
What would resolve my objection is if Apple either made messages backups E2EE always, as Google did and as Apple does themselves for other data categories like Keychain passwords, or if they excluded E2EE conversations (e.g. from ADP people) from non-E2EE backups, as Telegram does. Anything short of that does not qualify as E2EE regardless of the defaults, and marketing it as E2EE is false advertising.
TUESDAY, NOVEMBER 25, 2025 Blind Item #7 The celebrity CEO says his new chat system is so secure that even he can't read the messages. He is lying. He reads them all the time.
> the idea that WhatsApp can selectively and retroactively access the content of [end-to-end encrypted] individual chats is a mathematical impossibility
> Steven Murdoch, professor of security engineering at UCL, said the lawsuit was “a bit strange”. “It seems to be going mostly on whistleblowers, and we don’t know much about them or their credibility,” he said. “I would be very surprised if what they are claiming is actually true.”
No one apart from the firm filing the lawsuit is actually supporting this claim. A lot of people in this thread seem very confident that it's true, and I'm not sure what precisely makes them so confident.
It is not a mathematical impossibility in any way.
For example, they might be able to read the backups, or the keys might be somehow (accidentally or not) leaked...
And then the part about Telegram not having end-to-end encryption? What's that all about?
The UI of Element (the most popular Matrix client) is more or less in line with any other chat app, but I guess it depends what you mean by "on par to whatsapp". Biggest downside I've found is that you can't search your messages on the mobile clients.
- WhatsApp encryption is broken
- EU's and UK's Chat Control spooks demand Meta to insert backdoor because they cannot break the encryption
The Guardian has its own editorial flavour on tech news, so expect them to use any excuse to bash the subject.
Those are not law, so no, the EU doesn't demand that.
For WhatsApp they claim it is like Signal, with the caveat that if you have backups enabled it works like Messenger. Interestingly, since with backups enabled the key may be stored with Apple/Google rather than Meta, it might be the case that your phone vendor can read your WhatsApp messages but Facebook cannot.
One area of exposure was push notifications. I wonder if the access described wasn’t to the messages themselves but content rich notifications.
If so, both parties could be ~correct. Except the contractors would have been seeing what is technically metadata.
0. https://www.propublica.org/article/how-facebook-undermines-p...
If they're not credibly audited, then yeah.
Exactly who has the ability to decrypt the backup is not totally clear.
It may be a different situation for non-Android users, Android users who are not signed in with a Google account, Android users who are not using Google Play Services, etc.
I remember that you had to extract at least two keys from the android device to be able to read "on-device" chat storage in the days of yore, so the tech is there.
If copies of the keys aren't stored on the Google Drive side, we can say the backups are at least "superficially" encrypted.
...that telegram is backdoored by the russians? The implication you're trying to make seems to be that the russians must be choosing telegram because it's secure, while ignoring the possibility that they're choosing it because they have access to it. After all, do you think they want the possibility of their military scheming against them?
Perhaps you're simply struggling with the concepts here: would it help you to understand things better to add that russia and the US ban the use of Signal by their militaries and intelligence services?
Did you detect an implication that can't be extrapolated from the text without metacontext and secondary unstated axioms, or is your mind totally blank and baffled at what these data points could indicate?
Ideally, WhatsApp would fully support third-party open-source clients that can ensure that the mathematics are used as intended.
Nowadays all of the messaging pipeline on my phone is closed source and proprietary, and thus unverifiable at all.
The iPhone operating system is closed, the runtime is closed, the whatsapp client is closed, the protocol is closed… hard to believe any claim.
And i know that somebody’s gonna bring up the alleged e2e encryption… a client in control of somebody else might just leak the encryption keys from one end of the chat.
Closed systems that do not support third party clients that connect through open protocols should ALWAYS be assumed to be insecure.
So you're posting this from an open core CPU running on an open FPGA that you fabricated yourself, right? Or is this just a game of one-upmanship where people come with increasingly high standards for what counts as "secure" to signal how devoted to security they are?
> a client in control of somebody else might just leak the encryption keys from one end of the chat.
has nothing to do with closed/open source. preventing this requires remote attestation. i don't know of any messaging app out there that really does this, closed or open source.
also, ironically remote attestation is the antithesis of open source.
Any large scale provider with headquarters in the USA will be subject to backdoors and information sharing with the government when they want to read or know what you are doing.
Happy to bet $100 that this lawsuit goes nowhere.
Not just the USA. This is basically universal.
This type of generalized defeatism does more harm than not.
Nation state governments do have the ability to coerce companies within their territory by default.
If you think this feature is unique to the USA, you are buying too much into a separate narrative. All countries can and will use the force of law to control companies within their borders when they see fit. The USA actually has more freedom and protections in this area than many countries, even though it’s far from perfect.
> This type of generalized defeatism does more harm than not.
Pointing out the realities of the world and how governments work isn’t defeatism.
Believing that the USA is uniquely bad and closing your eyes to how other countries work is more harmful than helpful.
The OP assumption that it's just the way it is and everyone should accept their communication being compromised is the issue.
But for your data you want to absolutely keep secret? It's probably the only way to guarantee someone else somewhere cannot see it. Default to assuming that if it's remote, someone will eventually be able to access it. If not today, it'll be stored and decrypted later.
Then why are politicians wasting time and attracting ire by attempting to push it through? Same goes for the UK demanding backdoors. If they already have it, why start a big public fight over it?
Wonder what large scale provider outside USA won’t do that?
That's just wrong. Signal, for example, is headquartered in the US and does not even have this capability (besides metadata).
Personally, I would never trust anyone big enough that it (in this case Meta) needs and wants to be deeply entangled in politics.
> Our colleagues’ defence of NSO on appeal has nothing to do with the facts disclosed to us and which form the basis of the lawsuit we brought for worldwide WhatsApp users.
According to Meta's own voluntarily published official statements, they do not.
* FAQ on encryption: https://faq.whatsapp.com/820124435853543
* FAQ for law enforcement: https://faq.whatsapp.com/444002211197967
These representations are legally binding. If Meta were intentionally lying in these, it would invite billions of dollars of liability. They use similar terminology to Signal and the best private VPN companies: we can't read and don't retain message content, so law enforcement can't ask for it. They do keep some "meta" information and will provide it with a valid subpoena.
The latter link even clarifies Meta's interpretation of their responsibilities under "National Security Letters", which the US Government has tried to use to circumvent 4th amendment protections in the past:
> We interpret the national security letter provision as applied to WhatsApp to require the production of only two categories of information: name and length of service.
I guess we'll see if this lawsuit goes anywhere or discovery reveals anything surprising.
“The U.S. investigates” unfortunately does not mean as much as it used to. That said, I would rest easy in the knowledge that someone deep in the NSA already knows with absolute certainty whether the WhatsApp client app is doing anything weird. But they’re not likely to talk to a reporter or plaintiffs lawyer.
“everything I ever do can be used against me in court”
…then you are not up-to-date with the latest state of society
Privacy is the most relevant when you are in a position where that information is the difference between your life or your death
The average person going through their average day breaks dozens of laws because the world is a Kafkaesque surveillance capitalist society.
The amount of information that exists about the average consumer is so unbelievably vast that any litigator could make an argument against nearly any human on the planet that they are in violation of something, if there is enough pressure.
If you think you’re safe in this society because you “don’t do anything wrong“ then you’re compromised and don’t even realize it
No end-to-end encryption for groups. WhatsApp has.
No end-to-end encryption on desktop. WhatsApp has.
No break-in recovery (post-compromise security). WhatsApp has.
Inferring Telegram's security from the public statements of *checks notes* a former KGB officer and FSB director -- agencies that wrote the majority of the literature on maskirovka -- isn't exactly reliable, wouldn't you agree?
Everything regarding encrypted messaging is downstream of the reality that it’s better for UX for the app developer to own the keys. Once developers have the keys, they’re going to be compelled by governments to provide them when warrants are issued. Force and violence, not mathematical proofs, are the ultimate authority.
It’s fun to get into the “conspiratorial” discussions, like where the P-256 curve constants came from or whether the HSMs have backdoors. Ultimately, none of that stuff matters. Users don’t want their messages to go poof when their phone breaks, and governments will compel you to change whatever bulletproof architecture you have to better serve their warrants.
Lol, Fox guarding the hen house.
As someone wisely pointed out in this thread, the reason Facebook is doing this is: "it's for favor trading and leverage at the highest levels."
We'll either turn off that software penalty or merge the thread into a submission of the original Bloomberg source - these things take a bit of time to sort through!
Edit: thread merged from https://news.ycombinator.com/item?id=46836487 now.
Thank you for the insight as to why it happened.
The PIN interface is also an HSM on the backend. The HSM performs the rate limiting. So they'd need a backdoor'd HSM.
However, most users can't be bothered to choose such a PIN. In this case they choose a 4 or 6 digit pin.
To mitigate the risk of brute force, the PIN is rate limited by an HSM. The HSM, if it works correctly, should delete the encryption key if too many attempts are used.
Now sure, Meta could insert itself between the client and HSM and MITM to extract the PIN.
But this isn't a Meta specific gap, it's the problem with any E2EE system that doesn't require users to memorize a master password.
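The HSM behaviour described above amounts to a small state machine: release the key only for the correct PIN, and irreversibly wipe it after too many failures. A toy model of that state machine (a real HSM enforces this in tamper-resistant hardware; `ToyHSM` and its attempt limit are illustrative assumptions, not any vendor's actual design):

```python
import secrets

class ToyHSM:
    """Toy model of an HSM guarding an encryption key behind a PIN.

    The rate limiting is the whole point: a short PIN is fine only
    because the key is destroyed after a handful of wrong guesses.
    """
    MAX_ATTEMPTS = 10  # illustrative limit

    def __init__(self, pin: str):
        self._pin = pin
        self._key = secrets.token_bytes(32)  # the secret worth protecting
        self._failures = 0

    def unwrap_key(self, pin_guess: str) -> bytes:
        if self._key is None:
            raise RuntimeError("key wiped: attempt limit exceeded")
        if secrets.compare_digest(pin_guess, self._pin):
            self._failures = 0
            return self._key
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self._key = None  # brute force now buys nothing
        raise ValueError("wrong PIN")

hsm = ToyHSM("482916")
for guess in range(ToyHSM.MAX_ATTEMPTS):  # an attacker burning attempts
    try:
        hsm.unwrap_key(f"{guess:06d}")
    except ValueError:
        pass
# After the limit, even the correct PIN can no longer recover the key.
```

The MITM-the-PIN-entry concern from the comment above sits outside this state machine entirely, which is why it remains the weak point.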
I helped design E2EE systems for a big tech company and the unsatisfying answer is that there is no such thing as "user friendly" E2EE. The company can always modify the client, or insert themselves in the key discovery process, etc. There are solutions to this (decentralized app stores and open source protocols, public key servers) but none usable by the average person.
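One partial mitigation for a tampered key-discovery step is out-of-band key verification: if both parties compare a short fingerprint of the conversation's public keys, a server that substitutes keys during discovery becomes detectable. The sketch below is a toy version of the Signal-style "safety number" idea, not the actual algorithm (which uses iterated hashing over both identities); all inputs here are made up.

```python
import hashlib

def fingerprint(pub_a: bytes, pub_b: bytes) -> str:
    """Toy conversation fingerprint over two public keys.

    Sorting makes it order-independent, so both parties compute the
    same string and can compare it over a phone call or in person.
    """
    digest = hashlib.sha256(b"".join(sorted((pub_a, pub_b)))).hexdigest()
    num = str(int(digest[:30], 16)).zfill(30)
    # Render as five-digit groups, like a safety number.
    return " ".join(num[i:i + 5] for i in range(0, 30, 5))

alice_pub, bob_pub = b"alice-public-key", b"bob-public-key"
honest = fingerprint(alice_pub, bob_pub)
# A MITM server that plants its own key changes the fingerprint:
mitm = fingerprint(alice_pub, b"server-planted-key")
assert honest != mitm                                # mismatch is visible out of band
assert fingerprint(bob_pub, alice_pub) == honest     # both sides compute the same value
```

Of course this only helps the tiny fraction of users who actually compare the numbers, which is the "none usable by the average person" problem in a nutshell.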
Every time you sign in to the web interface or resign into the app you enter it. I don’t remember an option for an alphanumeric pin or to offload it to a third party.
The Messenger PIN is rate limited by an HSM, you merely enter it through the web interface.
Of course, the HSM could be backdoored or the client could exfil the secret but the latter would be easy to discover.
Harder to do any better here without making the user memorize a master password, which tends to fail miserably in real life.
Sure, Meta can obviously read encrypted messages in certain scenarios:
- you report a chat (you're just uploading the plaintext)
- you turn on their AI bot (inference runs on their GPUs)
Otherwise they cannot read anything. The app uses the same encryption protocol as Signal and it's been extensively reverse engineered. Hell, they worked with Moxie's team to get this done (https://signal.org/blog/whatsapp-complete/).
The burden of proof is on anyone that claims Meta bypassing encryption is "obviously the case."
I am really tired of HN devolving into angry uninformed hot takes and quips.
Zuck thinks we're "dumb fucks". That's his internet legacy: copying products, buying them up, wiping out competition.
https://www.msn.com/en-in/money/news/meta-ceo-mark-zuckerber...
"While Zuckerberg reportedly wanted to prevent "explicit" conversations with younger teens, a February 2024 meeting summary shows he believed Meta should be "less restrictive than proposed" and wanted to "allow adults to engage in racier conversation on topics like sex." He also rejected parental controls that would have let families disable the AI feature entirely. Nick Clegg, Meta's former head of global policy, questioned the approach in internal emails, asking if the company really wanted these products "known for" sexual interactions with teens, warning of "inevitable societal backlash."
Put differently: even if you don't owe megacorps that don't follow basic human decency better, you owe this community better if you're participating in it.
I don't have any proof that Meta stores WhatsApp messages but I feel it in my bones that at the very least tried to do so. And if ever that comes to light, precisely nobody will be surprised.
The amount of ambient cynicism on the internet basically makes this a meaningless statement. You could plausibly make the same claim for tons of other conspiracy theories, eg. JFK was assassinated by the CIA/FBI, Bush did 9/11, covid was intentionally engineered by the chinese/US government, etc.
On the other hand, Occam's razor can barely keep up with the mental gymnastics required to paint Bush (or even Cheney) as the mastermind behind 9/11.
The evidence is pretty clear that Facebook wants to do everything they legally can to track and monitor people, and they're perfectly okay crossing the line and going to court to find the boundaries.
Using a company like that for encrypted messaging seems like an unnecessary risk. Maybe they're not decrypting it, but they're undoubtedly tracking everything else about the conversation because that's what they do.
And I’m not even getting into the obvious negative social/political repercussions that have come directly from Facebook and their total lack of accountability/care. They make the world worse. Aside from the inconvenience for hobbyist communities and other groups, all of which should leave Facebook anyway, we would lose nothing of value if Facebook was shut down today. The world would get slightly better.
No, the best (and also most likely) outcome is you using a VPN and nothing happens, like 99.9% of pirates out there.
>Literally nothing happened.
Isn't there a lawsuit in the works?
Edit: they already won their first case in June against authors. I am very curious to see how that lawsuit goes. Obviously we don’t know the results yet but I would be incredibly surprised to see them lose and/or have to “undo” the training. That’s a difficult world to imagine, especially under the current US admin. Smart money is the damage is done and they’ll find some new way to be awful or otherwise break rules we can’t.
Is there any indication they didn't use a VPN? If they did use a VPN, how is it "there are rules for me and not them", given that anyone can also use VPN to pirate with impunity?
If there's some two-tier treatment of Meta, it's that they're being sued more aggressively than the average pirate. If you pirated Stranger Things, you can blab all you want about your copyright infringement escapades, and you'll likely never face any legal consequences. OTOH, once word got out that Meta was torrenting books, every copyright lawyer out there went after them.
The true wealthy live by an entirely different set of rules than the rest of us, especially when they are willing to prostrate themselves to the US President.
This has always been true to some degree, but is both more true than ever (there used to be some limits based on accepted decorum) plus they just dont even try to hide it anymore.
WhatsApp has been reverse engineered extensively, they worked with Moxie's team to implement the same protocol as Signal, and you can freely inspect the client binaries yourself!
If you're confident this is the case, you should provide a comment with actual technical substance backing your claims.
The tricky part would be doing it and not getting caught though.
I need to either enter my password or let the app access my iCloud Keychain to let it derive the backup encryption key.
It's also well known that they worked with Moxie's team to implement the same E2EE protocol as Signal. So messages are E2EE as well.