But perhaps I've misunderstood you.
You're missing the point.
Anybody can download $insert_name_of_your_favourite_software and use that to encrypt data before uploading to cloud storage.
The point here is the discussion about multi-tenant cloud-based solutions. You know, the sort of thing you use in a work environment when you share documents with your colleagues.
In that context, the DIY $my_favourite_tool pre-encrypt route is simply not feasible or scalable, and it would be hell on earth to manage and maintain.
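To be fair to the parent comment, the single-user DIY route really is trivial; the pain is key management and sharing at scale. A minimal stdlib-only sketch of pre-encrypting a blob before upload (encrypt-then-MAC over an HMAC-SHA256 counter-mode keystream; this is illustrative, in practice you'd reach for gpg, age, or libsodium):

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a keystream by running HMAC-SHA256 in counter mode.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()  # encrypt-then-MAC
    return nonce + ct + tag

def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, nonce, len(ct))))
```

The server only ever sees the opaque blob; the catch, as the parent says, is that every collaborator now needs those two keys, which is exactly the part that doesn't scale.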
[1] https://proton.me/about/team
[2] https://steigerlegal.ch/2019/07/27/protonmail-transparenzber...
> To me, this smells like Crypto AG
It's easy to throw around unsubstantiated, impossible to disprove theories.
I don't know if they have anything like that for their other products, like calendar or file storage.
Presumably, if you stick to the mobile apps, you won't be running JavaScript served by their server? Unless the apps are just HTML wrappers.
The bridge looks good, though it seems shady that it's not open source. I'd expect it to be.
Interesting breakdown[1] of one of the claims that E2E encryption on ProtonMail is broken.
I’m assuming that Proton storage is a product from the same team as ProtonMail.
There was some deanonymization along those lines, via phone number or credit card details.
Honestly, this FUD goes round more often than the seasons.
As with most FUD, I'm still waiting for someone to prove it.
And no, I don't buy the "smells like Crypto AG" FUD, because you could level that sort of FUD at any of the US cloud providers.
For example, when AWS says "trust us" about their KMS or HSM services, can you, really? How do you know KMS or HSM isn't just a software proxy pretending to be what it claims? :)
The fact is that if you are going to use someone else's servers to do something for you, whether that someone is Proton or AWS or anyone else, you are, by definition, forced to abstract away your trust boundaries.
Dropbox is mentioned in the article, and I think the author is drinking the Kool-Aid and throwing out random facts.
[1] https://blog.dropbox.com/topics/company/new-solutions-to-sec...
The keys stay on the client. It is secure, but means the files are only decryptable on the client, so keys need to be shared manually. I guess security means extra hassle.
The two vulnerabilities they found seem pretty far-fetched to me. The first is basically that a compromised CA server could create fake public keys, and I honestly don't know how one would defend against that; transparency logs, maybe, but even those wouldn't entirely solve the problem of sharing keys for the first time. The second, around unencrypted metadata, is hard to assess without knowing which metadata is affected; it doesn't seem too problematic.
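Besides transparency logs, the classic partial defense against a key server substituting fake keys is out-of-band fingerprint verification (Signal's safety numbers work this way): both parties compare a short digest of the public key over another channel. A toy sketch, assuming you already have the raw public-key bytes:

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    # Short, human-comparable digest of a public key. Both parties read
    # this aloud (or compare it via another channel); a server that
    # swapped in its own key produces a mismatching fingerprint.
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    return " ".join(digest[i:i + 4] for i in range(0, 16, 4))
```

It doesn't fix first-contact trust by itself, but it does turn a silent substitution into something a careful user can detect.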
I would still (for now, at least) trust Tresorit over any of the US jurisdiction services. I wouldn't put my data on US jurisdiction servers no matter how much money you gave me.
I am, for now, tempted to say we should get a detailed explanation from Tresorit before jumping to conclusions.
It seems to me the author of the website made many assumptions; it's not clear whether they entered into any meaningful dialogue with Tresorit before publishing.
> any attempt to share a directory allows the server to share that directory with itself
Surely this is, by definition, required?
If you wish to share a file or directory with somebody outside your organisation via a simple link, how, exactly, do you envisage that happening without granting the Tresorit server permission to act as the intermediary?
Sure, you could, theoretically, mandate that those third parties install software on their devices, or register an account, or whatever. But let's face it: in the real world, forcing people through extra steps for a one-off share just introduces friction, and some people can't install random software on their computers due to corporate policies.
[1] https://www.linkedin.com/posts/tresorit_end-to-end-encrypted...
Maybe that's one reason why cloud providers aren't pushing it that heavily, especially the big players, since more data means more duplication, and more duplication means more efficient deduplication.
My bookmark archive is 10TB but deduped on-disk size is 100GB because most files are the same across backups!
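The dedup point is easy to demonstrate: content-addressed storage keyed by chunk hash collapses identical data down to one stored copy, and that's exactly what client-side encryption (with a fresh random nonce per upload) destroys. A rough sketch:

```python
import hashlib

class DedupStore:
    def __init__(self):
        self.chunks = {}  # SHA-256 hex digest -> chunk bytes, stored once

    def put(self, data: bytes, chunk_size: int = 4096) -> list[str]:
        # Split into fixed-size chunks and store each under its hash;
        # identical chunks across files and backups land on one copy.
        refs = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)
            refs.append(h)
        return refs

store = DedupStore()
backup1 = store.put(b"same file contents" * 1000)
backup2 = store.put(b"same file contents" * 1000)  # second backup adds zero new chunks
```

Encrypt each backup client-side with a random nonce and every chunk hashes differently, so the provider pays full price for every copy, which is the 10TB-vs-100GB gap in a nutshell.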
It’s not hard to encrypt it before you upload it.
Looks like the safest bet is still to just tar everything and encrypt/sign the result in one go.
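The tar-then-seal approach has the nice property that the server only ever sees one opaque blob, so per-file names and sizes never leak. A stdlib sketch of the bundling-and-authentication half (for real use, gpg or age handles encryption and signing together):

```python
import hashlib
import hmac
import io
import tarfile

def pack_and_sign(files: dict[str, bytes], mac_key: bytes) -> bytes:
    # Bundle everything into a single in-memory tar.gz archive...
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    archive = buf.getvalue()
    # ...then authenticate the whole blob in one go (a real pipeline
    # would also encrypt it here, before it ever leaves the machine).
    tag = hmac.new(mac_key, archive, hashlib.sha256).digest()
    return archive + tag

def verify(blob: bytes, mac_key: bytes) -> bytes:
    archive, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, archive, hashlib.sha256).digest()):
        raise ValueError("tampered archive")
    return archive
```

One blob in, one blob out: any tampering anywhere in the archive invalidates the single tag over the whole thing.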
I wonder how vulnerable eg. Linux filesystem level encryption is to these kinds of attacks...
Unauthenticated Key Material
Unauthenticated Public Keys
attacks. That's generally understood to be infeasible.
It's more than that. Simply incrementing your way through a 256 bit counter is impossible by the thermodynamic cost alone.
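The thermodynamic argument can be made concrete with the Landauer limit, roughly kT ln 2 joules per irreversible bit operation at room temperature (the figures for solar luminosity and lifetime below are round approximations):

```python
import math

k_B = 1.380649e-23              # Boltzmann constant, J/K
T = 300                          # room temperature, K
landauer = k_B * T * math.log(2) # minimum energy per irreversible bit op, ~2.9e-21 J

# Energy just to step a counter through all 2**256 states, charging
# only one bit operation per increment (a generous underestimate):
total_joules = landauer * 2**256

# Rough comparison: solar luminosity (~3.8e26 W) over ~10 billion
# years (~3.15e17 s) gives about 1.2e44 J of total output.
sun_lifetime_output = 3.8e26 * 3.15e17
print(f"{total_joules:.2e} J, about {total_joules / sun_lifetime_output:.1e} "
      f"times the Sun's lifetime energy output")
```

So even before doing any actual AES work per candidate key, merely enumerating the keyspace costs trillions of times the Sun's total lifetime output.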
[1] https://security.stackexchange.com/questions/14068/why-most-...
You could even have no key schedule at all and just make your AES key size in bits equal to 128 * num_of_rounds. That doesn't mean the brute-force complexity would actually be that high, but it would hardly matter...