My biggest concern is long-term censorship risk. I can imagine the list of trusted CAs getting whittled down over the next several decades (since it's a privilege, not a right!), until it's small enough for all of them to collude or be pressured into blocking some particular person or organization.
My point is that a lack of options, a.k.a. reduced availability, is (or may be perceived as) dangerous on multiple layers of the WebPKI.
There are already plenty of CAs across the pond.
(That of course had to be amended, because some of those additional requirements were actually good ideas, like CT; there should be room for legitimate innovation, like shorter certs; and it's also OK for browsers to exercise sufficient oversight over TSPs breaking the rules, like the ongoing delayed-revocation (delrev) problem.)
2) You don't actually need to build a browser to achieve this goal, you just need a root program, and a viable (to some extent) substitute already exists; cf. all the "Qualified" stuff in EU lingo. So again, why do the work and risk spectacular failure if you don't need to?
3) Building an alternative browser for EU commerce that you'd have to use for the single market, but that likely wouldn't work for webpages from other countries, would be a bad user experience. I know what I'm saying: I use Qubes, and I've got different VMs with separate browser instances for banking etc. I'm pretty sure most people wouldn't like to have a similar setup, even with a working clipboard.
There are things you can't achieve by regulation: Galileo, the GPS replacement, couldn't have been regulated into existence. Or national clouds: GDPR, DSA, et al. won't magically spawn a fully loaded colo. Those genuinely need to be built, but another Chromium derivative would serve no purpose.
But if the EC can legislate e-signatures into existence, then it follows that they can also legislate browsers into accepting Q certs, can they not?
Mind you, they did exactly that with document signing. They made a piece of paper say three things: 1) e-signatures made by private keys matching Qualified™ Certificates are legally equivalent to written signatures, 2) all authorities are required to accept e-signatures, 3) here's how to get qualified certificates.
Upon reading this enchanted scroll, 3) magically spawned and took on a life of its own. ID cards issued here to every citizen are smartcards preloaded with private keys, for which you can download an X.509 cert good for everyday use. The hard part was 2), because we needed to equip and retrain every single civil servant, a large number of whom were older people not happy to change the way they work. But it happened.
So if the hard part is building and the easy part is regulating, and they have prior art they've already exercised, then why bother competing with Google, on a loss leader, with taxpayer funds? And with a non-technical, regulatory differentiator, which would most likely cause the technical aspects like performance and plugin availability to be neglected.
Personally: I'm for anything that takes leverage away from the CAs.
> Personally: I'm for anything that takes leverage away from the CAs.
You can automate trusted third parties all you want, but in the end you'll have trusted third parties one way or another (trust meshes still have third parties), and there. will. be. humans. involved.
In fact, if Let's Encrypt alone turned bad for some reason, that would already be enough to break the CA system, whether browsers remove it or not.
Is this tautology helpful? For sure it's commonly used, but I honestly have a hard time seeing what information it conveys in cases like this.
Now of course the issue is that the information can't be encoded into the bundle, but I'm saying that's a bug and not a feature.
Can it not? It seems like this SCTNotAfter constraint is effectively an API change of the root CA list that downstream users have to in some way incorporate if they want their behavior to remain consistent with upstream browsers.
That doesn't necessarily mean full CT support – they might just as well choose to completely distrust anything tagged SCTNotAfter, or to ignore it.
That said, it might be better to intentionally break backwards compatibility as a forcing function, so downstream clients have to make that decision deliberately, as failing open doesn't seem safe here. But I'm not sure the Mozilla root program list was ever intended to be consumed by non-browser clients in the first place.
That's what the blog post I linked in the top comment suggests is the "more disruptive than intended" approach. I don't think it's a good idea. Removing the root at `SCTNotAfter + max cert lifetime` is the appropriate thing.
There's an extra issue of not-often-updated systems too, since now you need to coordinate a system update at the right moment to remove the root.
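To make that timing rule concrete, here's a minimal sketch of the date arithmetic; the dates are purely illustrative and the ~398-day figure is just the current cap on public cert lifetimes, not anything taken from an actual root program policy:

```python
# Sketch of the "SCTNotAfter + max cert lifetime" removal rule.
# Dates and the lifetime cap are illustrative assumptions, not policy values.
from datetime import date, timedelta

sct_not_after = date(2025, 4, 15)        # hypothetical cutoff set for the root
max_cert_lifetime = timedelta(days=398)  # ~13-month cap on public certs

# The last cert logged at the cutoff expires no later than this, so the root
# can safely be dropped from the bundle afterwards:
removal_date = sct_not_after + max_cert_lifetime
print(removal_date)  # 2026-05-18
```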
Note that Mozilla supports not SCTNotAfter but DistrustAfter, which relies on the certificate's Not Before date. Since this provides no defense against backdating, it would presumably not be used with a seriously dangerous CA (e.g. DigiNotar). This makes it easy to justify removing roots at `DistrustAfter + max cert lifetime`.
On the other hand, SCTNotAfter provides meaningful security against a dangerous CA. If Mozilla begins using SCTNotAfter, I think non-browser consumers of the Mozilla root store will need to evaluate what to do with SCTNotAfter-tagged roots on a case-by-case basis.
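As a rough sketch of what that per-root evaluation could look like for a non-browser consumer (the dict shapes and field names here are hypothetical, not Mozilla's actual metadata format):

```python
# Hypothetical evaluation of the two constraint types discussed above.
# DistrustAfter keys off the cert's own notBefore, which a malicious CA can
# backdate; SCTNotAfter keys off CT log timestamps, which it cannot control.
from datetime import datetime

def cert_allowed(cert: dict, constraint: dict) -> bool:
    cutoff: datetime = constraint["cutoff"]
    if constraint["kind"] == "DistrustAfter":
        return cert["not_before"] <= cutoff
    if constraint["kind"] == "SCTNotAfter":
        # require at least one SCT obtained on or before the cutoff
        return any(ts <= cutoff for ts in cert.get("sct_timestamps", []))
    return True  # unconstrained root
```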
Yet that is the thing that goes around under the name "ca-certificates" and practically all non-browser TLS on Linux everywhere is rooted in it! Regardless of what the intent was, that is the role of the Mozilla CA bundle now.
I wonder how much I should be concerned about Mozilla's trust store's trustworthiness, given their data grab with Firefox? I've switched to LibreWolf over that (more in protest than thinking I'm personally being targeted). But I'm pretty sure LibreWolf will still be using the Mozilla trust store?
I haven't thought through enough to understand the implications of the moneygrubbing AI grifters in senior management positions at Mozilla being in charge of my TLS trust store, but I'm not filled with joy at the idea.
Certificate trust really should be centralized at the OS level (like it used to be) and not every browser having its own, incompatible trusted roots. It's arrogance at its worst and it helps nobody.
When are you imagining this "used to be" true? This technology was invented about thirty years ago, by Netscape, which no longer exists but in effect continues as Mozilla. They don't write an operating system (then or now) so it's hard to see how this is "centralized at the OS level".
Firefox has their own trusted list, but still supports administrator-installed OS CA certificates by default, as far as I know (but importantly not OS-provided ones).
HTTPS certificate trust is basically the last thing I think about when I choose an OS. (And for certain OSes I use, I actively don't trust their authors/owners.)
Which makes sense, because that would require them all to relinquish some power to their little corner of the Internet, which they are all unwilling to do.
This fuckery started with Google, dissatisfied with not having total control over the entire Internet, deciding they're going to rewrite the book for certificate trust in Chrome only (turns out after having captured the majority browser market share and having a de-facto monopoly, you can do whatever you want).
I don't blame Mozilla for having their own roots, because that is probably just incompetence on their part. It's more likely they traded figuring out how to interface with OS crypto APIs for continued upkeep of 30-year-old Netscape cruft. Anyone who has had to maintain large-scale deployments of Firefox understands this lament and knows what a pain in the ass it is.
(As an erstwhile pentester, btw, fuck the OS certificate store; makes testing sites a colossal pain).
i already mentioned that ("may or may not"). former or latter, per-app CA management is an abomination from both security and administrative perspectives. from the security perspective, abandonware (i.e. months-old software, at the rate things change in this business) will become effectively "bricked" by out-of-date CAs and out-of-date revocation lists, forcing users to either migrate (more $$$), roll with broken TLS, or bypass it entirely (more likely); from the administrative perspective, IT admins and devops guys will have to wrangle each application individually. it raises the hurdle from "keep your OS up-to-date" to "keep all of your applications up-to-date".
> As an erstwhile pentester
exactly. you're trying to get in. per-app config makes your life easier. as an erstwhile server-herder, i prefer the os store, which makes it easier for me to ensure everything is up-to-date, manage which 3rd-party CAs i trust & which i don't, and cut 3rd-parties out-of-the-loop entirely for in-house-only applications (protected by my own CA).
Can you please explain? I'm just curious, not arguing.
It's a web-of-trust and/or TOFU model if you look at it closely. These have different tradeoffs from the PKI, but don't somehow magically solve the hard problems of trusted key discovery.
That's not to say that domain typo attacks aren't a real problem, but memorizing an Onion link is entirely impossible. Domains exploiting typos or using registered/trademarked business names can also often be seized through legal means.
Or maybe we return to using bookmarks? Not sure exactly.
The client is perfectly able to verify that at connection time, without a central authority, by querying a well-known DNS entry. Literally do what the CA does to check, but JIT.
This does leave you vulnerable to a malicious DNS server, but that isn't an impossible hurdle to clear without re-inventing CAs. With major companies rolling out DoH, all you care about is that your DNS server isn't lying to you. With nothing other than dnsmasq you can be your own trusted authority, no 3rd party required.
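As a sketch of the "do what the CA does, but JIT" idea, this is roughly what a DANE/TLSA check looks like. It assumes dnspython is installed, that the zone publishes a "3 0 1" (DANE-EE, full cert, SHA-256) TLSA record, and it deliberately skips validating the DNS answer itself, which is exactly the part you'd still need per the malicious-DNS-server caveat above:

```python
# Minimal DANE-style check (sketch, not production code): compare the TLSA
# record in DNS against the certificate the server actually presents.
import hashlib
import socket
import ssl

import dns.resolver  # dnspython


def tlsa_matches(host: str, port: int = 443) -> bool:
    # 1. Grab the leaf certificate the server presents. CA validation is
    #    disabled on purpose: DNS, not a CA, is the authority being checked.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            leaf_sha256 = hashlib.sha256(tls.getpeercert(binary_form=True)).digest()

    # 2. Look up the TLSA record published at _port._tcp.host.
    answers = dns.resolver.resolve(f"_{port}._tcp.{host}", "TLSA")

    # 3. Accept if any "3 0 1" record matches the presented certificate.
    return any(
        rr.usage == 3 and rr.selector == 0 and rr.mtype == 1 and rr.cert == leaf_sha256
        for rr in answers
    )
```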
The essence of TOFU is that each party initiating a connection individually makes a trust decision, and these decisions are not delegated/federated out. PKI does delegate that decision to CAs, and them using an automated process does not make the entire system TOFU.
Yes, clients could be doing all kinds of different things such as DANE and DNSSEC, SSH-like TOFU etc., but they aren't, and the purpose of a system is what it does (or, in this case, doesn't).
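For contrast, a toy sketch of what SSH-style TOFU actually does: the client alone makes and records the trust decision, and no third party is ever consulted (the pin file is just an example name):

```python
# Toy TOFU pin store, known_hosts style (sketch). Trust whatever key is seen
# first, remember it locally, and only object if it later changes.
import hashlib
import json
from pathlib import Path

PIN_FILE = Path("pins.json")  # hypothetical local pin store


def check_tofu(host: str, presented_key_der: bytes) -> str:
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
    fingerprint = hashlib.sha256(presented_key_der).hexdigest()

    if host not in pins:
        pins[host] = fingerprint              # first use: trust and remember
        PIN_FILE.write_text(json.dumps(pins))
        return "trusted (first use)"
    if pins[host] == fingerprint:
        return "trusted (matches pin)"
    return "rejected: key changed since first use"
```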
Yes, that's the entire scope of the web PKI, and with the exception of EV certificates it never was anything else.
> it's pretty clear that with ACME and DNS challenges we don't need a huge centralized system to do this much weaker thing
Agreed – what we are doing today is primarily a result of the historical evolution of the system.
"Why don't we just trust the DNS/domain registry system outright if we effectively defer most trust decisions to it anyway" is a valid question to ask, with some good counterpoints (one being that the PKI + CT make every single compromise globally visible, while DANE does not, at least not without further extensions).
My objection is purely on terminology: Neither the current web PKI nor any hypothetical DANE-based future would be TOFU. Both delegate trust to some more or less centralized entity.
The main limitation right now is browser support. Browsers _only_ support CAs, so CAs continue being the norm.
Another problem with DNSSEC is that the root is controlled by the United States. If we start relying on DNSSEC, America gains the power to knock out entire TLDs by breaking the signature configuration. Recent credible threats of invading friendly countries should make even America's allies fearful for extending digital infrastructure in a way that gives them any more power.
We've spent a decade and a half slowly making the Web PKI more agile and more transparent by reducing key lifetimes, expanding automation support, and integrating certificate transparency.
None of that exists for DNS, largely by design.
The only real downsides are that DNSSEC doesn't have CT yet (that'd be nice), this adds latency, and larger DNS messages can be annoying.
[0] QName minimization means that if you're asking for foo.bar.baz.example. you'll ask . for example., then you'll ask example. for baz.example., and so on, detecting all the zone cuts yourself, as opposed to sending the full foo.bar.baz.example. query to ., then to example., and so on. If you minimize the query, . doesn't get to learn anything other than the TLD you're interested in, which is not much of a clue as to whether an evil . should MITM you. Now, because most domain names of interest have only one or two intermediate zones (a TLD or a ccTLD and one below that), and because those intermediates are also run by parties similar to the one that runs the root, you might still fear MITMing.
But you can still use a combination of WebPKI and DANE, in which case the evil DNSSEC CAs would have to collaborate with some evil WebPKI CA.
Ultimately though DNSSEC could use having CT.
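To make the visibility point in [0] concrete, here's a toy illustration of which name each zone gets asked, assuming (unrealistically) a zone cut at every label; real resolvers discover the actual cuts as they go:

```python
# Who sees what for a lookup of foo.bar.baz.example. (illustration only).
NAME = "foo.bar.baz.example."
labels = NAME.rstrip(".").split(".")
# zones queried, top-down: ".", "example.", "baz.example.", "bar.baz.example."
zones = ["."] + [".".join(labels[i:]) + "." for i in range(len(labels) - 1, 0, -1)]

print("without minimization (every zone sees the full name):")
for zone in zones:
    print(f"  ask {zone:<18} for {NAME}")

print("with QNAME minimization (each zone learns one more label only):")
for depth, zone in enumerate(zones):
    asked = ".".join(labels[len(labels) - 1 - depth:]) + "."
    print(f"  ask {zone:<18} for {asked}")
```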
The more feasible CA-free architecture is to have the browser operator perform domain validation and counter-sign every site's key, but that has other downsides and is arguably even less distributed.
Is the tor node you're accessing the real Facebook or just a phishing page intercepting your credentials? Better check if the 60 character domain name matches the one your friend told you about!
I don't think putting any more power in browser vendors is the right move. I'd rather see a DNSSEC overhaul to make DANE work.
Why on Earth would I trust some "site owner" (who are they? how do I authenticate them?) to operate an "onion service" securely and without abusing me? Do you not have a circular reasoning problem here?
> All the traffic is always encrypted and you don't have to trust anyone for it
Sure you do! You have to trust the other end, but since this is onion routing you have to trust many ends.
For CAs who get distrusted due to reasons which imply their issued certs might not be trustworthy, either revoke the certs individually or use a separate "never trust" list.
However it has a tradeoff: once a certificate is revoked, it can never be removed from the CRL. Under normal circumstances certificates naturally expire, so they only need to remain in CRLs if revoked before then; but with timestamps they are valid indefinitely, so they must stay explicitly marked invalid.
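A tiny sketch of that pruning consequence (field names are illustrative): with a normal cert there's an expiry date to prune against, but an indefinitely-valid timestamped cert leaves nothing to prune against, so its revocation entry stays forever:

```python
# When can a revoked serial be dropped from a CRL? (sketch)
from datetime import datetime

def prunable(revoked_entry: dict, now: datetime) -> bool:
    not_after = revoked_entry.get("not_after")  # None => valid indefinitely
    if not_after is None:
        return False                            # must stay on the CRL forever
    return now > not_after                      # cert expired anyway; safe to drop
```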
Now people have come to realize a cert basically ties a service to a domain name and that is basically the best you can do in most cases.
I really liked that functionality, it made sense to me
https://en.wikipedia.org/wiki/Man-in-the-middle_attack
If the CAs are doing their jobs...then FirstBank.com can get a cert for their web site. But the gang who rooted a bunch of home routers (or hacked a WiFi setup, or whatever) can't.
If not...then yeah, that's the problem.