169 points | by tptacek | 3 days ago | 7 comments
  • mcpherrinm3 days ago
    The flipside of the same technical points is https://sslmate.com/blog/post/entrust_distrust_more_disrupti... where some non-browser clients don't handle this, or worse, handle it incorrectly.
    • tptacek3 days ago
      Right; it's imperfect, as everything is. But of course, it's also a huge bit of leverage for the root programs (more accurately, a loss of leverage for CAs) in killing misbehaving CAs; those programs can't be blackmailed with huge numbers of angry users anymore, only a much smaller subset of users. Seems like a good thing, right?
      • LegionMammal9783 days ago
        Only insofar as you trust the root programs to use their leverage responsibly, both today and in the medium-to-long-term future.
        • tptacek3 days ago
          The operators of the root programs control the browsers (and, in some cases, the operating systems!), so this doesn't make much sense.
          • LegionMammal9783 days ago
            Any leverage against the root programs in the form of angry users is also leverage against the browser devs, is it not? Either way you look at it, the root programs/browser devs receive less pushback and gain more autonomy.

            My biggest concern is long-term censorship risk. I can imagine the list of trusted CAs getting whittled down over the next several decades (since it's a privilege, not a right!), until it's small enough for all of them to collude or be pressured into blocking some particular person or organization.

            • tptacek3 days ago
              Your concern is that there will be... too few CAs?
              • LegionMammal9783 days ago
                Yes. It would add another potential point of failure to the process of publishing content on the Web, if such a scenario came to pass. (Of course, the best-case scenario is that we retain a healthy number of CAs who can act independently, but also maintain compliance indefinitely.)
                • tptacek3 days ago
                  I don't know what to say; I think that's the first time I've heard that concern on HN. Thanks!
                  • throw_a_grenade2 days ago
                    To add to this, the EU's 2021 eIDAS (the one with the mandatory trust list) was a response to a similar lack of availability. Contrary to what most HNers instinctively thought, it wasn't about interception: the EC was annoyed that none of the root programs is based in the EU, which means 100% of trust decisions are made on the other side of the Big Water. The EC felt a need to do something about it; given that TLS certificates are needed for modern business, healthcare, finance, etc., it saw this as an economic sovereignty issue.

                    My point is that lack of options, a.k.a. availability, is (or may be perceived as) dangerous at multiple layers of the WebPKI.

                    • dadrian2 days ago
                      No, eIDAS 2.0 was an attempt to address the fact that the EU is not one market in ecommerce, because EU citizens don't like making cross-border orders. The approach to solving this was to attach identity information to sites, ala EV certificates. The idea for this model came from the trust model for digital document signatures in PDFs.

                      There are already plenty of CAs across the pond.

                      • throw_a_grenade2 days ago
                        That's an orthogonal problem. eIDAS had to solve many problems to create a full solution. You're right that we have many TSPs (aka CAs), and NABs too. The EU has experience running a continent-wide PKI for e-signatures that are accepted in other countries. But there were no root programs in the WebPKI based in the EU; they were essentially unaccountable to the EU, yet a critical link in the chain of an end-to-end solution. There was no guarantee that browser vendors wouldn't establish capricious requirements for joining root programs (i.e. ones that would be incompatible with EU law and would exclude European TSPs). Therefore the initial draft stated that browsers need to import the EU trust list wholesale and are prohibited from adding their own requirements.

                        (That of course had to be amended, because some of those additional requirements were actually good ideas like CT, there should be room for legitimate innovation like shorter certs, and it's also OK for browsers to do sufficient oversight of TSPs breaking the rules, like the ongoing delrev problem.)

                        • tptacek2 days ago
                          Serious question: if the EU wants a root program they control, shouldn't step one be building a browser that anybody wants to use?
                          • throw_a_grenade2 days ago
                            1) From a eurocrat's pov, why build a browser when you can regulate the existing ones instead? The EU's core competence is regulating, not building, and they know it.

                            2) You don't actually need to build a browser to achieve this goal, you just need a root program, and a viable (to some extent) substitute already exists, cf. all the "Qualified" stuff in EU lingo. So again, why do the work and risk spectacular failure if you don't need to?

                            3) Building an alternative browser for EU commerce that you'd have to use for the single market, but that likely wouldn't work for webpages from other countries, would be a bad user experience. I know what I'm saying: I use Qubes and I've got different VMs with separate browser instances for banking etc., and I'm pretty sure most people wouldn't like to have a similar setup even with a working clipboard.

                            There are things you can't achieve by regulation, e.g. Galileo, the GPS replacement, which you can't regulate into existence. Or national clouds: GDPR, DSA, et al. won't magically spawn a fully loaded colo. Those surely need to be built, but another Chromium derivative would serve no purpose.

                            • tptacek2 days ago
                              I feel like if you can make an Airbus, you can make a browser and a search engine.
                              • If you're talking about technical capability, yeah, no contest here.

                                But if the EC can legislate e-signatures into existence, then it follows that they can also legislate browsers into accepting Q certs, can they not?

                                Mind you, they did exactly that with document signing. They made a piece of paper say three things: 1) e-signatures made by private keys matching Qualified™ Certificates are legally equivalent to written signatures, 2) all authorities are required to accept e-signatures, 3) here's how to get qualified certificates.

                                Upon reading this enchanted scroll, 3) magically spawned and went off to live its own life. ID cards issued here to every citizen are smartcards preloaded with private keys, for which you can download an X.509 cert good for everyday use. The hard part was 2), because we needed to equip and retrain every single civil servant, a big number of whom were older people not happy to change the way they work. But it happened.

                                So if the hard part is building and the easy part is regulating, and they have prior art already exercised, why bother competing with Google, on a loss leader, with taxpayer funds? And on a non-technical, regulatory feature, which would most likely cause technical aspects like performance and plugin availability to be neglected.

              • cryptonector2 days ago
                If there was just one CA then there would be no CABforum and users would have no leverage. This is the situation in DNSSEC. I don't think it's that bad, as one can always run one's own . and use QName minimization, but still, com. and such TLDs would be very powerful intermediate CAs themselves. And yet I still like DNSSEC/DANE, as you know, except maybe I'm liking the DANE+WebPKI combo more. And I don't fear "too few CAs" either, because the way I figure it, if the TLAs compromise one CA, they can and will compromise all CAs.
                • tptacek2 days ago
                  Well, I will give you this: this is a novel take. The WebPKI and DANE, because, heck, it's all compromised anyways.

                  Personally: I'm for anything that takes leverage away from the CAs.

                  • cryptonector2 days ago
                    Well, it's u/LegionMammal978's novel take, I just riffed on it.

                    > Personally: I'm for anything that takes leverage away from the CAs.

                    You can automate trusted third parties all you want, but in the end you'll have trusted third parties one way or another (trust meshes still have third parties), and there. will. be. humans. involved.

              • marcosdumay2 days ago
                Yep. Too many CAs is a failure mode of the CA system, and too few CAs is also a failure mode of the CA system.

                In fact, if just Let's Encrypt turned bad for some reason, that alone would be enough to break the CA system, whether browsers remove it or not.

              • throw_a_grenade2 days ago
                /cough/ VeriSign /cough/
      • Y_Y2 days ago
        > Right; it's imperfect, as everything is.

        Is this tautology helpful? For sure it's commonly used, but I honestly have a hard time seeing what information it conveys in cases like this.

    • dadrian3 days ago
      Non-browser clients shouldn't be expected to crib browser trust decisions. Also, the (presumably?) default behavior for a non-browser client that consumes a browser root store but is unaware of the constraint behavior is to not enforce the constraint. So it would effectively continue to trust the CA until it is fully removed, which is probably the correct decision anyway.
      • DSMan1952763 days ago
        To me that's an odd position to take. Ultimately, if the user is using Mozilla's root CA list, then they're trusting Mozilla to determine which certs should be valid. If non-browser programs using the list are trusting certs that Mozilla says shouldn't be trusted, that's not a good result.

        Now of course the issue is that the information can't be encoded into the bundle, but I'm saying that's a bug and not a feature.

        • dadrian3 days ago
          Mozilla’s list is built to reflect the needs of Firefox users, which are not the same as the needs of most non-browser programs. The availability/compatibility vs security tradeoff is not the same.
        • lxgr2 days ago
          > the information can't be encoded into the bundle

          Can it not? It seems like this SCTNotAfter constraint is effectively an API change to the root CA list, one that downstream users have to incorporate in some way if they want their behavior to remain consistent with upstream browsers.

          That doesn't necessarily mean full CT support – they might just as well choose to completely distrust anything tagged SCTNotAfter, or to ignore it.

          That said, it might be better to intentionally break backwards compatibility as a forcing function, so that downstream clients make that decision deliberately, since failing open doesn't seem safe here. But I'm not sure the Mozilla root program list was ever intended to be consumed by non-browser clients in the first place.

          • mcpherrinm2 days ago
            > That doesn't necessarily mean full CT support – they might just as well choose to completely distrust anything tagged SCTNotAfter, or to ignore it.

            That's what the blog post I linked in the top comment suggests is the "more disruptive than intended" approach. I don't think it's a good idea. Removing the root at `SCTNotAfter + max cert lifetime` is the appropriate thing.

            There's an extra issue of not-often-updated systems too, since now you need to coordinate a system update at the right moment to remove the root.

            • agwa2 days ago
              > Removing the root at `SCTNotAfter + max cert lifetime` is the appropriate thing.

              Note that Mozilla supports not SCTNotAfter but DistrustAfter, which relies on the certificate's Not Before date. Since this provides no defense against backdating, it would presumably not be used with a seriously dangerous CA (e.g. DigiNotar). This makes it easy to justify removing roots at `DistrustAfter + max cert lifetime`.

              On the other hand, SCTNotAfter provides meaningful security against a dangerous CA. If Mozilla begins using SCTNotAfter, I think non-browser consumers of the Mozilla root store will need to evaluate what to do with SCTNotAfter-tagged roots on a case-by-case basis.
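
              A rough sketch of what honoring such a constraint might look like for a non-browser consumer (Go; the Constraint type and its fields are hypothetical, only crypto/x509 and time are real APIs):

              ```go
              package rootstore

              import (
                  "crypto/x509"
                  "time"
              )

              // Constraint is a hypothetical per-root distrust date as a downstream
              // root-store consumer might represent it.
              type Constraint struct {
                  DistrustAfter *time.Time // reject certs whose NotBefore is after this date (Mozilla's current mechanism)
                  SCTNotAfter   *time.Time // reject certs whose earliest SCT is after this date
              }

              // StillTrusted reports whether a leaf chaining to a constrained root
              // should still be accepted. earliestSCT is the zero time if the consumer
              // does not parse SCTs, in which case an SCTNotAfter-tagged root cannot be
              // used safely.
              func StillTrusted(leaf *x509.Certificate, c Constraint, earliestSCT time.Time) bool {
                  if c.DistrustAfter != nil && leaf.NotBefore.After(*c.DistrustAfter) {
                      return false // nominally issued after the distrust date (no defense against backdating)
                  }
                  if c.SCTNotAfter != nil && (earliestSCT.IsZero() || earliestSCT.After(*c.SCTNotAfter)) {
                      return false // not publicly logged before the distrust date
                  }
                  return true
              }
              ```

              The root itself can then be dropped from the bundle entirely once the distrust date plus the maximum certificate lifetime has passed, as suggested upthread.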

          • yencabulator2 days ago
            > But I'm not sure if the Mozilla root program list ever intended to be consumed by non-browser clients in the first place.

            Yet that is the thing that goes around under the name "ca-certificates" and practically all non-browser TLS on Linux everywhere is rooted in it! Regardless of what the intent was, that is the role of the Mozilla CA bundle now.

    • bigiain3 days ago
      Hmmmm, speaking of distrust and Mozilla...

      I wonder how much I should be concerned about Mozilla's trust store's trustworthiness, given their data grab with Firefox? I've switched to LibreWolf over that (more in protest than thinking I'm personally being targeted). But I'm pretty sure LibreWolf will still be using the Mozilla trust store?

      I haven't thought through enough to understand the implications of the moneygrubbing AI grifters in senior management positions at Mozilla being in charge of my TLS trust store, but I'm not filled with joy at the idea.

      • dadrian3 days ago
        What actual risk are you worried about here? Mozilla changed their data policy, therefore the root store might do what...?
      • chicom_malware3 days ago
        WebPKI is a maze of fiefdoms controlled by a small group of power tripping little Napoleons.

        Certificate trust really should be centralized at the OS level (like it used to be) and not every browser having its own, incompatible trusted roots. It's arrogance at its worst and it helps nobody.

        • tialaramex3 days ago
          > (like it used to be)

          When are you imagining this "used to be" true? This technology was invented about thirty years ago, by Netscape, which no longer exists but in effect continues as Mozilla. They don't write an operating system (then or now) so it's hard to see how this is "centralized at the OS level".

          • lxgr2 days ago
            It was true for at least Chrome until around 2020: Chrome used to not ship with any trusted CA list and default to the OS for that.

            Firefox has their own trusted list, but still supports administrator-installed OS CA certificates by default, as far as I know (but importantly not OS-provided ones).

        • marky19913 days ago
          Why should it be centralized at the os level?

          Https certificate trust is basically the last thing I think about when I choose an os. (And for certain OSes I use, I actively don't trust its authors/owners)

          • tptacek3 days ago
            It is genuinely weird to think Microsoft should get a veto over your browser if that browser stops trusting a CA, right?
            • chicom_malware2 days ago
              The only thing that is genuinely weird is having four different certificate stores on a system, each with different trusted roots, because the cabals of man-children that control the WebPKI can't set aside their petty disagreements and reach consensus on anything.

              Which makes sense, because that would require them all to relinquish some power to their little corner of the Internet, which they are all unwilling to do.

              This fuckery started with Google, dissatisfied with not having total control over the entire Internet, deciding they're going to rewrite the book for certificate trust in Chrome only (turns out after having captured the majority browser market share and having a de-facto monopoly, you can do whatever you want).

              I don't blame Mozilla having their own roots because that is probably just incompetence on their part. It's more likely they traded figuring out interfacing with OS crypto APIs for upkeep on 30 year old Netscape cruft. Anyone who has had to maintain large scale deployments of Firefox understands this lament and knows what a pain in the ass it is.

            • hello_computer3 days ago
              that’s not what he meant, and you know it. he means use the OS store (the one the user has control over), instead of having each app do its own thing (where the user may or may not have control, and even if he does have it, now has to tweak settings in a dozen places instead of one). they try to pull the same mess with DNS (i.e. Mozilla’s DoH implementation)
              • tptacek2 days ago
                I don't understand, because the user has control over the browser store too.

                (As an erstwhile pentester, btw, fuck the OS certificate store; makes testing sites a colossal pain).

                • hello_computer2 days ago
                  > I don't understand, because the user has control over the browser store too.

                  i already mentioned that ("may or may not"). former or latter, per-app CA management is an abomination from security and administrative perspectives. from the security perspective, abandonware (i.e. months old software at the rate things change in this business) will become effectively "bricked" by out-of-date CAs and out-of-date revocation lists, forcing the users to either migrate (more $$$), roll with broken TLS, or even bypass it entirely (more likely); from the administrative perspective, IT admins and devops guys will have to wrangle each application individually. it raises the hurdle from "keep your OS up-to-date" to "keep all of your applications up-to-date".

                  > As an erstwhile pentester

                  exactly. you're trying to get in. per-app config makes your life easier. as an erstwhile server-herder, i prefer the os store, which makes it easier for me to ensure everything is up-to-date, manage which 3rd-party CAs i trust & which i don't, and cut 3rd-parties out-of-the-loop entirely for in-house-only applications (protected by my own CA).

                  • tptacek2 days ago
                    It's baffling to me that anyone would expect browsers to make root store decisions optimized for server-herders. You're not their userbase!
                    • neither are pentesters
                      • tptacek21 hours ago
                        Right, I don't think the pentester use case here is at all dispositive; in fact, it's approximately as meaningful as the server-herders.
                • mwcampbell2 days ago
                  > (As an erstwhile pentester, btw, fuck the OS certificate store; makes testing sites a colossal pain)

                  Can you please explain? I'm just curious, not arguing.

                  • tptacek2 days ago
                    It's a good question! When you're testing websites, you've generally got a browser set up with a fake root cert so you can bypass TLS. In that situation, you want one of your browsers to have a different configuration than your daily driver.
        • arccy2 days ago
          Unfortunately, OS vendors like microsoft are quite incompetent at running root stores https://github.com/golang/go/issues/65085#issuecomment-25699...
  • udev40962 days ago
    CAs should have been a thing of the past by now. We should learn from the truly decentralized architecture of Tor onion services. There is no central authority in onion services; the site owner has full control over the site. All the traffic is always encrypted, and you don't have to trust anyone for it.
    • lxgr2 days ago
      You have to trust whoever you got an Onion link from, and yourself not to fall for a similar-looking one, since the address itself is effectively the trusted key.

      It's a web-of-trust and/or TOFU model if you look at it closely. These have different tradeoffs from the PKI, but don't somehow magically solve the hard problems of trusted key discovery.
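
      To make "the link is the key" concrete: a v3 onion address is literally the service's ed25519 public key plus a two-byte checksum and a version byte, base32-encoded. A sketch of the derivation (following the Tor v3 rendezvous spec as I understand it; uses golang.org/x/crypto/sha3):

      ```go
      package onion

      import (
          "encoding/base32"
          "strings"

          "golang.org/x/crypto/sha3"
      )

      // AddressFromPubKey derives the 56-character v3 .onion address from a
      // 32-byte ed25519 public key: base32(pubkey || checksum || version).
      func AddressFromPubKey(pub [32]byte) string {
          const version = 0x03
          // checksum = SHA3-256(".onion checksum" || pubkey || version)[:2]
          sum := sha3.Sum256(append(append([]byte(".onion checksum"), pub[:]...), version))
          blob := append(append(append([]byte{}, pub[:]...), sum[0], sum[1]), version)
          return strings.ToLower(base32.StdEncoding.EncodeToString(blob)) + ".onion"
      }
      ```

      There is nothing else to verify against, so trust reduces entirely to trusting the channel that handed you the address.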

      • dooglius2 days ago
        You also have to trust where you get a non-onion link from?
        • lxgr2 days ago
          Yes, but non-Onion links are slightly more memorable.

          That's not to say that domain typo attacks aren't a real problem, but memorizing an Onion link is entirely impossible. Domains exploiting typos or using registered/trademarked business names can also often be seized through legal means.

          • nullpoint4202 days ago
            Can we not map regular domain names to onion links? Like DNS, except instead of the regular mapping of names -> IP addresses, it's names -> Onion URLs?

            Or maybe we return to using bookmarks? Not sure exactly.

      • Spivak2 days ago
        Sure, but the web is already TOFU even with CAs, because the only thing being asserted is that you're connected to someone who (probably) controls the domain.

        The client is perfectly able to verify that when connecting without a central authority by querying a well-known DNS entry. Literally do what the CA does to check but JIT.

        This does leave you vulnerable to a malicious DNS server, but that's not an impossible hurdle to clear without re-inventing CAs. With major companies rolling out DoH, all you care about is that your DNS server isn't lying to you. With nothing other than dnsmasq you can be your own trusted authority, no 3rd party required.
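
        A toy sketch of that "do the check JIT" idea, resolving over DoH so a local resolver can't quietly rewrite the answer (the _tls-fingerprint label and TXT format are invented for illustration; the endpoint is Cloudflare's public DoH JSON API):

        ```go
        package jitverify

        import (
            "crypto/sha256"
            "crypto/x509"
            "encoding/hex"
            "encoding/json"
            "fmt"
            "net/http"
            "strings"
        )

        type dohResponse struct {
            Answer []struct {
                Data string `json:"data"`
            } `json:"Answer"`
        }

        // FingerprintMatchesDNS checks whether the SHA-256 of the certificate the
        // server just presented appears in a TXT record published under a
        // (hypothetical) _tls-fingerprint label for the host.
        func FingerprintMatchesDNS(host string, leaf *x509.Certificate) (bool, error) {
            sum := sha256.Sum256(leaf.Raw)
            want := hex.EncodeToString(sum[:])

            url := fmt.Sprintf("https://cloudflare-dns.com/dns-query?name=_tls-fingerprint.%s&type=TXT", host)
            req, err := http.NewRequest("GET", url, nil)
            if err != nil {
                return false, err
            }
            req.Header.Set("Accept", "application/dns-json")
            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                return false, err
            }
            defer resp.Body.Close()

            var parsed dohResponse
            if err := json.NewDecoder(resp.Body).Decode(&parsed); err != nil {
                return false, err
            }
            for _, a := range parsed.Answer {
                if strings.Contains(strings.ToLower(a.Data), want) {
                    return true, nil
                }
            }
            return false, nil
        }
        ```

        (The catch is that the DoH request itself rides on a WebPKI-authenticated HTTPS connection, so this shifts the trust anchor rather than removing it.)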

        • lxgr2 days ago
          The web PKI is not TOFU in any way: Neither do most browsers offer a convenient way of persistently trusting a non-PKI-chaining server certificate across multiple site visits, nor do they offer a way to not trust a PKI-chaining certificate automatically.

          The essence of TOFU is that each party initiating a connection individually makes a trust decision, and these decisions are not delegated/federated out. PKI does delegate that decision to CAs, and them using an automated process does not make the entire system TOFU.

          Yes, clients could be doing all kinds of different things such as DANE and DNSSEC, SSH-like TOFU etc., but they aren't, and the purpose of a system is what it does (or, in this case, doesn't).

          • Spivak2 days ago
            Web PKI is not TOFU in the specific instance where you have an a priori trusted url you know about. But I, and others, argue that this isn't actually that strong of a guarantee in practice. I'm just trusting that this URL is the real life entity I think it is. The only thing you get is that you're connected to someone who has control of the domain. And it's pretty clear that with ACME and DNS challenges we don't need a huge centralized system to do this much weaker thing, you just need any DNS server you trust, it can even be yours.
            • lxgr2 days ago
              > The only thing you get is that you're connected to someone who has control of the domain.

              Yes, that's the entire scope of the web PKI, and with the exception of EV certificates it never was anything else.

              > it's pretty clear that with ACME and DNS challenges we don't need a huge centralized system to do this much weaker thing

              Agreed – what we are doing today is primarily a result of the historical evolution of the system.

              "Why don't we just trust the DNS/domain registry system outright if we effectively defer most trust decisions to it anyway" is a valid question to ask, with some good counterpoints (one being that the PKI + CT make every single compromise globally visible, while DANE does not, at least not without further extensions).

              My objection is purely on terminology: Neither the current web PKI nor any hypothetical DANE-based future would be TOFU. Both delegate trust to some more or less centralized entity.

    • some_random2 days ago
      Tor did not in fact magically solve trust: you have to (potentially blindly) trust that the URL you've been given or found really belongs to who it claims to be. It's not uncommon for scammers to change URLs in directories from real businesses or successful scams to their own scam, and there's no way to detect this if it's your first time.
    • WhyNotHugo2 days ago
      We could also be relying on DNSSEC+DANE, where the domain owner publishes TLS public keys via DNS. Without a third party CA.

      The main limitation right now is browser support. Browsers _only_ support CAs, so CAs continue being the norm.
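
      For what it's worth, the matching step DANE would substitute for the CA check is tiny. A sketch for a DANE-EE / SPKI / SHA-256 ("3 1 1") TLSA record, assuming the record itself arrives via a DNSSEC-validating resolver:

      ```go
      package dane

      import (
          "bytes"
          "crypto/sha256"
          "crypto/x509"
      )

      // MatchesTLSA311 reports whether the presented certificate's
      // SubjectPublicKeyInfo hashes to the certificate-association data of a
      // "TLSA 3 1 1" (DANE-EE, SPKI, SHA-256) record.
      func MatchesTLSA311(leaf *x509.Certificate, tlsaData []byte) bool {
          sum := sha256.Sum256(leaf.RawSubjectPublicKeyInfo)
          return bytes.Equal(sum[:], tlsaData)
      }
      ```

      All the hard parts live in getting tlsaData to the client with DNSSEC validation intact, which is exactly where browser support is missing.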

      • jeroenhd2 days ago
        Outside of some European TLDs, DNSSEC is pretty much unused. Amazon's cloud DNS service only recently started supporting it and various companies trying to turn it on ran into bugs in Amazon's implementation and got painful downtime. Hell, there are even incompetent TLDs that have DNSSEC broken entirely with no plan for fixing it any time soon.

        Another problem with DNSSEC is that the root is controlled by the United States. If we start relying on DNSSEC, America gains the power to knock out entire TLDs by breaking the signature configuration. Recent credible threats of invading friendly countries should make even America's allies fearful for extending digital infrastructure in a way that gives them any more power.

      • dadrian2 days ago
        The main limitation is the incredibly opaque and brittle nature of putting keys in DNS.

        We've spent a decade and a half slowly making the Web PKI more agile and more transparent by reducing key lifetimes, expanding automation support, and integrating certificate transparency.

        None of that exists for DNS, largely by design.

      • cryptonector2 days ago
        Note that DNSSEC is a PKI, but it's fantastically better than the WebPKI because a) you get a single root CA, b) you can run your own private root CA (by running your own `.`), c) if clients did QName minimization then the CAs wouldn't easily know when it's interesting to try to MITM you. Oh, and DNS has name constraints naturally built in, while PKIX only has them as an extension that no one implements.

        The only real downsides are that DNSSEC doesn't have CT yet (that'd be nice), this adds latency, and larger DNS messages can be annoying.

        • tptaceka day ago
          The single root CA makes it fantastically worse, not better. DNSSEC will never get CT, because no entity in the world has the leverage to make that happen. The whole point of CT is that no WebPKI entity can opt out of it.
      • growse2 days ago
        Who do you think signs the DNSSEC root?
        • cryptonector2 days ago
          They can't be MITMing people left and right without getting caught. Maybe getting caught is not a problem, but still. And if you use query name minimization[0] then it gets harder for the root CA and any intermediates but the last one to decide whether to MITM you. And you can run your own root for your network.

          [0] QName minimization means that if you're asking for foo.bar.baz.example., you'll ask . for example., then you'll ask example. for baz.example., and so on, detecting all the zone cuts yourself, as opposed to sending the full foo.bar.baz.example. query to ., then to example., and so on. If you minimize the query, then . doesn't get to know anything other than the TLD you're interested in, which is not much of a clue as to whether an evil . should MITM you. Now, because most domain names of interest have only one or two intermediate zones (a TLD or a ccTLD and one below that), and because those intermediates are also run by parties similar to the one that runs the root, you might still fear MITMing.

          But you can still use a combination of WebPKI and DANE, in which case the evil DNSSEC CAs would have to collaborate with some evil WebPKI CA.

          Ultimately though DNSSEC could use having CT.
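
          To make the query pattern in [0] concrete, a toy sketch that lists which name each zone gets to see (simplifying by assuming every label boundary is a zone cut; a real resolver discovers the cuts as it goes):

          ```go
          package qmin

          import "strings"

          // Step is one query a QName-minimizing resolver would send.
          type Step struct {
              AskZone  string // the zone (nameserver set) being queried
              Question string // the name revealed to that zone
          }

          // MinimizedSteps lists the successive queries for a fully-qualified name.
          // For "foo.bar.baz.example." it returns: ask "." about "example.", then
          // "example." about "baz.example.", and so on, so the root only ever sees
          // the TLD.
          func MinimizedSteps(fqdn string) []Step {
              labels := strings.Split(strings.TrimSuffix(fqdn, "."), ".")
              steps := make([]Step, 0, len(labels))
              zone := "." // start at the root
              for i := len(labels) - 1; i >= 0; i-- {
                  question := strings.Join(labels[i:], ".") + "."
                  steps = append(steps, Step{AskZone: zone, Question: question})
                  zone = question
              }
              return steps
          }
          ```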

          • tptaceka day ago
            They can absolutely MITM people left and right without getting caught.
      • immibisa day ago
        Your registrar would be able to MITM your website - and prevent you from noticing, since there's no Certificate Transparency in this case.
      • Spivak2 days ago
        You don't even need DNSSEC because CAs will happily issue you a cert without it.
      • mycall2 days ago
        Brave should try an implementation as it fits their paradigm.
        • lxgr2 days ago
          How so?
    • dadrian2 days ago
      The Tor service model is equivalent to every site using a self-signed certificate, which doesn't scale.

      The more feasible CA-free architecture is to have the browser operator perform domain validation and counter-sign every site's key, but that has other downsides and is arguably even less distributed.

      • jeroenhd2 days ago
        The Tor system does scale, as Tor itself proves. Tor just lacks domain names altogether and reuses public keys for site identification instead.

        Is the tor node you're accessing the real Facebook or just a phishing page intercepting your credentials? Better check if the 60 character domain name matches the one your friend told you about!

        I don't think putting any more power in browser vendors is the right move. I'd rather see a DNSSEC overhaul to make DANE work.

    • cryptonector2 days ago
      This is nonsense. The introduction problem can't just go away, and trust meshes, PKIs, and other schemes are all not panaceas.

      Why on Earth would I trust some "site owner" (who are they? how do I authenticate them?) to operate an "onion service" securely and without abusing me? Do you not have a circular reasoning problem here?

      > All the traffic is always encrypted and you don't have to trust anyone for it

      Sure you do! You have to trust the other end, but since this is onion routing you have to trust many ends.

  • peanut-walrus2 days ago
    By far the most logical default behavior to me is that if the issuing CA was valid and trusted when the cert was issued, the cert should be considered valid until it expires. This of course means you need a trusted timestamping service or I guess CT can fulfill the same purpose.

    For CAs who get distrusted due to reasons which imply their issued certs might not be trustworthy, either revoke the certs individually or use a separate "never trust" list.

    • zahllos2 days ago
      This is what is done for code and document signing, because you want the signed object to continue to be valid even after the certificate expires.

      However, it has a tradeoff: you can never remove a revoked certificate from a CRL. Under normal circumstances certificates expire naturally, so a revoked certificate only needs to stay on the CRL until its expiry; with trusted timestamps the signed object stays valid indefinitely, so the revocation entry must be kept forever.

      • cryptonector2 days ago
        The better thing to do for this case is to only check the code signing certificate's validity at install time rather than every time you run the code. Then you don't have to have boundlessly growing CRLs. In general checking code signatures at run-time is just a perf-killing waste of resources. Ideally secure, measured boot would also measure the OS filesystems you're booting, and that kinda requires a content-addressed storage copy-on-write type filesystem such that you can seal boot/keys to a root hash of the root filesystem, but here we are in 2025 and we don't quite have that yet. ZFS comes closest, though it's not a CAS FS, but somehow the last mile of this never got implemented by anyone in any operating system.
    • egberts12 days ago
      CT is a temporary glue until the keyless Signal protocol takes over TLS.
      • lxgr2 days ago
        What protocol are you referring to?
        • egberts12 days ago
          Signal protocol? Or TLS/SSL?
          • lxgr2 days ago
            Yeah, which Signal protocol are you referring to as a potential alternative to TLS?
            • egberts1a day ago
              keyless Signal protocol, as I said.
      • tptacek2 days ago
        Before I log into Citibank I can just verify their safety number out of band.
  • permo-w2 days ago
    I’m not sure I see the point of these certificates in the first place. you can get literally any website certified in 5 minutes
    • sybercecurity2 days ago
      True - the purpose was to authenticate, not just encrypt. Years ago, every CA had additional tiers of authentication for certificates that included all sorts of ID checks, corporate records, etc. The idea was that businesses would pay extra for a certificate that guaranteed they were the legitimate brand. However, users couldn't easily differentiate between these levels, so there was no point.

      Now people have come to realize a cert basically ties a service to a domain name and that is basically the best you can do in most cases.

      • ClumsyPilot2 days ago
        > The idea was that businesses would pay extra for a certificate that guaranteed they were the legitimate brand

        I really liked that functionality, it made sense to me

        • arccy2 days ago
          Have you seen the corporate names companies use? They're quite opaque and indistinguishable from similar-sounding ones.
        • richwater2 days ago
          I don't. The decision of who is a "legitimate brand" shouldn't be put in the hands of a private company or a bureaucratic government. The concept is ripe for discrimination at pretty much every step.
    • bell-cot2 days ago
      This is often cited as the reason (but there are plenty more):

      https://en.wikipedia.org/wiki/Man-in-the-middle_attack

      If the CAs are doing their jobs...then FirstBank.com can get a cert for their web site. But the gang who rooted a bunch of home routers (or hacked a WiFi setup, or whatever) can't.

      If not...then yeah, that's the problem.

  • lxgr3 days ago
    tl;dr: By using the CT logs as a trusted timestamping service, if I understand it correctly?
    • tptacek3 days ago
      More broadly, because longstanding universal adoption of CT makes it possible for root programs to grandfather in older certificates, so that they can axe a CA without generating SSL errors for all that CA's existing certificates, which is something they couldn't do before.