The salt here is deserved! JSON Web Signatures are a gnarly format, and the ACME API is pretty enthusiastic about being RESTful.
It’s not what I’d design. I think a lot of that came via the IETF wanting to use other IETF standards, and a dash of design-by-committee.
A few libraries (for JWS, JSON and HTTP) go a long way to making it more pleasant but those libraries themselves aren’t always that nice, especially in C.
I’m working on an interactive client and accompanying documentation to help here too, because the RFC language is a bit dense and often refers to other documents too.
They are??
As someone who wallows in ASN.1, Kerberos, and PKI, I don't find JWS so "gnarly". Even if you're open-coding a JSON Web Signature, it will be easier than open-coding S/MIME, CMS, Kerberos, etc. Can you explain what is so gnarly about JWS?
Mind you, there are problems with JWT. Mainly that HTTP user-agents don't know how to fetch the darned things, because there is no standard for how to find out how to fetch the darned things, when you should honor a request for them, etc.
"Somehow, a couple of weeks ago, I found this other site which claimed to be better than LE and which used relatively simple HTTP requests without a bunch of funny data types."
"This is when the fine print finally appeared. This service only lets you mint 90 day certificates on the free tier. Also, you can only do three of them. Then you're done. 270 days for one domain or 3 domains for 90 days, and then you're screwed. Isn't that great? "
She doesn't mention what this "other site" is.
Oh JSON.
For those unfamiliar with the reason here, it's that JSON parsers cannot be relied upon to treat numbers properly. Is 4723476276172647362476274672164762476438 a valid JSON number? Yes, of course it is. What will a JSON parser do with it? Probably silently truncate it to a 64-bit or 63-bit integer, or a float, or if you're very lucky emit an error (a good JSON decoder written in a sane language like Common Lisp would of course just return the number, but few of us are so lucky).
So the only way to reliably get large integers into and out of JSON is to encode them as something else. Base64-encoded big-endian bytes is not a terrible choice. Silently doing the wrong thing is the root of many security errors, so it's not wrong to treat every number in the protocol this way. Of course, then one loses the readability of JSON.
JSON is better than XML, but it really isn’t great. Canonical S-expressions would have been far preferable, but for whatever reason the world didn’t go that way.
I feel like not understanding why JSON won out is being intentionally obtuse. JSON can easily be hand written, edited, and read for most data. Canonical S-expressions are not as easy to read and are much harder to write by hand; having to prefix every atom with a length makes it very tedious to write by hand. If you have a JSON object you want to hand edit, you can just type... for a Canonical S-expression, you have to count how many characters you are typing/deleting, and then update the prefix.
You might not think the ability to hand generate, read, and edit is important, but I am pretty sure that is a big reason JSON has won in the end.
Oh, and the Ruby JSON parser handles that large number just fine.
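To make the length-prefix tedium concrete, here's a tiny illustration (mine, not from the thread) of Rivest-style canonical S-expression encoding in Python; the atoms are made up:

  # every atom is written as <byte length>":"<bytes>, so hand-editing a value
  # means recounting and rewriting its prefix
  def canonical_atom(atom: bytes) -> bytes:
      return str(len(atom)).encode() + b":" + atom

  expr = b"(" + canonical_atom(b"dns") + canonical_atom(b"example.org") + b")"
  print(expr)  # b'(3:dns11:example.org)' -- change the name and the "11:" must change too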
I didn’t feel like my comment was the right place to shill for an alternative, but rather to complain about JSON. But since you raise it.
> JSON can easily be hand written, edited, and read for most data.
So can canonical S-expressions!
> Canonical S-expressions are not as easy to read and much harder to write by hand; having to prefix every atom with a length makes is very tedious to write by hand.
Which is why the advanced representation exists. I contend that this:
(urn:ietf:params:acme:error:malformed
(detail "Some of the identifiers requested were rejected")
(subproblems ((urn:ietf:params:acme:error:malformed
(detail "Invalid underscore in DNS name \"_example.org\"")
(identifier (dns _example.org)))
(urn:ietf:params:acme:error:rejectedIdentifier
(detail "This CA will not issue for \"example.net\"")
(identifier (dns example.net))))))
is far easier to read than this (the first JSON in RFC 8555):

{
"type": "urn:ietf:params:acme:error:malformed",
"detail": "Some of the identifiers requested were rejected",
"subproblems": [
{
"type": "urn:ietf:params:acme:error:malformed",
"detail": "Invalid underscore in DNS name \"_example.org\"",
"identifier": {
"type": "dns",
"value": "_example.org"
}
},
{
"type": "urn:ietf:params:acme:error:rejectedIdentifier",
"detail": "This CA will not issue for \"example.net\"",
"identifier": {
"type": "dns",
"value": "example.net"
}
}
]
}
> for a Canonical S-expression, you have to count how many characters you are typing/deleting, and then update the prefix.

As you can see, no you do not.
But, I mean, they're basically isomorphic, with like 2 things exchanged ({} and [] instead of (); implicit vs explicit keys/types).
json.Number is (almost) my “favorite” arbitrary decimal: https://github.com/ncruces/decimal?tab=readme-ov-file#decima...
I'm half joking, but I'm not sure why S-expressions would be better here. There are LISPs that don't do arbitrary precision math.
For RSA-4096, the modulus is 4096 bits = 512 bytes in binary, which (for my test key) is 684 characters in base64 or 1233 characters in decimal. So the base64 version is much smaller.
Base64 is also more efficient to deal with. An RSA implementation will typically work with the numbers in binary form, so for the base64 encoding you just need to convert the bytes, which is a simple O(n) transformation. Converting the number between binary and decimal, on the other hand, is O(n^2) if done naively, or O(some complicated expression bigger than n log n) if done optimally.
Besides computational complexity, there's also implementation complexity. Base conversion is an algorithm that you normally don't have to implement as part of an RSA implementation. You might argue that it's not hard to find some library to do base conversion for you. Some programming languages even have built-in bigint types. But you typically want to avoid using general-purpose bigint implementations for cryptography. You want to stick to cryptographic libraries, which typically aim to make all operations constant-time to avoid timing side channels. Indeed, the apparent ease-of-use of decimal would arguably be a bad thing since it would encourage implementors to just use a standard bigint type to carry the values around.
You could argue that the same concern applies to base64, but it should be relatively safe to use a naive implementation of base64, since it's going to be a straightforward linear scan over the bytes with less room for timing side channels (though not none).
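As a rough sketch (mine) of the size comparison above, using Python's int/base64 machinery and a made-up 4096-bit value standing in for a real modulus:

  import base64

  n = (1 << 4095) | 12345  # placeholder for a 4096-bit RSA modulus

  raw = n.to_bytes((n.bit_length() + 7) // 8, "big")    # 512 bytes of big-endian binary
  b64 = base64.urlsafe_b64encode(raw).rstrip(b"=")      # base64url, no padding
  dec = str(n)                                          # decimal digits

  print(len(raw), len(b64), len(dec))  # 512, 683, 1233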
Now, S-expressions as used for programming languages such as Lisp do have numbers, but again Lisp has bignums. As for parsers of Lisp S-expressions written in other languages: if they want to comply with the standard, they need to support bignums.
I'd be happy to use s-expressions instead :) Though to GP's point, I suppose we might then end up with JS s-expression parsers that still treat ints and floats interchangeably.
Python 3.13.3 (main, May 21 2025, 07:49:52) [GCC 14.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import json
>>> json.loads('47234762761726473624762746721647624764380000000000000000000000000000000000000000000')
47234762761726473624762746721647624764380000000000000000000000000000000000000000000
>>> import json, decimal
>>> j = "47234762761726473624762746721647624764380000000000000000000000000000000000000000000"
>>> json.loads(j, parse_float=decimal.Decimal, parse_int=decimal.Decimal)
Decimal('47234762761726473624762746721647624764380000000000000000000000000000000000000000000')

This way you avoid this problem:

>>> import json
>>> j = "0.47234762761726473624762746721647624764380000000000000000000000000000000000000000000"
>>> json.loads(j)
0.47234762761726473

And instead can get:

>>> import json, decimal
>>> j = "0.47234762761726473624762746721647624764380000000000000000000000000000000000000000000"
>>> json.loads(j, parse_float=decimal.Decimal, parse_int=decimal.Decimal)
Decimal('0.47234762761726473624762746721647624764380000000000000000000000000000000000000000000')
But yea, as a Clojure guy sexprs or EDN would be much better.
It's a shame JSON parsers usually default to performance rather than correctness (i.e. parsing numbers into bignums).
That sentence has four negations and I honestly can't figure out what it means.
> This specification allows implementations to set limits on the range and precision of numbers accepted
JSON is a terrible interoperability standard.
Converting that text to _any_ kind of numerical value is outside the scope of the specification. (At least the JSON.org specification, the RFC tries to say more.)
As a textual format, when you use it for data interchange between different platforms, you should ensure that the endpoints agree on the _interpretation_, otherwise they won't see the same data.
Again outside of the scope of the JSON specification.
I also wrote up a digested description of the issuance flow here: https://www.arnavion.dev/blog/2019-06-01-how-does-acme-v2-wo... It's not a replacement for reading the RFCs, but it presents the information in the sequence that you would follow for issuance, so think of it like an index to the RFC sections.
(6.858 is the old name of the class, it was renamed to 6.5660 recently.)
I know it isn't a skill issue because of who the author is. So I can only imagine it is some sort of personal opinion that they dislike ACME as a concept or the tooling around ACME in general.
We've been using LE for a while (since 2019 I think) for a handful of sites, and the best nonsense client _for us_ was https://github.com/do-know/Crypt-LE/releases.
Then this year we've done another piece of work this time against the Sectigo ACME server and le64 wasn't quite good enough.
So we ended up trying:-
- https://github.com/certbot/certbot on GitHub Actions, it was fine but didn't quite like the locked down environment
- https://github.com/go-acme/lego huge binary, the CLI was interestingly designed, and the maintainer was quite rude when we raised an issue
- https://github.com/rmbolger/Posh-ACME our favourite, but we ended up going with certbot on GHA once we fixed the weird issues around permissions
Edit: Re-read it. The tone isn't aimed at ACME or the clients. It's the spec itself. ACME idea good, ACME implementation bad.
> ACME idea good, ACME implementation bad.
Maybe I'm misreading but it sounds like you're on a similar page to the author.
As they said at the top of the article:
> Many of the existing clients are also scary code, and I was not about to run any of them on my machines. They haven't earned the right to run with privileges for my private keys and/or ability to frob the web server (as root!) with their careless ways.
This might seem harsh, but I think it's a pretty fair perspective to have when running security-sensitive processes.
To implement that, many clients run as root. Even if that root is in a docker container, this is needlessly elevated privileges, especially given the complexity (again, needless) of many clients.
The sad part is that it is trivial to run most of the clients with an account with no privileges that can access very few files and use a unix socket to tell the web server to reload the certificate. But this is not done.
And then, ideally, web servers should, if not implement, then at least facilitate ACME protocol implementations, for example by redirecting validation requests from ACME servers to another port with a one-liner in the config. But this is not the case.
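For what it's worth, the split described above isn't much code. A minimal sketch (mine; paths, permissions and the nginx reload command are assumptions) of a root-owned reload helper listening on a unix socket, with the ACME client running unprivileged and only poking the socket after writing new cert files:

  import os
  import socket
  import subprocess

  SOCK = "/run/cert-reload.sock"

  def helper():
      # runs privileged; all it can be asked to do is reload the web server
      if os.path.exists(SOCK):
          os.unlink(SOCK)
      srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
      srv.bind(SOCK)
      os.chmod(SOCK, 0o660)  # group-accessible so the unprivileged acme user can connect
      srv.listen(1)
      while True:
          conn, _ = srv.accept()
          if conn.recv(16) == b"reload":
              subprocess.run(["nginx", "-s", "reload"], check=False)
          conn.close()

  def notify_reload():
      # called by the unprivileged ACME client after it writes the new certificate
      c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
      c.connect(SOCK)
      c.sendall(b"reload")
      c.close()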
Which I do understand. Although I use Docker, I mainly use it personally for things I don’t want to spend much time on. I don’t really like it over other alternatives, but it makes standing up a lab service stupidly easy.
I run acme in a non privileged jail whose file system I can access from outside the jail.
So acme sees and accesses nothing and I can pluck results out with Unix primitives from the outside.
Yes, I use dns mode. Yes, my dns server is also a (different) jail.
Whether it's a local binary or a dockerised one, that access still needs to be marshalled either way & it can get complex facilitating that with a docker container. I haven't found it too bad but I'd really rather not need docker for on-demand automations.
I give plenty* of services root access to my system, most of which I haven't written myself & I certainly haven't audited their code line-by-line, but I agree with the author that you do get a sense from experience of the overall hygiene of a project & an ACME client has yet to give me good vibes.
* within reason
She doesn't "trust" tooling that basically the entire Internet including major security-conscious organizations are using, essentially letting perfect get in the way of good.
I think if she were a less capable engineer she would just set that shit up using the easiest way possible and forget about it like everyone else, and nothing bad would happen. Download nginx proxy manager, click click click, boom I have a wildcard cert, who cares?
I mean, this is her https site, which seems to just be a blog? What type of risk is she mitigating here?
Essentially the author is so skilled that she's letting perfect get in the way of good.
I haven't thought about certificates for years because it's not worth my time. I don't really care about the tooling, it's not my problem, and it's never caused a security issue. Put your shit behind a load balancer and you don't even need to run any ACME software on your own server.
The older posts on the same website provided a bit more context for me to understand today's post better:
- "Why I still have an old-school cert on my https site" - January 3, 2023 - https://rachelbythebay.com/w/2023/01/03/ssl/
- "Another look at the steps for issuing a cert" - January 4, 2023 - https://rachelbythebay.com/w/2023/01/04/cert/
Sadly, security is a cat and mouse game, which means it's always evolving and you're forced to keep up - and it's inherent by the nature of the field, so we can't really blame anyone (unlike, say, being forced to integrate with the latest Google services to be allowed on the Play Store). At least you get to write your own ACME client if you want to. You don't have to use certbot, and there's no TPM-like behaviour locking you out of your own stuff.
Browser vendors at some point claimed it confused users and removed the highlight (I think the same browser vendors who try to remove the "confusing" URL bar ...)
Aside from that EV certificates are slow to issue and phishers got similar enough EV certs making the whole thing moot.
This is really important to understand if you care about either: Actually engineering security at some scale or knowing what's actually going on in order to model it properly in your head.
If you just want to make a web site so you can put up a blog about your new kitten, any of the tools is fine, you don't care, click click click, done.
For somebody like Rachel or many HN readers, knowing enough of the technology to understand that the ACME client needn't run on your web servers is crucial. It also means you know that when some particular client you're evaluating needs to run on the web server that it's a limitation of that client not of the protocol - birds can't all fly, but flying is totally one of the options for birds, we should try an eagle not an emu if we want flying.
It's not just about not understanding, it's that more complex stuff is inherently more prone to security vulnerabilities, however well you think you reviewed its code.
That's overly simplifying it and ignores the part where the simple stuff is not secure to begin with.
In the current context you could take a HTTP client with a formally verified TLS stack, would you really say it's inherently more vulnerable than a barebones HTTP client talking to a server over an unencrypted connection? I'd say there's a lot more exposed in that barebones client.
Of course plain http would be, generally, much more dangerous than an encrypted connection, however complex.
Honest question:
* Do you understand OS syscalls in detail?
* Do you understand how your BIOS initializes your hardware?
* Do you understand how modern filesystems work?
* Do you understand the finer details of HTTP or TCP?
Because... I don't. But I know enough about them that I'm quite convinced each of them is a lot more difficult to understand than ACME. And all of them and a lot more stuff are required if you want to run a web server.
Each extra bit of software is an additional attack surface after all
If you're a fan of left-pad I won't judge but don't expect me to partake without bitter complaints.
Perhaps the author wasn't looking hard enough. It could probably be ported with little effort.
This client really wants the easy case where the client lives on the machine which owns the name and is running the web server, and then it uses OpenBSD-specific partitioning so that elements of the client can't easily taint one another if they're defective
But, the ACME protocol would allow actual air gapping - the protocol doesn't care whether the machine which needs a certificate, the machine running an ACME client, and the machine controlling the name are three separate machines, that's fine, which means if we do not use this OpenBSD all-in-one client we can have a web server which literally doesn't do ACME at all, an ACME client machine which has no permission to serve web pages or anything like that, and name servers which also know nothing about ACME and yet the whole system works.
That's more effort than "I just install OpenBSD" but it's how this was designed to deliver security rather than putting all our trust in OpenBSD to be bug-free.
Most software in the OpenBSD base system lacks features on purpose. Their dev team frequently rejects patches and feature requests without compelling reasons to exist. Less features means less places for things to go wrong means less chance of security bugs.
It exists so their simple webserver (also in the base system) has ACME support working out of the box. No third party software to install, no bullshit to configure, everything just works as part of a super compact OS. Which to this day still fits on a single CD-ROM.
Most of all no stupid Rust compiler needed so it works on i386 (Rust cannot self-host on i386 because it's so bloated it runs out of memory, which is why Rust tools are not included in i386).
If your needs exceed this or you adore complexity then feel free to look elsewhere.
I know ACME alone is not insurmountably complex, but it is another brick in the wall.
Kind of makes me wonder what kind of stack her website is running on that something like a lightweight ACME library (https://github.com/jmccl/acme-lw comes to mind, but there's a C++ library for ESP32s that should be even more lightweight) loading in the certificates isn't doing the job.
The problem is, SSL is a fucking hot, ossified mess. Many of the noted core issues, especially the weirdnesses around encoding and bitfields, are due to historical baggage of ASN.1/X.509. It's not fun to deal with it, at all... the math alone is bad enough, but the old abstractions to store all the various things for the math are simply constrained by the technological capabilities of the late '80s.
There would have been a chance to at least partially reduce the mess with the introduction of LetsEncrypt - basically, have the protocol transmit all of the required math values in a decent form and get an x.509 cert back - and HTTP/2, but that wasn't done because it would have required redeveloping a bunch of stuff from scratch whereas one can build an ACME CA with, essentially, a few lines of shell script, OpenSSL and six crates of high proof alcohol to drink away one's frustrations of dealing with OpenSSL, and integrate this with all software and libraries that exist there.
ASN.1 and X509 aren't all that bad. It's a comprehensively documented binary format that's efficient and used everywhere, even if it's hidden away in binary protocols you don't look at every day.
Unlike what most people seem to think, ACME isn't something invented just for Let's Encrypt. Let's Encrypt was certainly the first high-profile CA to implement the protocol, but various CAs (free and paid) have their own ACME servers and have had them for ages now. It's a generic protocol for certificate authorities to securely do domain validation and certificate provisioning that Let's Encrypt implemented first.
The unnecessarily complex parts of the protocol when writing a from-the-ground-up client are complex because ACME didn't reinvent the wheel, and reused existing standard protocols instead. Unfortunately, that means having to deal with JWS, but on the other hand, it means most people don't need to write their own ACME-JWS-replacement-protocol parsers. All the other parts are complex because the problem ACME is solving is actually quite complex.
The author wrote another post (https://rachelbythebay.com/w/2023/01/03/ssl/) about the time they fell for the lies of a CA that promised an "easier" solution. That solution is pretty much ACME, but with more manual steps (like registering an account, entering domain names).
I personally think that for this (and for many other protocols, to be honest) XML would've been a better fit as its parsers are more resilient against weird data, but these days talking about XML will make people look at you like you're proposing COBOL. Hell, even exchanging raw, binary ASN.1 messages would probably have gone over pretty well, as you need ASN.1 to generate the CSR and request the certificate anyway. But people chose "modern" JSON instead, so now we're base64 encoding values that JSON parsers will inevitably fuck up instead.
For instance, Whatsapp can not open HTTP links anymore.
TLS isn't for you, it's for your readers.
I see no other reason to serve content over HTTPS.
The reason you don't see many MITM boxes injecting content into HTTP anymore is because of widespread HTTPS adoption and browsers taking steps to distrust HTTP, making MITM injection a near-useless tactic.
(This rhymes with the observation that some people now perceive Y2K as overhyped fear-mongering that amounted to nothing, without understanding that immense work happened behind the scenes to avert problems.)
And can generally be configured by the user not to downgrade to http without an explicit prompt.
Honestly I disagree with the refusal to support various APIs over http. Making the (configurable last I checked) prompt mandatory per browser session would have sufficed to push all mainstream sites to strictly https.
Absolutely, and this works quite well on the current web.
> Honestly I disagree with the refusal to support various APIs over http.
There are multiple good reasons to do so. Part of it is pushing people to HTTPS; part of it is the observation that if you allow an API over HTTP, you're allowing that API to any attacker.
In the scenario I described you're doing that only after the user has explicitly opted in on a case by case basis, and you're forcing a per-session nag on them in order to coerce mainstream website operators to adopt the secure default.
At that point it's functionally slightly more obtuse than adding an exception for a certificate (because those are persistent). Rejecting the latter on the basis of security is adopting a position that no amount of user discretion is acceptable. At least personally I'm comfortable disagreeing with that.
More generally, I support secure defaults but almost invariably disagree with disallowing users to shoot themselves in the foot. As an example, I expect a stern warning if I attempt to uninstall my kernel but I also expect the software on my device to do exactly what I tell it to 100% of the time regardless of what the developers might have thought was best for me.
I agree with this. But also, there is a strong degree to which users will go track down ways (or follow random instructions) to shoot themselves in the foot if some site they care about says "do this so we can function!". I do think, in cases where there's value in collectively pushing for better defaults, it's sometimes OK for the "I can always make my device do exactly what I tell it to do" escape hatch to be "download the source and change it yourself". Not every escape hatch gets a setting, because not every escape hatch is supported.
If I'm really curious about your plain http site I'll check it out through archive.org, and I'm definitely not going to keep visiting it frequently.
It's been easy to live with forced https for at least five years (and for at least the last ten with https first, with confirmations for plain http).
This was my experience sending a link to someone who primarily uses an iPad and is non-technical. They were not going to find/open their Macbook to see the link.
I'll give you my 8600 when you pry it from my cold, dead LAN.
Edit: to be clear, I'd not be too surprised if their homegrown client survives an audit unscathed, I'm sure they're a great coder, but the odds just don't seem better than the alternative of using an existing client that was already audited by professionals as well as other people
The steps described in the article sound familiar to the process done in the early 2000's, but I'm not sure why you'd want to make it hard for yourself now.
I use certbot with "--preferred-challenges dns-01" and "--manual-auth-hook" / "--manual-cleanup-hook" to dynamically create DNS records, rather than needing to modify the webserver config (and the security/access risks that comes with). It just needs putting the cert/key in the right place and reloading the webserver/loadbalancer.
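For anyone curious what those hooks look like, here's a hedged sketch (mine) of a --manual-auth-hook script; certbot passes the domain and validation token via the CERTBOT_DOMAIN and CERTBOT_VALIDATION environment variables, and create_txt_record() is a placeholder for whatever DNS provider API you actually call:

  #!/usr/bin/env python3
  import os

  def create_txt_record(name: str, value: str) -> None:
      # placeholder: call your DNS provider's API (or nsupdate) here
      print(f"would create TXT {name} = {value}")

  def main() -> None:
      domain = os.environ["CERTBOT_DOMAIN"]        # e.g. example.org
      token = os.environ["CERTBOT_VALIDATION"]     # value the ACME server expects to find
      create_txt_record(f"_acme-challenge.{domain}", token)

  if __name__ == "__main__":
      main()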
What registrar do people recommend in 2025?
Don’t look to large, well-known registrars. I would suggest that you look for local registrars in your area. The TLD registry for your country/area usually has a list of the authorized registrars, so you can simply search that for entities with a local address.
Disclaimer: I work at such a small registrar, but you are probably not in our target market.
I have built a registrar in the past and have a lot of arcane knowledge about how they work. Just need to figure out a way to monetize!
Must be other good ones? Somewhat prefer something in the UK (but have been using Gandi so it's not essential).
INWX in Germany also seems well regarded but I haven't used them.
Does anyone know why they're there?
There are private keys and hash functions involved. But base64url and json aren't the worst web crimes to have been inflicted upon us. It's not _that_ bad, is it?
But "the rest" of ACME also include X.509 certificates and PKCS#10 Certificate Signing Requests, which are in turn based on ASN.1 (you're fortunate enough you only need DER encoding) and RSA parameters. ASN.1 and X.509 are devilishly complex if you don't let openssl do everything for you and even if you do. The first few paragraphs are all about making the correct CSR and dealing with RSA, and encoding bigints the right way (which is slightly different between DER and JWK to make things more fun).
Besides that I don't know much about the ACME spec, but the post mentions a couple of other things :
So far, we have (at least): RSA keys, SHA256 digests, RSA signing, base64 but not really base64, string concatenation, JSON inside JSON, Location headers used as identities instead of a target with a 301 response, HEAD requests to get a single value buried as a header, making one request (nonce) to make ANY OTHER request, and there's more to come.
This does sound quite complex. I'm just not sure how much simpler ACME could be. Overturning the clusterfuck that is ASN.1, X.509 and the various PKCS#* standards has been a lost cause for decades now. JOSE is something I would rather do without, but if you're writing an IETF RFC, your only other option is CMS[1], which is even worse. You can try to offer a new signature format, but that would be shut down for being "simpler and cleaner than JOSE, but JOSE just has some warts that need to be fixed or avoided"[2].
I think the things you're left with that could have been simplified and accepted as a standard are the APIs themselves, like getting a nonce with a HEAD request and storing identifiers in a Location header. Perhaps you could have removed signatures (and then JOSE) completely and rely on client IDs and secrets since we're already running over TLS, but I'm not familiar enough with the protocol to know what would be the impact. If you really didn't need any PKI for the protocol itself here, then this is a magnificent edifice of overengineering indeed.
[1] https://datatracker.ietf.org/doc/html/rfc5652 [2] https://mailarchive.ietf.org/arch/msg/cfrg/4YQH6Yj3c92VUxqo-...
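To make the quoted grab-bag a bit more concrete, here's a rough sketch (mine) of the shape of an ACME request body per RFC 8555: a flattened JWS whose protected header carries the nonce and URL, with the payload as base64url'd JSON inside JSON. sign() stands in for your actual RSA/ECDSA signing code:

  import base64, json

  def b64url(b: bytes) -> str:
      return base64.urlsafe_b64encode(b).rstrip(b"=").decode()

  def acme_body(url, nonce, kid, payload, sign):
      protected = b64url(json.dumps({"alg": "ES256", "kid": kid,
                                     "nonce": nonce, "url": url}).encode())
      body = b64url(json.dumps(payload).encode())
      signature = b64url(sign(f"{protected}.{body}".encode()))
      return json.dumps({"protected": protected, "payload": body,
                         "signature": signature})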
Most of it is unused though, only CN, SANs and public key are used.
Issuer: We need to know who issued this cert, then we can check whether we trust them and whether the signature on the certificate is indeed from them, and potentially repeat this process - this cert was issued by Let's Encrypt's E5 intermediate
Validity: We need to know when this cert was or will be valid, a perfectly good certificate for 2019 ain't much good now, this one is valid from early May until early August
Now we get a public key, in this case a nice modern elliptic curve P-256 key
We need to know how the signature works, in this case it's ECDSA with SHA-384
And we need a serial number for the certificate, this unique number helps sidestep some nasty problems and also gives us an easy shorthand if reporting problems, 05:6B:9D:B0:A1:AE:BB:6D:CA:0B:1A:F0:61:FF:B5:68:4F:5A will never be any other cert only this one.
We get a mandatory notice that this particular certificate is NOT a CA certificate, it's just for a web server, and we get the "Extended key use" which says it's for either servers or for clients (Let's Encrypt intends to cease offering "for client" certificates in the next year or so, today they're the default)
Then we get a URL for the CRL where you can find out if this certificate (or others like it) were revoked since issuance, info with a URL for OCSP (also going away soon) and a URL where you can get your own copy of the issuer's certificate if you somehow do not have that.
We get a policy OID, this is effectively a complicated way to say "If you check Let's Encrypt's formal policy documents, this certificate was specifically issued under the policy identified with this OID", these do change but not often.
Finally we get two embedded SCTs, these are proof that two named Certificate Transparency Log services have seen this certificate, or rather, the raw data in the certificate, although they might also have the actual certificate.
So, quite a lot more than you listed.
[A correct decoder also needs to actually verify the signature, I did not list that part, obviously ignoring the signature would be a bad idea for a live system as then anybody can lie about anything]
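If you'd rather poke at those fields programmatically than read openssl output, here's a small sketch (mine, using a recent pyca/cryptography; the file path is an assumption) that pulls out a few of them:

  from cryptography import x509

  with open("cert.pem", "rb") as f:
      cert = x509.load_pem_x509_certificate(f.read())

  print("issuer:  ", cert.issuer.rfc4514_string())
  print("validity:", cert.not_valid_before_utc, "->", cert.not_valid_after_utc)
  print("serial:  ", format(cert.serial_number, "x"))
  san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
  print("SANs:    ", san.value.get_values_for_type(x509.DNSName))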
The spec (well, the RFC anyway) is indeed classically RFC-ish, but the same applies to HTTP or TCP/IP, and I haven't seen the same sort of complaints about those. Maybe it's just resistance to change? Most of the specs (JOSE, ACME etc) aren't really complex for the sake of complexity, but solve problems that aren't simple to solve in a simple fashion. I don't think that's bad at all, it's mostly indicative of the complexity of the problem we're solving.
Some examples of gratuitous complexity:
1. Supporting too many goddamn algorithms. Keeping RSA and HMAC-SHA256 for legacy-compatible stuff, and Ed25519 and XChaCha20-Poly1305 for regular use, would have been better. Instead we support both RSA with PKCS#1 v1.5 signatures and RSA-PSS with MGF1, as well as ECDH with every possible curve in theory (in practice only 3 NIST prime curves).
2. Plethora of ways to combine JWE and JWS. You can encrypt-then-sign or sign-then-encrypt. You can even create multiple layers of nesting.
3. Different "typ"s in the header.
4. RSA JWKs can specify the d, p, q, dq, dp and qi values of the RSA private key, even though everything can be derived from "p" and "q" (and the public modulus and exponent "n" and "e").
5. JWE supports almost every combination of key encryption algorithm, content encryption algorithm and compression algorithm. To make things interesting, almost all of the options are insecure to a certain degree, but if you're not an expert you wouldn't know that.
6. Oh, and JWE supports password-based key derivation for encryption.
7. On the other, JWS is smarter. It doesn't need this fancy shmancy password-based key derivation thingamajig! Instead, you can just use HMAC-SHA256 with any key length you want. So if you fancy encrypting your tokens with a cool password like "secret007" and feel like you're a cool guy with sunglasses in a 1990s movie, just go ahead!
This is just some of the things off the top of my head. JOSE is bonkers. It's a monument to misguided overengineering. But the saddest thing about JOSE is that it's still much simpler than the standards which predated it: PKCS#7/CMS, S/MIME and the worst of all - XMLDSig.
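To make point 7 above concrete, here's a minimal sketch (mine) of a compact JWS HS256 token signed with exactly that kind of key; nothing in the spec stops you:

  import base64, hmac, hashlib, json

  def b64url(b: bytes) -> bytes:
      return base64.urlsafe_b64encode(b).rstrip(b"=")

  header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
  payload = b64url(json.dumps({"sub": "cool guy with sunglasses"}).encode())
  signing_input = header + b"." + payload
  sig = b64url(hmac.new(b"secret007", signing_input, hashlib.sha256).digest())
  print(b".".join([header, payload, sig]).decode())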
Take your argument about order of operations or algorithms. Just because you might not need to do it in an alternate order or use a legacy (and broken) algorithm doesn't mean nobody else does. Keep in mind that this standard isn't exactly new, and isn't only used in startups in San Francisco. There are tons of systems that use it that might only get updated a handful of times each year. Or long-lived JWTs that need to be supported for 5 years. Not going to replace hardware that is out on a pole somewhere just because someone thought the RFC was too complicated.
Out of your arguments, none of them require you to do it that way. Example: you don't have to supply d, dq, dp or qi if you don't want to. But if you communicate with some embedded device that will run out of solar power before it can derive them from the RSA primitives, you will definitely help it by just supplying it on the big beefy hardware that doesn't have that problem. It allows you to move energy and compute cost wherever it works best for the use case.
Even simpler: if you use a library where you can specify an RSA key and a static ID, you don't have to think about any of this; it will do all of it for you and you wouldn't even know about the RFC anyway.
The only reason someone would need to know the details is if you don't use a library or if you are the one writing it.
Subject Alternative Name (SAN) is not an alternative in the sense of being an alias. SANs exist because the X.509 certificate standard is, as its name might suggest, intended for the X.500 directory system, a system from the 20th century which was never actually deployed. Mozilla (back then the Netscape Corporation) didn't like re-inventing wheels, and this standard for certificates already existed, so they used it in their new "Secure Sockets" technology. But it has no Internet names, so at first they just put names in plain text. However, X.500 was intended to be infinitely extensible, so we can just invent an alternative naming scheme, and that's what the SANs are, which is why they're mandatory for certificates in the Web PKI today - these are the Internet's names for things, so they're mandatory when talking about the Internet. They're described in detail in PKIX, the IETF document standardising the use of X.500 for the Internet.
There are several types of name we can express as SANs but in a certificate the two you'll commonly see are dnsName - the same ASCII names you'd see in URLs like "news.ycombinator.com" or "www.google.com" and ipAddress - a 32-bit integer typically spelled as four dotted decimals 10.20.30.40 [yes or an IPv6 128-bit integer will work here, don't worry]
Because the SANs aren't just free text a machine can reliably parse them which would doubtless meet Rachel's approval. The browser can mindlessly compare the bytes in the certificate "news.ycombinator.com" with the bytes in the actual DNS name it looked up "news.ycombinator.com" and those match so this cert is for this site.
With free text in a CN field like a 1990s SSL certificate (or, sadly, many certificates well into the 2010s because it was difficult to get issuers to comply properly with the rules and stop spewing nonsense into CN) it's entirely possible to see a certificate for " 10.200.300.400" which well, what's that for? Is that leading space significant? Is that an IP address? But those numbers don't even fit in one byte each I hope our parser copes!
You can’t mindlessly compare the bytes of the host name: you have to know that it’s the presentation format of the name, not the DNS wire format; you have to deal with ASCII case insensitivity; you have to guess what to do about trailing dots (because that isn’t specified); you have to deal with wildcards (being careful to note that PKIX wildcard matching is different from DNS wildcard matching).
It’s not as easy as it should be!
The names PKIX writes into dnsName are exactly the same as the hostnames in DNS. They are defined to always be Fully Qualified, and yet not to have a trailing dot, you don't have to like that but it's specified and it's exactly how the web browsers worked already 25+ years ago.
You're correct that they're not the on-wire label-by-label DNS structure, but they are the canonical human readable DNS name, specifically the Punycode encoded name. So [the website] https://xn--j1ay.xn--p1ai/ (the Russian registry, which most browsers will display in Cyrillic) has its names stored in certificates the same way as it is handled in DNS, as Punycode "xn--j1ay.xn--p1ai". In software I've seen, the label-by-label encoding stuff tends to live deep inside DNS-specific code, but the DNS name needed for comparing with a certificate does not do this.
You don't need to "deal with" case except in the sense that you ignore it, DNS doesn't handle case, the dnsName in SANs explicitly doesn't carry this, so just ignore the case bits. Your DNS client will do the case bit wiggling entropy hack, but that's not in code the certificate checking will care about.
You do need to care about wildcards, but we eliminated the last very weird certificate wildcards because they were minted only by a single CA (which argued by their reading they were obeying PKIX) and that CA is no longer in business 'cos it turns out some of the stupid things they were doing even a creative lawyerly reading of specifications couldn't justify. So the only use actually enabled today is replacing one DNS label at the front of the name. Nothing else is used, no suffixes, no mid-label stuff, no multi-label wildcards, no labels other than the first.
Edited to better explain the IDN situation hopefully
The DNS is case-insensitive, though only for ASCII. So you have to compare names case-insensitively (again, for ASCII). It _is_ possible to have DNS servers return non-lowercase names! E.g., way back when sun.com's DNS servers would return Sun.COM if I remember correctly. So you do have to be careful about this, though if you do a case-sensitive, memcmp()-like comparison, 999 times out of 1,000 everything will work, and you won't fail open when it doesn't.
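A rough sketch (mine) of the ASCII-only case folding described above: lowercase only the bytes A-Z before comparing, and don't reach for locale-aware str.lower():

  def dns_names_equal(a: str, b: str) -> bool:
      def fold(name: str) -> bytes:
          raw = name.rstrip(".").encode("ascii")  # assumption: treat a trailing dot as absent
          return bytes(c + 32 if 0x41 <= c <= 0x5A else c for c in raw)
      return fold(a) == fold(b)

  assert dns_names_equal("Sun.COM", "sun.com")
  assert not dns_names_equal("example.org", "example.net")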
Yes, all the popular browsers require this.
> they certainly didn't even as of ~10 years ago?
That's true, ten years ago it was likely that if a browser required this they would see unacceptably high failure rates because CAs were non-compliant and enforcement wasn't good enough. Issuing certs which would fail PKIX was prohibited, but so is speeding and yet people do that every day. CT improved our ability to inspect what was being issued and monitor fixes.
> Yes, it is "required", but CN only has worked for quite some time.
No trusted CA will issue "CN only" for many years now, if you could obtain such a certificate you'd find it won't work in any popular browser either. You can read the Chromium or Mozilla source and there just isn't any code to look in CN, the browser just parses the SANs.
> I find this tricks up some IT admins who are still used to only supplying a CN and don't know what a SAN is.
In most cases this is a sign you're using something crap like openssl's command line to make CSRs, and so you're probably expending a lot of effort filling out values which will be ignored by the CA and yet not offered parameters you did need.
As you noted about OpenSSL, Windows CertSvr will allow you to do CN only, too.
Chromium published an "intent to remove" and then actually removed the CN parsing in 2017, at that point EnableCommonNameFallbackForLocalAnchors was available for people who were still catching up to policy from ~15 years ago. The policy override flag was removed in 2018, after people had long enough to fix their shit.
Mozilla had already made an equivalent change before that, maybe it worked for a few more years in Safari? I don't have a Mac so no idea.
Why I still have an old-school cert on my HTTPS site - https://news.ycombinator.com/item?id=34242028 - Jan 2023 (63 comments)
Part of not wanting to let go is the sunk cost fallacy. Part of it is being suspicious of being (more) dependent on someone else (than you are already dependent on a different someone else).
(As an aside, the n-gate guy who ranted against HTTPS in general and thought static content should just be HTTP also thought like that. Unfortunately, as I'm at a sketchy cafe using their wifi, his page currently says I should click here to enter my bank details, and I should download new cursors, and oddly doesn't include any of his own content at all. Bit weird, but of course I can trust he didn't modify his page, and it's just a silly unnecessary imposition on him that I would like him to use HTTPS)
Unfortunately for those rugged individuals, you're in a worldwide community of people who want themselves, and you, to be dependent on someone else. We're still going with "trust the CAs" as our security model. But with certificate transparency and mandatory stapling from multiple verifiers, we're going with "trust but verify the CAs".
Maximum acceptable durations for certificates are coming down, down, down. You have to get new ones sooner, sooner, sooner. This is to limit the harm a rogue CA or a naive mis-issuing CA can do, as CRLs just don't work.
The only way that can happen is with automation, and being required to prove you still own a domain and/or a web-server on that domain, to a CA, on a regular basis. No "deal with this once a year" anymore. That's gone and it's not coming back.
It's good to know the whole protocol, and yes certbot can be overbearing, but Debian's python3-certbot + python3-certbot-apache integrates perfectly with how Debian has set up apache2. It shouldn't be a hardship.
And if you don't like certbot, there are lots of other ACME clients.
And if you don't like Let's Encrypt, there are other entities offering certificates via the ACME protocol (YMMV, do you trust them enough to vouch for you?)
Yep, I've seen that argument so many times and it should never make sense to anyone that understands MITM.
The only way it could possibly work is if the static content were signed somehow, but then you need another protocol in the browser and a way to exchange keys securely, for example like signed RPMs. It would be less expensive as the encryption happens once, but is it worth having yet another implementation?
Rather, it's that most people simply don't need to care about MITM. It's not a relevant attack for most content that can be reasonably served over HTTP. The goal isn't to eliminate every security threat possible, it's to eliminate the ones that are actually a problem for your use case.
There's no such thing as "not worth the effort to secure" because neither the site itself nor its content matters, only the network path from the site to the user, which is not under the full control of either party. These need not be, and usually aren't, targeted attacks; they'll hit anything that can be intercepted and modified, without a care for what it's meant to be, where it's coming from, or who it's going to.
Viewing it as an A-to-B interaction where A is a good-natured blogger and B is a tech-savvy reader, and that's all there is to it, is archaic and naive to the point of being dangerous. It is really an A-to-Z interaction where even if A is a good-natured blogger and Z is a tech-savvy user, parties B through Y all get to have a crack at changing the content. Plain HTTP is a protocol for a high-trust environment and the Internet has not been such a place for a very long time. It is unfortunate that party A (the site) must bear the brunt of the security burden, but that's the state of things today. There were other ways to solve this problem but they didn't get widespread adoption.
The browser content model has no idea whether the data it's receiving is static or not.
ISPs had already shown time and again they'd inject content into http streams for their own profit. BGP attacks routed traffic off to random places. Simply put, the modern web should be zero trust at all levels.
If you want to ensure the bits that were sent from the server are the bits your browser receives, they must be signed by some method.
Maybe you could have a mixed use case page in the browser where you had your secure context, then a sub context of unencrypted protected objects, that could possibly increase caching. With that said, looks like another fun hole browser makers would be chasing every year or so.
https://datatracker.ietf.org/doc/html/rfc8555#section-8.4
>2. Query for TXT *records* for the validation domain name
>3. Verify that the contents of *one of the TXT records* match the digest value
(Emphasis mine.)
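For reference, "the digest value" the quoted text talks about is derived like this; a sketch (mine) following RFC 8555, where token is the challenge token and jwk_thumbprint is the account key's JWK thumbprint:

  import base64, hashlib

  def b64url(b: bytes) -> str:
      return base64.urlsafe_b64encode(b).rstrip(b"=").decode()

  def dns01_txt_value(token: str, jwk_thumbprint: str) -> str:
      key_authorization = f"{token}.{jwk_thumbprint}"
      return b64url(hashlib.sha256(key_authorization.encode()).digest())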
- Is female [TIL the term "wogrammer"]
- Works for Facebook [formerly Rackspace and Google] so an undeniably Big MAMAA
- Has been blogging prolifically for at least 14 years [let's call it 40 years: she admin'd a BBS at age 12]
- Website is custom self-hosted; very old school and accessible; no ads or popup bullshit
- Probably has more CSE/SWE experience+talent in her little pinky finger than 80% of HN commenters
https://medium.com/wogrammer/rachel-kroll-7944eeb8c692
So I'd say that her position and experience command enough respect that we cannot judge her merely by peeking at a few trifling journal entries.
>I contacted Rachel and she said - and this is my poor paraphrasing from memory - that the IP ban was something she intentionally implemented but I got caught as a false positive
My favourite client is probably https://github.com/acmesh-official/acme.sh
If you use a DNS service provider that supports it, you can use the DNS-01 challenge to get a certificate - that means that you can have the acme.sh running on a completely different server which should help if you're twitchy about running a complex script on it. It's also got the advantage of allowing you to get certificates for internal/non-routable addresses.
Personally, I find that tls-alpn-01 is even nicer than dns-01. You can run a web server (or reverse proxy) that listens to port 443, and nothing else, and have it automatically obtain and renew TLS certificates, with the challenges being sent via TLS ALPN over the same port you're already listening on. Several web servers and reverse proxies have support for it built in, so you just configure your domain name and the email address you want to use for your Let's Encrypt account, and you get working TLS.
Pinned to an old version and looking for a replacement right now.
If anyone else ran into that it's just a matter of adding
--server letsencrypt
I rather liked using ZeroSSL for a long time (perhaps just out of knee-jerk resistance to the “Just drink the Koolaid^W^W^Wuse Let’s Encrypt! C’mon man, everyone’s doing it!” nature of LE usage), but of late ZeroSSL has gotten so unreliable that I’ve rolled my eyes and started swapping things back to LE.
But yeah, can definitely recommend DNS-01 over HTTP-01, since it doesn't involve implicitly messing with your server settings, and makes it much easier to have a single locked server with all the ACME secrets, and then distribute the certs to the open-to-the-internet web servers.
+1 for acme.sh, it's beautiful.
I assume certbot is the client she’s alluding to that misinterprets one of the factors in the protocol as hex vs decimal and somehow things still work, which is incredibly worrisome.
Then I discovered the web-root approach people mention here and it made a huge difference. Now I have the HTTP snippet in my server set to serve up ACME challenges from a static directory and push everything else to HTTPS, and the ACME client just needs write permission to that directory. I can dynamically include that snippet in all of the sites my server handles and be done.
If I really felt like it, I could even write a wrapper function so the ACME client doesn’t even need restart permissions on the web-server (for me, probably too much to bother with, but for someone like Rachel perhaps worthwhile).
letsencrypt renew --non-interactive --post-hook "systemctl reload nginx"
I run it in "webroot" mode on NgINX servers so it's just a matter of including the relevant config file in your HTTP sections (likely before redirecting to HTTPS) so that "/.well-known/acme-challenge/" works correctly. Then when you do run certbot, it can put the challenge file into the webroot and NgINX will automatically serve it. This allows certbot to do its thing without needing to do anything with NgINX.
tiny-acme.py is 200 lines, easy to audit and incorporate parts into your own infrastructure. It works well for the tiny work it does, but it doesn't support anything more modern.
It's on my long list of potential side projects, but I don't think I'll ever get around to it
This makes me wonder what world of development she is in. Does she prefer SOAP?
If the author says they dislike JSON, especially given the tone of this article with respect to nonsensical protocols, I highly doubt they approve of SOAP.
What would you suggest instead given all these cons?
TCP is just a bunch of bytes... You can't process a bunch of bytes without understanding what they are, and that requires signaling information at a different level (ex - in the bytes themselves as a defined protocol like SSH, SCP, HTTP, etc - or some other pre-shared information between server and client [the worst of protocols - custom bullshit]).
Why is this worse than JSON?
"{'protected': {'protected': { 'protected': 'QABE' }}}" is just as custom as 66537 imo. It's easier to reverse engineer than 66537 but that's not less custom.