Nginx introduces native support for ACME protocol - https://news.ycombinator.com/item?id=44889941 - Aug 2025 (298 comments)
I realise of course the inclusion of an ACME client in a product doesn't mean I need to use their implementation; I'm free to keep using my own independent client. But it seems to me that adding ACME clients to everything is going to mean more PRs for those projects, more baggage to drag forward, etc., and confusion for users, since there are now multiple places where they could/should be generating certificates.
Anyway, grumpy old man rant over. It just seems Zawinski's Law ("Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.") can be replaced these days with MuppetMan's Law: "Every program attempts to expand until it can issue ACME certificates."
[Unit]
Description=Whatever
[Service]
ExecStart=/usr/local/bin/cantDoHttpSvc -bind 0.0.0.0:1234
[HTTP]
Domain=https://whatever.net
Endpoint=127.1:1234
Yeah, this could happen one day.

If we teach systemd socket activation to do TLS handshakes, we can completely offload TLS encryption to the kernel (and network devices) and you get all of this for free.
It's actually not a crazy idea, in the world of kTLS, to centralize TLS handshaking in systemd.
Linux seems to offer such facilities, too. I've never knowingly used it, though (it might be that some app used it in the background?): https://lwn.net/Articles/892216/
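For the curious, a minimal sketch of what handing a connection over to kTLS looks like on Linux, assuming the libc crate; the constants mirror include/uapi/linux/tls.h and tcp.h, the function name is mine, and the key material would come from whatever userspace library performed the handshake:

use std::{io, mem};
use std::os::unix::io::RawFd;

// Values from the Linux uapi headers.
const TCP_ULP: libc::c_int = 31;
const SOL_TLS: libc::c_int = 282;
const TLS_TX: libc::c_int = 1;
const TLS_1_2_VERSION: u16 = 0x0303;
const TLS_CIPHER_AES_GCM_128: u16 = 51;

// Mirrors struct tls12_crypto_info_aes_gcm_128.
#[repr(C)]
struct Tls12AesGcm128 {
    version: u16,
    cipher_type: u16,
    iv: [u8; 8],
    key: [u8; 16],
    salt: [u8; 4],
    rec_seq: [u8; 8],
}

fn enable_ktls_tx(fd: RawFd, key: [u8; 16], iv: [u8; 8], salt: [u8; 4], rec_seq: [u8; 8]) -> io::Result<()> {
    // Attach the "tls" upper-layer protocol to the TCP socket.
    let ulp = b"tls\0";
    let rc = unsafe {
        libc::setsockopt(fd, libc::SOL_TCP, TCP_ULP, ulp.as_ptr().cast(), ulp.len() as libc::socklen_t)
    };
    if rc != 0 {
        return Err(io::Error::last_os_error());
    }
    // Hand the negotiated record keys to the kernel; from here on, a
    // plain write(2) on the socket emits encrypted TLS records.
    let info = Tls12AesGcm128 {
        version: TLS_1_2_VERSION,
        cipher_type: TLS_CIPHER_AES_GCM_128,
        iv, key, salt, rec_seq,
    };
    let rc = unsafe {
        libc::setsockopt(fd, SOL_TLS, TLS_TX, (&info as *const Tls12AesGcm128).cast(), mem::size_of::<Tls12AesGcm128>() as libc::socklen_t)
    };
    if rc != 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(())
}

The handshake itself still happens in userspace; only record encryption moves into the kernel, which is exactly why centralizing the handshaking in one daemon isn't crazy.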
(fun to walk down through the trees and the silicon desert of despair, to the land of the ROM, where things can never change)
[0] https://www.freedesktop.org/software/systemd/man/latest/syst...
It essentially creates per-domain units. However, those are timers, not services, because the underlying tool doesn't have a long-running daemon; it's designed to run off cron. So I can't depend on them directly, and I also need to add a multitude of drop-ins that restart or reload the services that use the certificates (https://github.com/woju/systemd-dehydrated/blob/master/contr...). Couldn't figure out any way to automate this better.
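For illustration, such a drop-in might look something like this: a sketch only, modeled on the contrib directory linked above, with the unit and service names as placeholders:

# /etc/systemd/system/dehydrated@example.com.service.d/reload-nginx.conf
# Hypothetical drop-in: after the per-domain renewal unit has run,
# reload the service that consumes the renewed certificate.
[Service]
ExecStartPost=/usr/bin/systemctl reload nginx.service

One of these is needed per certificate-consuming service, which is exactly the boilerplate being complained about.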
This project looks neat! I might give it a try. I had never heard of dehydrated, but I don't particularly love certbot, and would certainly be willing to try.
which is basically ideal, no? for all the buy-in that the systemd stapling-svchost.exe-onto-cgroups approach asks of us, at the very least we have a sufficiently expressive system to do that sort of thing. where something on the machine has a notion of what wants what from what, and you can issue a command to see whether that dependency is satisfied. like. we are there. good. nice. hopefully ops guys are content to let sleeping dogs lie, right?
...right?
Sounds enterprise.
Also, you people forgot that my proposal is to also fold the http server in, and ideally all the scripting languages and all of npm just in case.
It would be the same for certd. If you configure your system to hold up booting while waiting for a cert, then that's your choice, but there are plenty of ways to have it not.
On a machine where you're only running a webserver, I suppose having Nginx do the ACME renewal makes sense.
On many of the machines I support I need certificates for other services, too. In many cases I also have to distribute the certificate to multiple machines.
I find it easy to manage and troubleshoot a single application handling the ACME process. I can't imagine having multiple logs to review and monitor would be easier.
Automating this is pure benefit to those that want it, and a non-issue to those who don't — just don't use it.
Now if Jenkins adds acme support then yes I'll say maybe that one is too far.
I think there is also clearly demand: Caddy is very well liked and often recommended for hobbyists, and I think a huge part of that is the built-in certificate management.
The key service here is "TLS termination proxy", so being able to issue certificates automatically was pretty high on the wish list.
"Real-world applications of OpenResty® range from dynamic web portals and web gateways, web application firewalls, web service platforms for mobile apps/advertising/distributed storage/data analytics, to full-fledged dynamic web applications and web sites. The hardware used to run OpenResty® also ranges from very big metals to embedded devices with very limited resources. It is not uncommon for our production users to serve billions of requests daily for millions of active users with just a handful of machines."
Venafi supports the ACME protocol, so it can be the ACME server, like Let's Encrypt.
I am speaking purely about an on-prem, non-internet-connected scenario.
triple-negative, too hard to parse
To avoid a splintered/disjoint ecosystem, library code can be reused across many applications.
Maybe if there were OS level features for doing the same thing you could argue the applications should call out to those instead, but at least on Linux that's not really the case. Why should admins need to install and configure a separate application just to get basic functionality working?
I do tend to find that I need multiple services with TLS on the same machine, such as a web server and RabbitMQ, or postfix and dovecot. I don't know how having every program run its own ACME client would work out; that seems like it could be a mess. On the other hand, I have been having trouble getting them all to pick up updated certificates correctly without manually restarting services after certbot's cron job does a renewal.
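For what it's worth, certbot has a hook mechanism for exactly this problem: executables placed in /etc/letsencrypt/renewal-hooks/deploy/ (or registered with --deploy-hook) run after each successful renewal. A minimal sketch, reusing the service names above:

#!/bin/sh
# e.g. /etc/letsencrypt/renewal-hooks/deploy/reload-services
# certbot runs this after a successful renewal, so consumers pick up
# the new certificate without a manual restart.
systemctl reload nginx postfix dovecot
systemctl restart rabbitmq-server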
Lots of apps should support this automatically, with no intervention necessary, and just communicate securely with each other. And ACME is the way to enable that.
Instead, it would make more sense for TLS to be handled centrally by a known and trusted implementation, which proxies the communication with each backend. This is a common architecture we've used for decades. It's flexible, more secure, keeps complexity compartmentalized, and is much easier to manage.
For a bunch of tech-aware people, the inability of you all here to modify your software to meet your needs is insane. As a 14 year old I was using the ck patch series to have a better (for me) scheduler in the kernel. Every other teenager could do this shit.
In my 30s I have a low-friction setup where each bit of software only does one thing and it's easy for me to replicate. Teenagers can do this too.
Somehow you guys can't do either of these things. I don't get it. Are you stupid? Just don't load the module. Use stunnel. Use certbot. None of these things are disappearing. I much prefer. I much prefer. I much prefer. Christ. Never seen a userbase that moans as much about software (I moan about moaning - different thing) while being unable to do anything about it as HN.
I have moved most of my personal stuff to caddy, but I look forward to testing out the new release for a future project and learning about the differences in the offerings.
Thanks for this!
nginx-module-acme is available there, too, so you don't need to compile it manually.
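For a taste of what using it looks like, here's a trimmed config along the lines of the module's announcement; directive names are from the nginx-acme docs as I understand them, and the domain, e-mail, and paths are placeholders:

resolver 127.0.0.1:53;

acme_issuer letsencrypt {
    uri https://acme-v02.api.letsencrypt.org/directory;
    contact mailto:admin@example.com;
    state_path /var/cache/nginx/acme-letsencrypt;
    accept_terms_of_service;
}

acme_shared_zone zone=ngx_acme_shared:1M;

server {
    listen 443 ssl;
    server_name example.com;

    acme_certificate letsencrypt;
    ssl_certificate $acme_certificate;
    ssl_certificate_key $acme_certificate_key;
    # certificates are loaded at runtime, so the cache must be enabled
    ssl_certificate_cache max=2;
}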
Why even bother calling out that it's written in "memory safe Rust code" when the code itself is absolutely riddled with unsafe {} everywhere?
It seems to me that it's written in memory unsafe Rust code.
I don't see a way to integrate Rust as a plugin into a C codebase without some level of unsafe usage like this.
Right now you're pretty much stuck casting pointers to and from C land if you want to write a native nginx module in Rust. I'm sure it will get better in the future.
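A sketch of what that boundary looks like, with types from the nginx-sys bindings and a hypothetical handler name:

use nginx_sys::{ngx_http_request_t, ngx_int_t};

// nginx invokes this through a C function pointer, so the request
// arrives as a raw pointer: no lifetime or validity that rustc can see.
#[no_mangle]
extern "C" fn ngx_http_example_handler(r: *mut ngx_http_request_t) -> ngx_int_t {
    // SAFETY: nginx guarantees `r` points at a live request for the
    // duration of the handler call; the compiler has to take our word.
    let request = unsafe { &mut *r };
    let _method = request.method; // reading a field of the C struct
    0 // NGX_OK
}

Every entry point from C needs at least this much unsafe, no matter how safe the Rust behind it is.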
Also, unsafe rust is still safer than C.
Unsafe Rust, like unsafe code blocks in any language that offers them, should be kept to the bare minimum, as building blocks.
I highly doubt that, and developers of Rust have confirmed here on HN that when it comes to unsafe code within a codebase, it is not just the unsafe blocks that are affected, the whole codebase is affected by that.
Unsafe Rust is definitely safer than normal C. All the unsafe keyword really means is that the compiler cannot verify the behavior of the code; it's up to the programmer. This is for cases where (1) the programmer knows more than the compiler, or (2) we're interacting with hardware or FFI.
When Rust developers say unsafe affects the whole codebase, what they mean is that UB in unsafe code can break guarantees about the whole program (even the safe parts). Just because something is unsafe doesn't inherently mean it's going to break everything; it just needs more care when writing and reviewing, exactly as C and C++ do.
Rust's `unsafe` blocks are great, and a necessary part of the language. The reason they're great is that they allow containing the code which could exhibit UB to a subset of the program, thereby making it easier to find the source of any mistakes. But they don't (and were never intended to) provide any guarantees about what happens if UB is encountered. It's no worse than C or C++'s UB, and having it in `unsafe` blocks means it's easier to notice where it could happen, but when it does happen it's also no better than C or C++'s UB.
The main practical difference is that Rust pushes you away from UB whereas C tends to push you into it; signed integer overflow is UB by default in C, while Rust makes you go out of your way to get UB from integer overflow. Furthermore, the general design philosophy of Rust is that you build "safe abstractions" which might require unsafe to implement, but whose interface should be impossible to use in a way that causes UB. It's definitely questionable how many people actually adhere to those rules (some people are just going to slap the unsafe keyword on things to make the code compile), but it's still a pretty far distance from C, where the language tends to make building abstractions of any kind, let alone safe ones, difficult.
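To make the "safe abstraction" point concrete, here's the canonical illustration (a generic example, not code from this module): the unsafe block lives inside a function whose signature makes the misuse that would trigger UB impossible to write from safe code.

/// Split a mutable slice into two non-overlapping halves.
/// Callers can never obtain two aliasing &mut references through this
/// API, even though the implementation needs raw pointers internally.
fn split_at_mut<T>(v: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
    let len = v.len();
    assert!(mid <= len); // this check upholds the safety contract below
    let ptr = v.as_mut_ptr();
    // SAFETY: [0, mid) and [mid, len) are disjoint and in bounds, so
    // the two &mut slices never alias.
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut xs = [1, 2, 3, 4];
    let (a, b) = split_at_mut(&mut xs, 2);
    a[0] = 10;
    b[0] = 30;
    assert_eq!(xs, [10, 2, 30, 4]);
}

Auditing the program for memory safety then reduces to auditing the unsafe blocks and the invariants their surrounding functions promise.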
I have had my share of compiling Rust programs that pull in thousands of dependencies. If people think that is good practice, well, good for them, but they should not sell Rust as a safe language when it encourages such unsafe practices, especially when there are thousands of dependencies and probably all of them have their own unsafe blocks (even this ACME module does), which affect the whole codebase.
I am going to keep using certbot. No reason to switch.
If we add up the list of dependencies from the module, this is what we get:
anyhow = "1.0.98"
base64 = "0.22.1"
bytes = "1.10.1"
constcat = "0.6.1"
futures-channel = "0.3.31"
http = "1.3.1"
http-body = "1.0.1"
http-body-util = "0.1.3"
http-serde = "2.1.1"
hyper = { version = "1.6.0", features = ["client", "http1"] }
libc = "0.2.174"
nginx-sys = "0.5.0-beta"
ngx = { version = "0.5.0-beta", features = ["async", "serde", "std"] }
openssl = { version = "0.10.73", features = ["bindgen"] }
openssl-foreign-types = { package = "foreign-types", version = "0.3" }
openssl-sys = { version = "0.9.109", features = ["bindgen"] }
scopeguard = "1"
serde = { version = "1.0.219", features = ["derive"] }
serde_json = "1.0.142"
siphasher = { version = "1.0.1", default-features = false }
thiserror = { version = "2.0.12", default-features = false }
zeroize = "1.8.1"
Now, vendoring and counting the lines of those, we get 2,171,685 lines of Rust. This includes the vendored packages from cargo vendor, so what happens when we take just the dependencies for our OS? Vendoring for just x86 Linux chops the line count to 1,220,702: not bad for just removing packages that aren't needed, but still a lot. Let's actually see what's taking up all that space.
996K ./regex
1.0M ./libc/src/unix/bsd
1.0M ./serde_json
1.0M ./tokio/src/runtime
1.1M ./bindgen-0.69.5
1.1M ./tokio/tests
1.2M ./bindgen
1.2M ./openssl/src
1.4M ./rustix/src/backend
1.4M ./unicode-width/src
1.4M ./unicode-width/src/tables.rs
1.5M ./libc/src/unix/linux_like/linux
1.5M ./openssl
1.6M ./vcpkg/test-data/no-status
1.6M ./vcpkg/test-data/no-status/installed
1.6M ./vcpkg/test-data/no-status/installed/vcpkg
1.7M ./regex-syntax
1.7M ./regex-syntax/src
1.7M ./syn/src
1.9M ./libc/src/unix/linux_like
1.9M ./vcpkg/test-data/normalized/installed/vcpkg/info
2.0M ./vcpkg/test-data/normalized
2.0M ./vcpkg/test-data/normalized/installed
2.0M ./vcpkg/test-data/normalized/installed/vcpkg
2.2M ./unicode-width
2.4M ./syn
2.6M ./regex-automata/src
2.7M ./rustix/src
2.8M ./rustix
2.9M ./regex-automata
3.6M ./vcpkg/test-data
3.9M ./libc/src/unix
3.9M ./tokio/src
3.9M ./vcpkg
4.5M ./libc/src
4.6M ./libc
5.3M ./tokio
12M ./linux-raw-sys
12M ./linux-raw-sys/src
Coming in at 12MB we have linux-raw-sys, which provides bindings to the Linux userspace API, a pretty reasonable requirement. Then libc and Tokio. Since this module is async, Tokio is a must-have, and it's pretty much bound to Rust at this point; it's extremely well vetted and used in industry daily.
Removing those, we are left with 671,031 lines of Rust.
Serde is a well-known dependency that allows for marshalling of data types. Hyper is the curl of the Rust world, allowing interaction with the network.
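For anyone wanting to reproduce numbers like these, a rough sketch of the workflow; this is my assumption of how it was done, not necessarily the exact commands:

cargo vendor vendor/   # download every dependency into ./vendor
find vendor/ -name '*.rs' -print0 | xargs -0 wc -l | tail -n 1
# narrowing to one platform takes extra filtering, e.g. via
# cargo metadata --filter-platform x86_64-unknown-linux-gnu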
I feel like this is an understandable amount of code given the complexity of what it's doing. Of course to some degree I agree with you and often worry about dependencies. I have a whole article on it here.
https://vincents.dev/blog/rust-dependencies-scare-me/
I think I'd be more satisfied if things get "blessed" by the foundation like rustls is being. This way I know the project is not likely to die, and has the backing of the language as a whole. https://rustfoundation.org/media/rust-foundation-launches-ru...
I think we can stand to write more things on our own (sudo-rs did this) https://www.memorysafety.org/blog/reducing-dependencies-in-s...
But to completely ignore or not interact with the language seems like throwing the baby out with the bathwater to me.
I think we just need to push a culture of writing your own code for the small things you're pulling in. (Of course that "just" is pulling a lot of weight :) )
I just get tired of everyone trying to burn down crates.io as an inherent evil.
I'd have expected nginx to have this years ago. Is it so hard to implement for some reason?
Related note: I really enjoy using haproxy for load balancing.
(unless I'm googling it wrong: all info points to using it with acme.sh)
It's amazing to me that people are still addicted to it.
https://www.themoscowtimes.com/2019/12/13/russia-nginx-fsb-r...