This surprised me. I was like, "surely socket()/connect()/send()/recv() aren't async-signal-safe!" But after a quick trip to `man signal-safety`, it turns out they are. Perhaps it shouldn't be surprising: those functions are likely little more than wrappers around the corresponding syscalls, so there isn't any libc state to corrupt or deadlock you if you use them in a signal handler. And I assume the kernel needs to keep itself in a consistent, non-deadlockable state before it calls a signal handler anyway.
(And I'm not at all surprised that whatever TLS library they're using either isn't async-signal-safe itself or calls things that aren't.)
Either way, wow! In 2026 it feels absolutely bonkers that a software dev team would continue to put out something like this. Honestly, once psql got TLS support, making a TLS connection should have put up a big warning and asked you, "This program cannot cancel queries over a secure channel; do you still want to enable query cancellation?" Or hell, just disable query cancellation in those cases and not even give an option.
I guess this is "just" a DoS, though, and only in cases where someone authorized is poking around using psql while connected to a server exposed to the public internet. Hopefully that situation isn't common. And even if it is, there's no opportunity for data exfiltration or RCE, so... the author's "heebie-jeebies level 6" feels appropriate.
(And there's an easy mitigation if you know the issue: once you cancel a query with ctrl+c, quit the psql session and start a new one. That will give the process a new "cancellation key", and the old one from the old process won't work for an attacker anymore.)
The downside is that sometimes connections are proxied in ways that lose these unusual packets. Looking at you, Docker...
Reading the original RFC 793, it's clear that the intention was never for this to be OOB data, but to inform the receiver that they should consume as much data as possible and minimally process / buffer it locally until they have read up to the urgent data.
However, the way it was historically implemented, as OOB data, seems to have been significantly more useful - you could send flow-control messaging to be processed immediately even if you knew the receiving side had a lot of data to consume before it'd see an inline message.
It seems nowadays the advice is just to not use urgent data at all.
Still an interesting read. Just wondering: why can't the TCP connection of the query itself be used to send a cancellation request? Why does it have to be out of band?
The TCP URG bit came out of this kind of problem. It triggers a SIGURG signal on UNIX which interrupts the process. Oracle works this way.
These days you'd implement cancellation by having one thread handle inbound messages and another thread do the actual work with shared memory to implement a cooperative cancellation mechanic.
But we should in general have sympathy here. Very little software and very few protocols properly implement any form of cancellation. HTTP hardly does for normal requests, and even if it did, how many web servers abort request processing if the connection drops?
https://datatracker.ietf.org/doc/html/rfc6093:
“it is strongly recommended that applications do not employ urgent indications. Nevertheless, urgent indications are still retained as a mandatory part of the TCP protocol to support the few legacy applications that employ them. However, it is expected that even these applications will have difficulties in environments with middleboxes.”
I don't think I have ever seen a published web service whose error log wasn't full of broken-pipe messages. So, AFAIK, all.
Doesn't necessarily need a thread per connection. Could be on an epoll/kqueue/io-uring.
The query would need to periodically re-check a cancellation flag, which has costs and would come with a delay if it's particularly busy.
Changing that to poll for a cancellation while working is a big change. Also, the server would need to buffer any pipelined requests while looking for a cancellation request. A second connection is not without wrinkles, but it avoids a lot of network complexity.
https://learn.microsoft.com/en-us/openspecs/windows_protocol...
At the receiver, a signal handler must be used, which will be invoked when an urgent packet is received, with SIGURG.
If you have shared psql sessions in tmux or on a jump box, one bad cancel can trash someone else's work. "Just firewall it" is how you end up owned by the intern with shell access.
In such a scenario, listening (and acting) on cancellation requests on the same connection becomes very hard, so fixing this goes way beyond "just".
The protocol has no direct in-protocol cancellation, unlike TDS. TDS manages this by being a framed protocol, so it can cancel queries at the application-protocol level. It has two variants (text and binary), can cause fragmentation, and at the query and protocol level only supports positional parameters, no named parameters.
Once a query is on the server, it doesn't support directly acting on a language mode. I don't want to go into SQL mode and create a PL/SQL proc; I just want direct PL/SQL. Can't (really) do that well. Directly returning multiple result sets (e.g. for a matrix: separate rows, columns, and fields) or related queries in a single round trip is technically possible, but hard to do. So frustrating.
I've never had to connect to Postgres in an adversarial environment. I've been at work or at home, and I connected to Postgres instances owned by me or my employer. If I tried to connect to my work instance from a coffee shop, the first thing I'd do would be to log in to a VPN. That's your multiplexed protocol layer right there: the security happens at the network layer and your cancel happens at the application layer.
This is a different situation from websites. I connect to websites owned by third parties all the time, and I want my communication there to be encrypted at the application layer.
This application of ipsec was never used and barely implemented. Today getting it to make ad-hoc connections is a tricky, untested edge case, and ipsec was relegated to dedicated tunnels, where everyone hates it because it is too tricky to get the parameters aligned.
There is definitely a case to be made that it is right and proper that secure connections are handled in the application (TLS). But sometimes I like to think of how it could have been, where all applications get a secure connection whether they want one or not.
As a useless dangling side thought, an additional piece would be needed for ad-hoc ipsec that as far as I know was never implemented: a way to notify the OS that this connection must be encrypted (a socket option? SO_ENC?). That is most of the argument for encrypted connections being the duty of the application.
heroku's postgres database service still exposes itself on the public internet.
(Also I just learned that ASCII 0x2d/unicode U+002D is more properly called a "hyphen" [well, "HYPHEN-MINUS"], not a "dash".)
And as GP wrote, it would take a substantial amount of time to explain to a normie (infinite I'd say, but let's not despair).
https://www.postgresql.org/docs/current/protocol-message-for... / BackendKeyData
I'm fairly certain that this cancellation approach has nothing to do with UNIX networking assumptions, and everything to do with the connection/process model of PostgreSQL.
Creating a connection means starting a process and passing the accepted socket to it (so an in-band cancel would have to go directly to the backend executing the query), plus the single-threaded backend process doesn't read from the socket while executing a query, so it would get the cancellation request only after the query finishes (or even after all pipelined queries before it finish, which is even worse).
Likely psql doesn't even have a "main loop"; I expect it just blocks on recv() until it gets a response from the server. And on Linux, I think syscalls that were in progress when a signal fires are automatically restarted/resumed, so you can't even rely on EINTR to get you out of that recv() and then check a global flag set in the signal handler.
Although, reading the sigaction() manpage, if you don't specify SA_RESTART, it shouldn't do this? (If they are using signal() and not sigaction(), it might always restart?) But still, not sure why they don't take that route. I imagine it would require much less of a refactor to set a global flag, and then always check it after a recv() fails with EINTR.
Sure, the "right" thing to do is have a global pipe, and instead of blocking in recv(), poll() on it with both the connection socket and the read end of the pipe. And I bet that would require a bit of a refactor. But a global flag is somewhere in the middle...
But who knows; I've never read their source code, so I expect they know what they're talking about when they say it's not a trivial fix.
TLS not being async-signal-safe is explicitly called out in the article as the reason the token is sent in clear text.
> Handle it as a synchronous event in a main loop
Of course, if you rearchitect the client there are better solutions. But again, the article mentions that's not planned for now.
By comparison, delegating cancellation to a background thread can be done non-intrusively. In principle, no code outside the cancel path needs changing.
Edit: the article mentions that there is a refactor in the works to implement cancel over tls [1]. Turns out that they decided to use a thread (with a pipe for signaling).
[1] https://www.postgresql.org/message-id/flat/DEY0N7FS8NCU.1F7Q...
pthread_create() isn't async-signal-safe, though, so they can't simply move their socket code for the cancellation into another function and call pthread_create() on it. They still have to get the main thread to stop doing what it's doing (usually via the pipe trick) in order to create the thread, which could easily be a big refactor.
> Edit: the article mentions that there is a refactor in the works to implement cancel over tls [1]. Turns out that they decided to use a thread (with a pipe for signaling).
Seems odd to me to bother. If you have to do the pipe thing, why not just do the new connection for cancellation in the main thread once it sees the data on the pipe? I guess that way they can return control of the CLI to the user while they cancel in the background, rather than blocking the user while the cancellation is going on. But as a user, I kinda would like to know that the query I just cancelled actually got cancelled, a property that the old code has, but the new code won't.
(Presumably the new code can print a warning if cancellation fails, but it could take a long time to fail, and in the meantime the user has moved on.)