That seems a bit silly to me, as it means something like rot1376 is tricky to parse correctly. I'd think it would make sense for the protocol to be separated from the port by a non-numeric character, like a hyphen. That way it would be trivial to parse: just grab the numeric characters from the end until you hit the non-numeric character.
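A minimal sketch of that scan-from-the-end idea (split_proto_port is a hypothetical helper, not from any existing tool), which also shows why the separator matters:

```python
def split_proto_port(name):
    # Scan backwards, taking digits until the first non-digit character,
    # then strip the proposed hyphen separator from the protocol part.
    i = len(name)
    while i > 0 and name[i - 1].isdigit():
        i -= 1
    return name[:i].rstrip("-"), name[i:]

# With a separator, parsing is unambiguous:
#   split_proto_port("rot13-76") -> ("rot13", "76")
# Without one, the greedy scan eats the protocol's trailing digits:
#   split_proto_port("rot1376") -> ("rot", "1376")
```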
Historically, Plan 9 had IL as well.
Which transport protocol has a number in its name?
If I implement one someday I'll add a separator.
cgi as a protocol, really a calling convention for launching processes, makes sense; it fits a specific niche in the ecosystem. fastcgi does not: fastcgi is a different, incompatible http, that is, fastcgi does nothing that http does better. Did we all just collectively lose our ability to think critically? Locked on to the fact that the common usage was an http server launching a cgi process, when the time came to make that process its own service, we said "we need cgi for services", creating fastcgi, forgetting that http already works just fine as a service.
I am not really a back-end programmer, I am a sys-admin who sometimes makes web-based tooling. It is very possible there is a subtlety to this I missed. But I was a lot happier when I gave up on fastcgi and just made each service an http server with a reverse proxy in front to dispatch the requests.
> do have an issue with programs that have a lot of startup overhead (e.g. things written in Python
Unless you have a lot of startups, it's probably not a problem, I would have thought.
If you wrote your service in C like god intended (sarcasm), this is not a problem; unix systems are traditionally, by design, very good at starting processes.
However, python (my favorite language, for what it's worth) has a lot of baggage it needs to sort out when it starts. So python specifically, and in general any interpreted language that takes more than a few milliseconds to start, starts to suffer under heavy load in the one-process-per-request model.
Thus the motivation to make it one process for many requests.
The figures from 1990 are not the same as the figures from 2024. "Heavy" here is so high it is not a realistic problem for 80% of sites.
On current hardware it can serve up to a few hundred requests/s without too much trouble.
There's also the trick of pre-starting a pool of processes beforehand and handing the data to them when it comes. It is not implemented in listen yet, but would not be too hard to do.
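One common way to do that is a classic prefork pool; here's a hypothetical python sketch (not listen's actual implementation): the parent binds once, forks workers that all block in accept(), and the kernel hands each incoming connection to one of them, so the interpreter's startup cost is paid once per worker instead of once per request.

```python
import os
import socket

def prefork_server(host, port, nprocs, handle):
    # Parent creates the listening socket once; forked children inherit it.
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind((host, port))
    lsock.listen(64)
    pids = []
    for _ in range(nprocs):
        pid = os.fork()
        if pid == 0:
            # Child: startup/import cost was already paid before the fork.
            # All children block in accept() on the shared socket.
            while True:
                conn, _addr = lsock.accept()
                handle(conn)
                conn.close()
        pids.append(pid)
    # Return the bound port (useful when port 0 was requested) and worker pids.
    return lsock.getsockname()[1], pids
```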
This was true of cgi scripts written in perl too, until they made the front page of slashdot.
That being said, didn't selinux resolve that problem decades ago?
I'm not aware of how selinux can solve this, but I will look into it, if only to mention it as an alternative.
allow foo_t http_port_t:tcp_socket name_bind;
The biggest problems are that you need to a) confine your users and b) label everything.