Give users the option to use IPv6 only, and if a user needs legacy IP, add it as an additional cost and move on.
Trying to keep v4 at the same cost level as v6 is not something we can solve. If it were, we wouldn't need v6.
Before someone mentions tunnels: the last time I tried to set up a tunnel, Happy Eyeballs didn't work for me at all; almost everything went through the tunnel anyway, and I had to deal with non-residential IP space issues and way too much traffic.
Discussions about IPv6 quickly end with "we have enough v4 space and there are no services that require v6 anyway". As long as the extra cruft for v4 support remains free or even supported, large ISPs won't care. We're at the point where people need to deal with things like peer-to-peer connectivity with both sides behind CGNAT, which requires dedicated effort to even work.
I know it sucks if none of the ISPs in your area support IPv6 and you're left with suboptimal solutions like tunnels from HE, but I think it's only reasonable that all this extra cost or effort becomes visible at some point. Half the world is on v6; legacy v4-only connections are becoming the minority now.
It is also available on one of my phone contracts, but I haven't tried enabling it yet.
One simple way to check whether your ISP has some kind of IPv6 network is to see if the CDN domains handed out by YouTube and Facebook have AAAA records.
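As a sketch (the CDN hostname below is illustrative; substitute whatever name YouTube or Facebook hands your client), any IPv6 address in the answer will contain a colon:

```shell
#!/bin/sh
# Succeeds if any resolved address on stdin is IPv6 (i.e. contains a colon).
has_aaaa() {
  awk '{print $1}' | grep -q ':'
}

# Illustrative CDN hostname; swap in the CDN domains you actually see.
host=redirector.googlevideo.com
if getent ahosts "$host" 2>/dev/null | has_aaaa; then
  echo "$host has AAAA records"
fi
```

Note this checks what your resolver returns, which is exactly the question here: whether the ISP's path hands back v6 answers.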
We shouldn't have to ask for ISPs to add IPv6 support but here we are.
>legacy IP
lol
With this IPv4 trick, if your employer or university only provides IPv4, you can use the product anyway.
> they could pay a small extra for a dedicated IPv4 address.
Did you mean that the dedicated IPv4 address is for connecting via SSH? Then my objection doesn't apply.
The HTTP traffic goes to a server (a reverse proxy, say nginx) on the host, which reads it and proxies it to the correct VM. The client can never send TCP packets directly to the VM, HTTP or otherwise. That doesn't just magically happen because HTTP has a Host header; it happens only because nginx is on the host.
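A minimal nginx sketch of that arrangement (hostnames and the VM address are invented for illustration):

```nginx
server {
    listen 80;
    server_name vm1.box1.tld;            # matched against the Host header
    location / {
        proxy_pass http://10.0.0.11:80;  # the VM, reachable only from the host
        proxy_set_header Host $host;
    }
}
```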
What they want is a reverse proxy for SSH, and doesn't SSH already have that via jump/bastion hosts? I feel like this could be implemented with a shell alias, so that:
ssh user@vm1.box1.tld becomes: ssh -J jumpusr@box1.tld user@vm1
And just make jumpusr have no host permissions and shell set to only allow ssh.
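Or skip the alias and put it in ~/.ssh/config so plain `ssh user@vm1.box1.tld` just works (names taken from the example above):

```
Host vm1.box1.tld
    HostName vm1                  # the VM's name as seen from the bastion
    ProxyJump jumpusr@box1.tld    # hop through the box first
```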
That's one implementation. Another is a proxy that looks at the SNI information in the ClientHello and chooses the correct backend from that information _without_ decrypting anything.
Encrypted SNI and ECH require some coordination, but still don't require decryption/trust by the proxy/jumpbox, which might be really important if you have a large number of otherwise independent services behind a single address.
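For the plain-SNI case, nginx's stream module is one way to sketch this: `ssl_preread` pulls the server name out of the ClientHello without terminating TLS (hostnames and backends are illustrative):

```nginx
stream {
    map $ssl_preread_server_name $backend {
        app1.example.com 10.0.0.11:443;
        app2.example.com 10.0.0.12:443;
    }
    server {
        listen 443;
        ssl_preread on;      # peek at the SNI; nothing is decrypted
        proxy_pass $backend;
    }
}
```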
At that point you run into the problem that SSH doesn't have a host header and write this blog post.
> Proceeds to explain how the HTTP traffic flows based on the hostname.
If you wanted to flex on your knowledge of the subject you could have just led the whole explanation with
>"I know all about this, here's how it works."
Also
>"What they want is a reverse proxy for SSH"
They already did this; I'm much more impressed by the original article that actually implemented it than by your comment "correcting them" and suggesting a solution.
In 2024-2025, I did a survey of millions of public keys on the Internet, gathered from SSH servers and users in addition to TLS hosts, and discovered—among other problems—that it's incredibly easy to misuse SSH keys in large part because they're stored "bare" rather than encapsulated into a certificate format that can provide some guidance as to how they should be used and for what purposes they should be trusted:
https://cryptographycaffe.sandboxaq.com/posts/survey-public-....
See ssh_config and ssh-keygen man-pages...
A rather niche use case to promote certificate auth... I'd add that the killer-app feature is not having to manage authorized_keys.
> where the affected users might be surprised or alarmed to learn that it is possible to link these real-world identities.
I feel like it's obvious that SSH public keys publicly identify me, and if I don't want that, I can make different keys for different sites.
You can try it yourself: [0] returns all the keys you send and even shows you your GitHub username if one of the keys is used there.
[0] ssh whoami.filippo.io
"Did you know that ssh sends all your public keys to any server it tries to authenticate to?"
It should be may send, because in the majority of cases it does not in fact send all your public keys.
This is just an awfully designed feature, is all.
Are you AI?
You can wildcard-match hosts in ssh config. You generally have fewer than a dozen keys, and it's not that difficult to manage.
I have the setting configured to send only that specific host's identity, or else with this many keys I DoS myself trying to ssh into a computer sitting next to me on my desk.
Like I can’t imagine complaining about adding 5 lines to a config file whenever you set up a new service to ssh onto. And you can effectively copy and paste 90% of those 5 short lines and edit the hostname and key file locations.
So far it feels like only LDAP really makes use of it, at least with the tech I interact with.
I also know of https://github.com/Crosse/sshsrv and other tricks
I agree more SRV records would have avoided a tremendous number of unnecessary proxies and the heat wasted on unnecessary computing, but in this day and age I think ECH/ESNI-type functions should be considered for _every_ new protocol.
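For anyone who hasn't seen one, a hypothetical SRV record advertising SSH on a nonstandard port (the shape that tools like sshsrv look up) would be:

```
; _service._proto.name   TTL  class type priority weight port target
_ssh._tcp.example.com.   3600 IN    SRV  10       0      2222 bastion.example.com.
```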
Overall, DNS features are not always well implemented across most software stacks.
A basic example: DNS resolution actually returns a list of IPs, and the client should try them sequentially or in parallel, so that one can be down without impact or annoying TTL propagation issues. Yet many languages have a standard library that hands you back a single IP, or an HTTP client that assumes only one, the first.
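A sketch of the right client behavior in shell, assuming `getent` and `nc` are available (host and port are illustrative):

```shell
#!/bin/sh
# Resolve ALL addresses for a host and try each one until something answers,
# instead of trusting only the first record the resolver returns.
host=localhost
port=22
for ip in $(getent ahosts "$host" | awk '{print $1}' | sort -u); do
  if nc -z -w 2 "$ip" "$port" 2>/dev/null; then
    echo "reachable via $ip"
    break
  fi
done
```

Most standard libraries expose the full list (getaddrinfo returns one); the failure mode is clients that throw away everything after element zero.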
I have an architecture with a single IP hosting multiple LXC containers. I wanted users to be able to ssh into their containers as you would in any other environment. There's an option in sshd that lets you run a script during a connection request, so you can almost juggle connections according to the username -- if I remember right; it's been several years since I tried it -- but it's terribly fragile, tends not to pass TTYs properly, and basically everything hates it.
But set up knockd, generate a random knock sequence for each individual user, and automatically update your knockd config with it; each knock sequence then (temporarily) adds a NAT rule that connects the user to their destination container.
When adding ssh users, I also provide them with a client config file that includes the ProxyCommand incantation that makes it work on their end.
Been using this for a few years and no problems so far.
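A sketch of that client config (the knock sequence, hostnames, and the `knock` client invocation here are made up; each user gets their own generated sequence):

```
Host my-container
    HostName gateway.example.com
    # knock with this user's secret sequence, then connect normally
    ProxyCommand sh -c 'knock %h 7013 8027 9001; sleep 1; exec nc %h %p'
```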
It's a nice solution, but I've been looking for something more transparent (getting them to configure an SSH key is already difficult enough). A reverse proxy that selects a backend based solely on the SSH key fingerprint would be ideal.
1. Client side: ProxyJump, by far the easiest
2. Server side: use ForceCommand, either from within sshd_config or .ssh/authorized_keys, based on username or group, and forward the connection that way. I wrote a blogpost about this back in 2012 and I assume this still mostly works, but it probably has some escaping issues that need to be addressed: https://blog.melnib.one/2012/06/12/ssh-gateway-shenanigans/
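A minimal sketch of option 2 in sshd_config (usernames and internal hostnames are illustrative, and the quoting of forwarded commands needs more care than shown here):

```
Match User vm1-*
    # hand the session to the real target instead of a local shell
    ForceCommand ssh -q -t vm1.internal "$SSH_ORIGINAL_COMMAND"
```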
Another thing that just crossed my mind is that the proxy IP cannot be reassigned without the client popping up a warning. That may alarm security-conscious users and impact usability.
> unexpected-behaviour.exe.dev
That is not a URL; that's a fully qualified domain name (FQDN), often referred to as just a 'hostname'.
Good write-up of a tricky problem, and I'm glad to see real-world validation of the solution I was considering.
Certificate signing was done by a separate SSH service: you connected to it with SSH agent forwarding enabled, passed a 2FA challenge, and got a signed cert injected into your agent.
I'd love to learn more about how you solved it and what I may be mistaken about.
Not exactly what I built it for, but it'll do the job here too, and it can connect to private addresses on the server side.
Well, we're implicitly trusting the host when running a VM anyway (most of the time), but it's something I'd want to check before buying into the service.
EDIT: Ah, it's probably https://github.com/boldsoftware/sshpiper
I'll try to remember to look later.
[0]: https://www.ietf.org/archive/id/draft-michel-ssh3-00.html
I also know how to use SRV records, so this is a non-issue for me and everyone I work with.
This could also have been solved by requiring users to customize their SSH config (coder does this once per machine, and it applies to all workspaces), but I guess the exe.dev guys are going for a "zero-config, works anywhere" experience.
The port issue is also boringly practical. A lot of corp environments treat 22 as blessed and anything else as a ticket, so baking the routing into the name is ugly, but I can see why they picked it, even if the protocol should have had a target name from day one.
Like, I understand the really restrictive ones that only allow web browsing. But why allow outgoing ssh to port 22 but not other ports? Especially when port 22 is arguably the least secure option. At that point let people connect to any port except for a small blacklist.
One similar example of SSH-related UX design is GitHub. We mostly take git clone git@github.com:author/repo for granted, as if it were a standard git thing that existed before. But if you ever go broke and have to implement GitHub from scratch, you'll notice the beauty in its design.
You can front a TLS server on port 443 and then redirect without decrypting the connection based on the SNI name to your final destination host.
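On the client side that can look like this sketch, assuming the front door at proxy.example.com routes the SNI name ssh.example.com to something that unwraps TLS in front of an sshd (all names illustrative):

```
Host ssh.example.com
    # wrap the SSH stream in TLS; the proxy routes on the SNI and never decrypts
    ProxyCommand openssl s_client -quiet -connect proxy.example.com:443 -servername ssh.example.com
```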
Take a look at this repo: https://github.com/mrhaoxx/OpenNG
It allows you to connect to multiple hosts via the same IP, for example:
ssh alice+hostA@example.com -> hostA
ssh alice+hostB@example.com -> hostB
Still, this is the best zero-config solution in my opinion, much simpler than the solution they decided to go with.
>with SSH server
My comment was about how you do not need an ssh server. The idea of a server exposing a command line that allows potentially anything to be done is not necessary in order to manage and monitor a server.