The ease of use and high quality of the Go SSH libraries (golang.org/x/crypto/ssh) are a killer feature of Go, imho.
Also, there is a high-level abstraction, github.com/gliderlabs/ssh, which makes it completely trivial to embed an SSH server into an application, giving you a nice way to inspect counters and flip feature flags and tunables.
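As a hedged illustration of how little code this takes (the counter is a made-up example of something you might want to inspect at runtime; the block requires the github.com/gliderlabs/ssh dependency):

```go
package main

import (
	"fmt"
	"io"
	"log"

	"github.com/gliderlabs/ssh"
)

// requestCount is a hypothetical tuneable/counter you might expose.
var requestCount int

func main() {
	// Every incoming SSH session runs this handler.
	ssh.Handle(func(s ssh.Session) {
		requestCount++
		io.WriteString(s, fmt.Sprintf("requests so far: %d\n", requestCount))
	})
	// Serves with an auto-generated host key by default.
	log.Fatal(ssh.ListenAndServe(":2222", nil))
}
```

Then `ssh -p 2222 localhost` from another terminal talks to your app directly.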
The knownhosts handling in particular has a bunch of common land-mines. I'm the maintainer of a wrapper package https://github.com/skeema/knownhosts/ which solves some of them, without having to re-implement the core knownhosts logic from x/crypto/ssh.
Just to illustrate how common these land-mines are, my wrapper package is imported by 8000 other repos on GitHub, although most of these are indirect dependencies: https://github.com/skeema/knownhosts/network/dependents
That was my experience with CPAN, anyway. It's not perfect but it's miles above other language module cultures.
I originally created my knownhosts wrapper to solve the problem of populating the list of host key algorithms based on the knownhosts content. Go's x/crypto/ssh provides no straightforward way to do this, as it keeps its host lookup logic largely internal, with no exported host lookup methods or interfaces. I had to find a slightly hacky and very counter-intuitive approach to get x/crypto/ssh to return that information without re-implementing it.
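For context, the wrapper makes this a one-liner at client-setup time. A hedged sketch based on the package's documented usage (the path and host are placeholders, and the exact API differs between versions; requires the github.com/skeema/knownhosts and golang.org/x/crypto/ssh dependencies):

```go
package main

import (
	"fmt"
	"log"

	"github.com/skeema/knownhosts"
	"golang.org/x/crypto/ssh"
)

func main() {
	// Parse an existing known_hosts file (placeholder path).
	kh, err := knownhosts.New("/home/me/.ssh/known_hosts")
	if err != nil {
		log.Fatal(err)
	}
	config := &ssh.ClientConfig{
		User:            "me",
		HostKeyCallback: kh.HostKeyCallback(),
		// Prefer the algorithms already recorded for this host,
		// avoiding spurious key-mismatch failures.
		HostKeyAlgorithms: kh.HostKeyAlgorithms("example.com:22"),
	}
	fmt.Println(config.HostKeyAlgorithms)
}
```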
And to be clear, re-implementing core logic in x/crypto/ssh is very undesirable because this is security-related code.
You would hope a new module would reuse as much of the previous base modules as it can, but sometimes it's enough just to put some new code in that namespace, with the intent that someone will find it more easily and build off of it. The hierarchy is for organization, discovery, and distribution as much as it is about good software-development practice, the goal being to improve the overall software ecosystem.
(and I was a professional Perl programmer for the first 5 years of my career, so I'm not asserting this out of lack of familiarity with CPAN!)
That all said: I don't even think what you're saying about CPAN is terribly similar to the situation being discussed here, since Go's x/crypto/ssh (and all other x/ packages) are officially part of the Go Project and are maintained by the Go core maintainers. See https://pkg.go.dev/golang.org/x. Third-party Go developers cannot add new packages to this namespace at all.
Everything you've said sounds great, with the assumption that the maintainers can maintain their pieces indefinitely and independently. But we're mortal. And I know the independent maintainers in places like CPAN are humans, not companies.
I guess it's a sign you're getting old when you start worrying about this kind of thing.
If nobody wants to maintain the old code, or the design wasn't ideal, oftentimes people will create a "v2" or "-ng" rewrite of it and try to keep backwards compatibility. Then the people who made sub-modules can simply publish their modules on top of the new base module. Old code continues running with the old dependencies until somebody links the old code to the new base module.
We found the native Go SSL libraries (as used natively by, e.g., the http package) to add many milliseconds to web API calls. We eventually substituted OpenSSL (despite not really wanting to). It significantly sped up the app.
YMMV, this is for ARM 32-bit targets.
If you're having performance issues with TLS I would look at what sort of crypto you're using. At least for SSH, RSA is dog slow. It wouldn't surprise me if you can eke out quite a bit of performance by switching to ed25519.
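To illustrate the gap, here's a stdlib-only sketch comparing raw signing cost (signing dominates the server's share of handshake crypto). Exact numbers vary by hardware, but ed25519 typically wins by an order of magnitude or more:

```go
package main

import (
	"crypto"
	"crypto/ed25519"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
	"time"
)

// signDurations times 50 signatures each with RSA-2048 and ed25519.
func signDurations() (rsaDur, edDur time.Duration) {
	msg := []byte("handshake payload")
	digest := sha256.Sum256(msg)

	rsaKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	start := time.Now()
	for i := 0; i < 50; i++ {
		if _, err := rsa.SignPKCS1v15(rand.Reader, rsaKey, crypto.SHA256, digest[:]); err != nil {
			panic(err)
		}
	}
	rsaDur = time.Since(start)

	_, edKey, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	start = time.Now()
	for i := 0; i < 50; i++ {
		ed25519.Sign(edKey, msg)
	}
	edDur = time.Since(start)
	return rsaDur, edDur
}

func main() {
	r, e := signDurations()
	fmt.Printf("rsa-2048: %v  ed25519: %v (%.0fx faster)\n", r, e, float64(r)/float64(e))
}
```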
Did you try with GOEXPERIMENT=boringcrypto?
Ansible facts can probably be a cross-platform way to collect most of the information you need. For the use cases where scp'ing the binary is needed, I think Ansible supports jump-host config too. But I agree that for one-off tasks, running a single binary is convenient compared to setting up Ansible.
Basically, I want app/kinds-of-data and not the other way around.
Don't get me wrong -- some of the choices made by the XDG/FreeDesktop folks rub me the wrong way too ...
~/.cache and ~/.config and ~/.local/share and ~/.local/state and ~/.local/bin
I used to get annoyed by non-compliance to XDG. Now I wonder if I'd actually prefer apps to reverse the hierarchy (eg, ~/.apps/nvim/{cache,config,state}).
Make it clear what needs to be backed up, what is ephemeral, and so on. Just put all the ephemeral data in ~/.cache. Chromium in particular is bad at this and has many types of cache.
This is a huge part of why I like docker-compose and docker in general, I can put everything I need to backup in a set of volume maps next to each other.
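A hypothetical compose fragment illustrating the pattern (the service and image names are made up): bind-mount everything worth backing up under one directory, and keep caches in a named volume you can exclude:

```yaml
services:
  app:
    image: ghcr.io/example/app:latest
    volumes:
      - ./data/app-config:/etc/app      # config: back up
      - ./data/app-state:/var/lib/app   # state: back up
      - app-cache:/var/cache/app        # cache: ephemeral, skip
volumes:
  app-cache:
```

Backing up the host's `./data` directory then captures everything that matters.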
Is the spec perfect? No, of course not. But is it thoughtful, and does it address genuine needs? Yes, certainly.
a) store caches & libdata on a different disk
b) consistently 'reset' cached data for kiosk-style logins
c) make config read-only, or reset it to a known-good state
d) roaming profiles where the cache is excluded from sync across machines
Most computers + home directories are 'personal', where this largely doesn't matter, but there are often sound operational reasons for this separation in cases where you are responsible for a fleet of computers. I too prefer the 'everything related to this app in one dir' approach. Crazy idea: for apps adhering to XDG, you could point all these vars at a directory under a FUSE-style mount, which then remaps the storage any way you'd like. :)
Although I'll never forgive XDG for renaming etc to config and var to state. Would be so convenient to set PREFIX=~/.local for some things
I have the same issue with the scripts which trigger `rsync` getting confusingly complex because of all the include/exclude arguments.
https://manpages.debian.org/bookworm/rsync/rsync.1.en.html#f...
Lots of things, like the Rust toolchain, now create CACHEDIR.TAG files so that backup tools can ignore that part of the hierarchy. Alas, I believe the rsync folks refuse to implement it.
Not only did they fragment the ecosystem with their self-defined standards; their standard also contains a whole search path with priority-hierarchy baggage, yet it is underspecified enough that every piece of software handles it differently.
Just ignore it and pretend it doesn't exist.
So, neither one really.
Disclaimer: I'm usually very good at hitting the ground running, but I'm just as bad at "keeping the pace", i.e. diving deep into stuff.
Go is just easier to read. There typically aren't a lot of generics to assemble in your mental model, no lifetimes to consider, no explicit interface implementations, and so on. All of those things in Rust are great for what they do, but I think they make it more difficult to breeze through a codebase compared to Go.
At a beginner level, rustlings[1] is an excellent resource for following along with any book/tutorial and doing the relevant exercises to apply the concepts from the learning material.
At a higher level, I guess (re)implementing some tool that you use daily is another way to dive deep into Rust. I suspect it's one of the reasons why we see an unusual number of "rewrite of X in Rust" projects.
One resource I would highly recommend after the basic stuff people always recommend is a book called "Learn Rust With Entirely Too Many Linked Lists".
Antiquated and verbose error-handling model. The reliance on code generation because of the lack of a decent type system. The fact that you have to carefully read through every function because nothing is immutable by default, arguments are passed as pointers, and there are no functional operations, e.g. filter.
It's a language that belongs back in the 1990s.
Well done!
Otherwise it looks good, great job !
1. What happens if the tunnel breaks? Does it retry instantly? Is there any sort of exponential backoff? Just wondering, if the server is down, whether it would spike the CPU or be gentle (while still being fast enough).
2. Would you be adding support for a SOCKS proxy? The equivalent ssh command is quite simple, and it's as useful as regular remote and local tunnels.
I don't think "I made X to do Y" ever means "I made X do Y" does it?