SSHD (the server, not the client) still supports admin-defined settings by applying system-wide settings first. For those who have multi-file SSHD configurations, there is a breakdown of the many config file locations and scopes here, covering default-user, system-wide, and per-user settings:
https://egbert.net/blog/articles/ssh-openssh-options-ways.ht...
I have also broken out each and every SSHD and SSH option, along with their execution order (by file name and numbering), their state machines, dispatch, CLI equivalents, network contexts, and function nesting, all in:
https://github.com/egberts/easy-admin/tree/main/490-net-ssh
https://github.com/egberts/easy-admin/blob/main/490-net-ssh/...
Disclaimer: I do regular code reviews of OpenSSH and my employer authorizes me to release them (per contract and NDA).
This also shows how to properly mix and match authentication types using OR and AND logic (a short sketch follows below):
https://serverfault.com/a/996992
It is my dump of notes, so wade through them and enjoy.
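For reference, that OR/AND mixing is done with sshd's AuthenticationMethods directive: each space-separated list is an alternative (OR) and every method inside a comma-separated list must succeed (AND). A minimal sketch; the exact method lists here are only an example:

    # /etc/ssh/sshd_config (illustrative)
    # accept EITHER publickey+password OR publickey+keyboard-interactive
    AuthenticationMethods publickey,password publickey,keyboard-interactive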
"In placing configuration files higher than user-defined configuration but Only with SSH client, can want..."
Also, the most dangerous but most flexible way to authenticate a user:
https://jpmens.net/2019/03/02/sshd-and-authorizedkeyscommand...
https://egbert.net/blog/articles/openssh-file-authorized_key...
My only misgiving is that the key management burden gets worse, but only for the key administrator(s). It is still a viable and sustainable AA model because it provides the most important security component: instant denial of a user and/or a piece of equipment.
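For context, the mechanism those links describe is sshd's AuthorizedKeysCommand. A minimal sketch, assuming a hypothetical helper script that looks keys up centrally (the path and script are made up):

    # /etc/ssh/sshd_config (illustrative)
    # sshd runs the helper to fetch the user's public keys instead of relying
    # only on ~/.ssh/authorized_keys, so access can be revoked centrally and
    # takes effect on the very next connection attempt
    AuthorizedKeysCommand /usr/local/bin/fetch-keys %u
    AuthorizedKeysCommandUser nobody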
Are you using the verboten Chrome, with its inability to negotiate and defer to the server-side preference of ChaCha20-Poly1305 with SHA-512? It accepts only the client-demanded, Chrome-forced ChaCha/SHA-256, then AES, and then RSA.
It starts by reading the /etc/ssh/sshd.d directory, which lets admins grant or take away what a user can specify in their own config files; then OpenSSH reads in the user-defined configuration in $HOME/.ssh/sshd.d.
Inserting a configuration item into the system config directory takes away the user's ability to set or change it.
Removing it from the system directory reverts to a user-changeable default. Adding it to the user directory (with nothing in the system directory) gives the user that choice.
For finer-grained control over an option, remove it from both the system directory and the user config files, then insert it into the config file that sorts last lexically (typically 99-something.conf or 999-something.conf) and place a couple of variants under Match / Match Group blocks using deny/accept.
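A rough illustration of that last-sorting drop-in, on distros whose sshd_config Includes /etc/ssh/sshd_config.d (the file name and group name here are hypothetical):

    # /etc/ssh/sshd_config.d/99-password-policy.conf  (hypothetical)
    # global policy: no password logins anywhere...
    PasswordAuthentication no

    # ...except for one group coming from the internal network; keywords
    # inside a matching Match block override the global section above
    Match Group legacy-admins Address 10.0.0.0/8
        PasswordAuthentication yes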
You have to deal with ordering issues, symlink management in some cases, and unless the "namespace" of sorting number prefixes is strictly defined, it's never something that's convenient or durable to "patch" new files into. The proliferation of 99_* files shows the anti-utility this actually provides.
I much prefer configuration files with a basic "include" or "include directory" configuration item. Then I can scope and scale the configuration in ways that are useful to me and not some fragile distribution-oriented mechanism. Aside from that, after the xz incident I don't think I want my configurations "patchable" in this way.
If you have one big file then different tools, or even the same tool at different points of that tool's life cycle, can result in old config not being correctly removed, new config being applied multiple times, or even an entirely corrupt file.
This isn't an issue if you're running a personal system where you hand-edit those config files. But when you have fleets of servers, it becomes a big problem very quickly.
With config directories, you then only need to track the lifecycle of files themselves rather than the content of those files. Which solves all of the above problems.
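A minimal sketch of that file-lifecycle workflow for sshd drop-ins (the fragment name is made up; the service name varies by distro, e.g. ssh vs. sshd; commands assume root):

    # enable: drop a self-contained fragment into the directory
    install -m 0644 hardening.conf /etc/ssh/sshd_config.d/50-hardening.conf
    sshd -t && systemctl reload ssh    # validate the config, then reload

    # disable: remove the file; nothing else in the config needs editing
    rm /etc/ssh/sshd_config.d/50-hardening.conf
    sshd -t && systemctl reload ssh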
Either way, my notion of doing it properly is to have a set of scripts (ansible/terraform?) that rebuild the configuration from templates, rewrite it, and restart everything. Afaiu, there's no "let's turn that off by rm-ing and later turn it on again by cat<<EOF-ing", because there's no state database that could track it, unless you rely on [ -e $path ], which feels not too straightforward for e.g. state monitoring.
(I do the same basically, but without ansible. Instead I write a builder script and then paste its output into the root shell. Poor man's ansible, but I'm fine.)
So as I understand it, these dirs are only really useful for manual management, not for fleets where you just re-apply your "provisioning", or whatever the proper term is, onto your instances, without temporary modifications. When you have a fleet, any state that is not in the "sources" becomes a big problem very quickly. And if you have "sources", there's no problem turning a section of the output on and off. For example, when I need a change, I rebuild my scripts and re-paste them into the corresponding instances. This ensures that if I lose any server to a crash, hardware failure, etc., all I have to do is rent another one and right-click a script into its terminal.
So I believe that gp has a point, or at least that I don't get the rationale that replies to gp suggest itt. Feels not that important for automatic management.
Yeah a good templating language will allow you to apply conditionals but honestly, it’s just generally easier to have different files inside a config directory than have to manage all these different use cases in a single monolithic template or multiple different templates in which you need to remember to keep shared components in sync.
At the end of the day, we aren’t talking about solving problems that were impossible before but rather solving problems that were just annoying. It’s more a quality of life improvement than something that couldn’t be otherwise solved with enough time and effort.
Although I think that sounds a lot like ansible, and that's exactly why I avoid using it. It is a new set of syntax, idioms, restrictions, and gotchas, while all I need is a nodejs script that joins initialization template files together, controlled by JSON server specs. With first-class ifs and fors, and without the "code in config" nonsense. I mean, I know exactly what was meant to be done manually; I just need N-times parametrized automation, right? Also, my servers are non-homogeneous as well, and I don't think I ever had an issue with that.
Managing files instead of one big monolithic config is just easier because there’s less chance of you foobarring something in a moment of absentmindedness.
But as I said, I’m not criticising your approach either. Templating your config is definitely a smart approach to the same problem too.
You don't want instance-local changes. Do you? Afaiu, those changes are an anti-pattern because they do not persist in case of failure. You want to change the source templates and rebuild and re-propagate the results. Having ./foo.d/nn-files is excessive and serves little purpose in automated mode, unless you're dealing with clunky generators like ansible where you only have bash one-liners.
What am I missing?
But then those clunky generators do solve different problems too. Though I'm not going to debate that topic here right now, besides saying no solution is perfect, and thus choosing a tech stack is always a question of choosing which tradeoffs you want to make.
However on the topic of monolithic config files vs config directories, the latter does provide more options for how to manage your config. So even if you have a perfect system for yourself which lends itself better for monolithic files, that doesn’t mean that config directories don’t make life a lot easier for a considerable number of other systems configurations.
Since pretty much every file has different syntax, this is virtually impossible to do any other way.
You don't have weird file patching going on with the potential to mess things up in super creative ways if someone has applied a hand edit.
With .d directories you have a file, you drop in that file, you manage that file, if that file changes then you change it back.
I don't grep this argument at all. It feels like everyone's comparing to that "regular [bad] detergent" in this thread. A templating system will be as good and as error-prone to change and as modular etc as you make it, just like any program.
It applies only to local patchers (like e.g. certbot nginx) and manual changes, but that's exactly out of scope of templating and configuration automation. So it can't be better, cause these two things are in an XOR relationship.
Edit to clarify: I don't disagree with foo.d approach in general. I just don't get the arguments that in automation setting it plays any positive role, when in fact you may step on a landmine by only writing your foo.d/00-my. Your DC might have put some crap into foo.d/{00,99}-cloud, so you have to erase and re-create the whole foo.d anyway. Or at least lock yourself into a specific cloud.
If this is not how modern devops/fleet management works, I withdraw my questions cause it's even less useful than my scripts.
1. They add huge configuration files where 99% of the content is commented out.
2. Sometimes they invent whole new systems of configuration management. For example, Debian does that with Apache httpd.
I don't need any of that. I just need a simple 5-line configuration file.
My wish: ship an absolutely minimal (yet secure) configuration. Do not comment anything out. Ask users to read the manuals instead. Ship your configuration management systems as separate packages for those who need them. Keep it simple by default.
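For sshd, the kind of minimal-but-secure file the parent is wishing for might look something like this (values are purely illustrative, not a recommendation):

    # a deliberately tiny sshd_config
    Port 22
    PermitRootLogin prohibit-password
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    Subsystem sftp internal-sftp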
Is there a good reason for this design? I can’t think of one, again off the top of my head, but of course I could be missing something.
Another first-wins program I've seen used a dict/map for its config, so the check was even simpler: "if optname not in config: config[optname] = parsed_value".
https://serverfault.com/questions/367085/iptables-first-matc...
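A toy way to see the first-wins vs. last-wins difference with stock tools, using PasswordAuthentication as the example keyword (purely illustrative; real sshd parsing handles comments, whitespace, and Match blocks):

    # first match wins: stop scanning at the first occurrence
    awk 'tolower($1)=="passwordauthentication" {print $2; exit}' /etc/ssh/sshd_config

    # last match wins: same scan, but keep overwriting and print at the end
    awk 'tolower($1)=="passwordauthentication" {v=$2} END {print v}' /etc/ssh/sshd_config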
The scheme you propose is insane, and if it was ever used (can you actually back that up? The disk I/O would kill your performance for anything remotely loaded), it was rightfully abandoned for a much faster and simpler scheme.
So... it doesn't parse them once! It just does its own[1] buffering layer and implements... exactly the algorithm I described? Not seeing where you're getting the "Nope" here, except to be focusing on the one historical note about RAM that I put in parentheses.
[1] Somewhat needless given the OS has already done this. It only saves the syscall overhead.
I/O is done piecewise, a line at a time. The file is never "loaded up". Again you're applying an intuition about how parsers are presented to college students (suck it all into RAM and build a big in-memory representation of the syntax tree) that doesn't match the way actual config file parsers work (read a line and interpret it, then read another).
So, the whole file is usually loaded up (if it's short enough). At that point you might as well parse all of it, instead of re-reading it from disk over and over and redoing the same work over and over; parsed configs, if they are parsed into flags/enums and not literal strings, usually take about the same memory as, or less than, a FILE structure from libc does on the whole. The complexity of the algorithm is about the same whether the early exit is there or not (in fact, the version with the early exit, now that I think of it, has larger cyclomatic complexity, but whatever).
In said systems, RAM was such an expensive resource that we had to save individual bits wherever we could. Such as only storing the last two digits of the year (aka the millennium bug).
The computational cost of infrequently rescanning the config files then freeing the memory afterwards was much cheaper than the cost of storing those config files into RAM. And I say “infrequently rescanning” because you weren’t talking about people logging in and out of TSSs at rapid intervals.
That all said, sshd was first written in the '90s, so I find it hard to believe RAM considerations were the reason for the "first match" design of sshd's config. More likely, it inherited that design choice from rsh or some other 1970s predecessor.
And I repeat: first match involves less code. It's a simpler design. The RAM point was an interesting digression, I literally put it in parentheses!
The difference is just either: overwriting values or exiting in the presence of a match. Either way it’s the same parser rules you have to write for the config file structure.
For example:
- You might have one function that requires a list of all known hosts (so now your “stop” condition isn’t a match but rather a full set)
- another function that requires matching a specific private key for a specific host (a traditional match by your description)
- a third function that checks if the host IP and/or host name is a previously known host (a match but no longer scanning host names, so you now need your conditional to dynamically support different comparables)
- and a fourth function to check which public keys are available for which user accounts (now you’re after a dynamic way to generate complete sets because neither the input nor the comparison is fixed and you don't even want the parser to stop on a matched condition)
Because these are different types of data being referenced with different input conditions, you either need your parser to be Turing complete or you need different types of config files for those different input conditions, thus resulting in writing multiple different parsers for each of those types of config (sshd actually does the latter).
Clearly the latter isn’t simpler nor less code any more.
If you’re just after simplicity from a code standpoint then you’d make your config YAML / JSON / TOML or any other known structured format with supporting off-the-shelf libraries. And you’d just parse the entire thing into memory and then programmatically perform your lookups in the application language rather than some config DSL you’ve just had to invent to support all of your use cases.
And for the record I'm not convinced your way is simpler. The code gets sprinkled with config loading calls instead of just checking a variable, and the vast majority of the parser is the same between versions.
We're done. You're "not convinced" my way is simpler because you're not willing to give ground at all. This is a dumb thing to argue about. Just look at some historical parsers for similar languages, I guess. Nothing I've said here is controversial at all.
Different people are making different points. Nobody is arguing in bad faith.
> sudo sshd -T | grep password

sshd -T reads the configuration file and prints information. It doesn't print what the server's currently-running configuration is: https://joshua.hu/sshd-backdoor-and-configuration-parsing
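For what it's worth, -T can at least be pointed at a hypothetical connection so that Match blocks are evaluated too; the connection parameters below are placeholders:

    # print the effective (file-derived) config as it would apply to one client
    sudo sshd -T -C user=alice,host=client.example.com,addr=203.0.113.5 | grep -i passwordauthentication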
Every configuration change immediately applies to every new connection - no need to restart the service!
What good does cloudinit do really?
It's useful for initializing state that could not have been initialized before booting in the target environment. Canonical example, I guess, being ssh server and client keys management, but the list of modules it implements is long.
Well, why would it come up? You don't need to worry about things you don't need to worry about.
but i guess learning is better late than never type of thing.
Also, what confuses people more about this is that OpenSSH is properly designed, so in its configs the first-seen value wins: file 0_ wins over 99_... whereas most people see badly designed software where 99_ overrides 0_. OpenSSH does it this way exactly so it works best with ssh config options that match by host: you can place the more important stuff first without fear that defaults will override it.
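A small ~/.ssh/config illustration of that first-match behaviour (host names are made up):

    # specific host first: these values win because they are seen first
    Host build.example.com
        User deploy
        Port 2222

    # catch-all defaults last: they only fill in options not already set
    Host *
        User me
        Port 22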
They've updated the documentation on /etc/ssh/sshd_config https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/20...
The best reason to do it this way seems to be that files are the unit of package management. Perhaps we need a smarter package manager.
My nginx.conf life got better when I deleted sites-available and sites-enabled and defined my sites inline in nginx.conf.
The only thing worse is when the configuration is actually a program that generates the configuration, like in ALSA.
And the only thing worse than ALSA style is Xorg style, with a default configuration generated by C code and you can only apply changes to it without seeing it. Xorg also has this weird indirection thing where the options are like Option "Foo" "Bar" instead of Foo "Bar", but that's a nitpick in comparison.
I checked on my OpenBSD (7.6) System and Slackware (15.0) and that directory does not exist. I checked the man page for sshd and there is no mention of that dir.
Is this a new thing/patch that Linux people came up with?
Also, if it's not too much trouble, would someone help me understand why such files are required to start with numbers? In this case it's 10-no-password.conf.
I have noticed a similar structure for apt and many other packages.
You should be able to set
alias foo='foo -p 80'
And still write foo -p 81
> espeak suffers this affliction

What made it obsolete?
No tool can protect you from your own assumptions about how said tool works.
But as a design approach, most designs go for the “principle of least surprise.”
And that’s how I read the original comment: a well designed system wouldn’t do this. Joke is on them, though, because nobody designed this.
> ... a well designed system wouldn’t do this. ...
A well-designed system would be able to explain its decisions and document them somewhere. Perhaps in the manual.
As you say, I am used to checking the docs in Linux. It’s the convention that no convention shall be assumed. Is that good design?
Glad to make your day. You are welcome.
Ultimately we all end up reading the manual. I'd still prefer if I didn't have to remember how a certain C stdlib function works vs. what seems intuitive.
But that's a lost cause with a lot of people. They'll happily point out how "intuitive" differs among different groups of people and all that, merrily missing the point on purpose.
Oh well. At least I found out without locking myself out of my servers.
Intuitive is highly subjective: it might be intuitive to you but not to others, and vice versa, and it is part of the job to read the manual.
> But that's a lost cause with a lot of people. They'll happily point out how "intuitive" differs among different groups of people and all that, merrily missing the point on purpose
What is your point? Are you arguing against documentation? You told me you are not averse to reading the documentation, yet you are complaining about it and bringing "intuition" into this. I am confused. Could you clarify your point?
My point is that intuitive is not as subjective as you make it out to be. Which is partially reinforced by a lot of software where "last one wins" is the policy. This example here sticks out like a sore thumb.
No point pursuing however because you seem hellbent on defending tradition which is something that tires and bores me.
Just agree to disagree, move on.
Moving on.
Taking a note of your username. I'll go out of my way to avoid you.
Bye.
Please do, I am the guy who writes and reads documentation.
In the case of strtok, I am not going to implement my own if strtok does what I want it to do, and behaves how I know it does. Why would I?! Sometimes what I need is strtok, sometimes strsep, sometimes I may use strtok_r.
You're making a big assumption that people are averse to reading documentation.
You are likely downvoted because you prefer to make your opponents look irrational so you can easily defeat them.
Tearing down a straw man is not a welcome discussion tactic around here. Maybe that can help you.
> To me "first one wins" might be intuitive TO YOU, but to me "last one wins" is.
What I mean is that "first one wins" might be intuitive to you, but to me "last one wins" is, and apparently I was wrong, but I would have known at least, because I do read documentation.
It does make sense, indeed, that "first one wins", though.
> Why would "last one wins" be dumbing down the tool, exactly?
I did not refer to that as dumbing down the tool. That said, if you are unsure whether it is first or last one wins, read the documentation. There is no objective intuition here. To me "first one wins" might be intuitive TO YOU, but to me "last one wins" is.
> You're making a big assumption that people are averse to reading documentation.
Some people definitely are, and they openly tell you that on here, too.
If you look further into my comments where I discuss "strtok", you will see it for yourself.
> You are likely downvoted because you prefer to make your opponents look irrational so you can easily defeat them.
I got down-voted because I claimed strtok is straightforward to use once you have read the documentation. I do not see how I am making them look irrational either (nor is it my intention). I am just trying to encourage people to read the documentation.
Modern programming is not like it was 30 years ago. We have literally hundreds, if not thousands, of bits and pieces to assemble. I couldn't care less what some lone cowboy thought "strtok" should do decades ago. And how genius it seemed to him.
Apropos, why use "strtok" at all in this case, btw? Fine, the function might make perfect sense. The tool's behavior does not.
...But, well, he did make me care ultimately, right? But it's not welcome, and now I think less of that person.
But again -- those were different times. To me if you don't do what seems intuitive (and yes I am replying to your comment here after the other and yes I am aware we'll never agree on what's intuitive), defined also broadly as "what many other programs do" then you are just John Wayne-ing your way into my hatred.
Nevermind though. I knew some UNIX cowboys of old. I don't miss them one bit. The way this tool behaves smells very strongly of them.
I do not know about you, but I write documentation for my programs, both a manual page, and comments in the source code. If you do this as well, then why would you do that? What if someone blames your tool just because they did not read the documentation?
> I couldn't care less what some lone cowboy thought "strtok" should do decades ago. And how genius it seemed to him.
If you do not know how strtok behaves, that is your problem, it is well-documented. If you do not want to read its manual page, just roll your own for all I care.