https://web.archive.org/web/19991109163128/http://www.dfw.ne...
Using named pipes this way also enabled a hackish method of creating server-side dynamic web content by symlinking index.html to such a pipe, which was a secondary motivator. It seems kinda quaint and funny now, but at the time we hadn't long since decommed our gopher server, so fingerd was still a thing, Apache was fairly new, and I may still have been trying to convince management that the right move was not from NCSA httpd to Netscape Enterprise Server, but to Apache+mod_ssl. RSA patent licensing may still have been a thing too. Stronghold vaguely comes to mind, but I digress.
Yeah, programs that do stuff based on filename, like busybox. Oh, and this long forgotten artifact this article just reminded me of that I managed to find in the Wayback Machine, a tool to mknod a named pipe on a SunOS 4.1.4 machine, to get server-side dynamic content when remotely accessing a daemon that was supposed to return content from a single static file. Ah, memories.
To run two different things, you now need multiple copies of the same program.
What does the parsing? How do you split out a URL? What order are the "flags" in? Do you have named arguments? Now you need your own custom parsing library instead of just using exactly what everyone else uses.
Where do you go for help? Do you rename it to my_program_help.exe then rerun it?
What about chaining things together? Anything dynamic? Is the caller script expected to rename your program before running it?
> fetch---api.github.com---repos/owner/project---q=stars>100---o=json.exe
Oh lord.
> Imagine install_PY3_MODULE_NAME.exe. It reads the filename, extracts the Python module name, downloads dependencies, sets up Python if needed, and creates a launcher. Rename it, and you have a new installer for a different project. Icons, mirrors, or other metadata can also live in the file as resources – all self-contained, all shareable.
Imagine changing that to "install_python.exe --module module_name".
What you really want instead is a single executable, plus scripts or even aliases that are named for what they do and act as super-thin wrappers. One copy; no moving or renaming anything.
`fetch---api.github.com---repos/owner/project---q=stars>100---o=json.exe`
and 50 different copies for various projects, is replaced with
`fetch.exe`
and
`top_100_github_repos.exe`
`highest_rated_github_repos.exe`
`get_weather.exe`
Which are single line scripts that pass on arguments to the base program. Which also means you can fix any issues in one place.
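For illustration, such a wrapper could be a few lines of Python; this is a minimal sketch where `fetch` and all of its flags are hypothetical, echoing the example filename above:

```python
#!/usr/bin/env python3
# top_100_github_repos.py - a thin, named wrapper around one shared base tool.
# "fetch" and every flag below are hypothetical, taken from the example above.
import sys

BASE_COMMAND = [
    "fetch",
    "--url", "api.github.com/repos/owner/project",
    "--q", "stars>100",
    "-o", "json",
]

def build_command(extra_args):
    # Forward caller-supplied arguments so the wrapper stays flexible.
    return BASE_COMMAND + list(extra_args)

if __name__ == "__main__":
    cmd = build_command(sys.argv[1:])
    print("would run:", " ".join(cmd))  # a real wrapper would subprocess.run(cmd)
```

Fixing a bug in the base tool then fixes every wrapper at once, which is the point being made above.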
IMV it's a clever trick, and like you, my instinct is that if I integrated this into my own workflows I would endure some sort of hardship down the line, though it's not immediately obvious when or how. Or maybe for certain things it would be fine and less painful than other options, like other similarly clever tricks I felt uneasy about at first.
And good luck trying to run the same programs with different arguments. You'll have to take turns renaming the file, or create hardlinks just for ephemeral arguments.
It can be useful, but there's a time and place for it.
> And good luck trying to run the same programs with different arguments
I don't read the idea as trying to replace arguments entirely, as in "don't ever use arguments anymore", but as a DSL for _allowing_ supported arguments to be pulled into the filename. Basically an args-list preprocessor. That would only take away your freedom to include triple dashes in your filename without consequences.
foo.exe
foo.config
foo.config contents:
--flag1 --flag3
--flag2 --flag5 some_param
run: `foo.exe --config 2`

Compared to shell scripts, this is a portable solution that will work across different environments, and compared to including arguments in filenames, it's not insane and doesn't require duplicating the entire binary to maintain multiple configs. The only merit I see to the filename approach is that it gives you a single file instead of two (if you have exactly one config), but I don't think that tradeoff is worth it.
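A minimal sketch of that `--config N` idea, assuming line N of the config file holds one argument list and gets split with shell-style quoting rules (file I/O omitted for brevity):

```python
# Minimal sketch of `foo.exe --config N`: line N (1-based) of foo.config
# holds one argument list; the selected line is split with shell-style
# quoting via shlex. Reading the actual file is omitted for brevity.
import shlex

def args_from_config(config_text: str, n: int) -> list[str]:
    """Return the argument list on line n (1-based) of the config text."""
    lines = config_text.splitlines()
    return shlex.split(lines[n - 1])

FOO_CONFIG = """\
--flag1 --flag3
--flag2 --flag5 some_param
"""

# `foo.exe --config 2` would expand to the second line's arguments:
print(args_from_config(FOO_CONFIG, 2))  # ['--flag2', '--flag5', 'some_param']
```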
The biggest thing for me is that you can name the configurations for what they're doing not how.
For example:
> fetch---api.github.com---repos/owner/project---q=stars>100---o=json.exe
could become something like
top_starred_repos.bat/exe/sh
This also spares me and others from battling escaping problems as arbitrary arguments get added to the scheme.
With some agreement on mapping (maybe just `%HH` for anything outside `A-Z a-z 0-9 . _ -`), this could be completely standardized and made part of standard library argument parsers.
I could see a bunch of my utility scripts replaced with a python script and a `uv` shebang if this was in argparse.
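As a sketch of what that mapping could look like, assuming the `---` separator from the examples above and `%HH` escaping of every byte outside `A-Z a-z 0-9 . _ -` (the exact rules here are my guess, not a spec):

```python
# Sketch of the proposed mapping: %HH-escape any byte outside
# A-Z a-z 0-9 . _ - and join arguments with "---". Both the separator
# and the safe set are assumptions taken from the comments above.
# Note: a literal "---" inside an argument would still collide.
SAFE = set("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._-")

def escape(arg: str) -> str:
    return "".join(chr(b) if chr(b) in SAFE else "%%%02X" % b
                   for b in arg.encode("utf-8"))

def unescape(text: str) -> str:
    out, i = bytearray(), 0
    while i < len(text):
        if text[i] == "%":
            out.append(int(text[i + 1:i + 3], 16))
            i += 3
        else:
            out.append(ord(text[i]))
            i += 1
    return out.decode("utf-8")

def args_to_name(prog: str, args: list[str]) -> str:
    return "---".join([prog] + [escape(a) for a in args]) + ".exe"

def name_to_args(name: str) -> tuple[str, list[str]]:
    parts = name.removesuffix(".exe").split("---")
    return parts[0], [unescape(p) for p in parts[1:]]

print(args_to_name("fetch", ["q=stars>100", "o=json"]))
# fetch---q%3Dstars%3E100---o%3Djson.exe
```

Since `%` itself is outside the safe set it gets escaped too, so the round trip is unambiguous (triple dashes aside).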
> Flags are ephemeral – you have to share the command line or wrap it in a script. Scripts depend on environment, which can break portability. Filenames solve both: the program describes itself, requires zero setup, and any configuration can be shared by simply renaming the file.
[Emphasis added] Although I find a script that wraps the command and calls it more versatile, there might be some value in this idea for some very simple cases, like example #4.
That said, apparently there's cursed methods of having a universal shell/batch file of sorts, according to https://stackoverflow.com/questions/17510688/single-script-t....
Anyway, I'd argue for the vast majority of cases, a shell script that wraps the command and its flags is fine.
I get the problems but I don't think I've ever had both at once. A need to portably wrap and share a specific command line for a specific program?
For the case of broadcast it seems easiest to just document the proper command line options. For the case of "unicast" I can just ask the other person what their environment is so I can craft the appropriate wrapper for them.
The area of overlap in the Venn diagram is infinitesimally narrow.
I truly think it should be avoided in general, especially when what you actually want (in OP’s article) is a config file.
[0] https://github.com/Frizlab/frizlabs-conf/blob/44030f4123e683... (w/ that, also see the aliases in the folder of the script)
bunzip2 / bzcat / bzip2
gunzip / uncompress
unzip / zipinfo
pigz / unpigz
pkg-config / x86_64-pc-linux-gnu-pkg-config
perlbug / perlthanks
Use ls -li to show the inode number for each file or directory. For example:
$ ls -li /usr/bin/{bzip2,bunzip2,bzcat}
23069197 -rwxr-xr-x 3 root root 39144 Sep 5 2019 /usr/bin/bunzip2
23069197 -rwxr-xr-x 3 root root 39144 Sep 5 2019 /usr/bin/bzcat
23069197 -rwxr-xr-x 3 root root 39144 Sep 5 2019 /usr/bin/bzip2

Somewhat similar to how busybox does its thing.
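The dispatch trick behind that is just a switch on argv[0]; a toy sketch, where the applet table and stub behaviors are invented for illustration and not real (de)compression code:

```python
# Toy sketch of busybox-style multi-call dispatch: one program picks its
# behavior from the name it was invoked under (argv[0]). The applet
# functions are stand-in stubs, not real (de)compression code.
import os

def compress(args):
    return "compress " + " ".join(args)

def decompress(args):
    return "decompress " + " ".join(args)

APPLETS = {
    "bzip2": compress,
    "bunzip2": decompress,
    "bzcat": decompress,  # in the real tool, bzcat decompresses to stdout
}

def dispatch(argv0, args):
    # Hard links give the same inode several names; basename picks which one.
    name = os.path.basename(argv0)
    try:
        return APPLETS[name](args)
    except KeyError:
        raise SystemExit(f"{name}: unknown applet")

print(dispatch("/usr/bin/bunzip2", ["archive.bz2"]))  # decompress archive.bz2
```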