More at "jq is a symptom" - https://ilya-sher.org/2018/09/10/jq-is-a-symptom/
Related and might be interesting - "JSON tools for command line" - https://ilya-sher.org/2018/04/10/list-of-json-tools-for-comm...
Commands with support would detect the shell and output an SQL schema along with insert statements containing the result of the command on a special file descriptor. These would be applied to a newly created 'history#' table and aliased to the 'result' table. The standard output of the command would still be displayed as it is now.
You could then run 'ls' and follow it with 'select name,size,owner from result where size > 1024 and owner != 0;', and continue to run queries against the last result until it's replaced by a new command. Or run queries against the 'history#' tables, or join multiple tables to combine the results of independent commands.
With support for JSONB in most SQL engines this would provide a nice mechanism for working with any sort of nested data structure from any given program or from plain JSON files if required.
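A rough sketch of how this might look if you drove it by hand with sqlite3 (the 'history1' table, the schema, and the data are my assumptions, just to make the flow concrete):

# Hypothetical: the shell would emit and apply something like this behind the scenes
$ sqlite3 session.db <<'SQL'
CREATE TABLE history1 (name TEXT, size INTEGER, owner INTEGER);
INSERT INTO history1 VALUES ('notes.txt', 2048, 1000), ('init.sh', 512, 0);
CREATE VIEW result AS SELECT * FROM history1;
SELECT name, size, owner FROM result WHERE size > 1024 AND owner != 0;
SQL
notes.txt|2048|1000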
Edit: in relation to the other sibling comment, Nushell was developed as an improvement over PowerShell. I use both and prefer Nushell's syntax.
I did hear an opinion (somewhere else) that a shell shouldn't have associative arrays either, and that adding them was a mistake...
> Nushell
Nushell is a candidate and some people use it and like it. I personally disagree with the design. Then again, I'm not objective: I created Next Generation Shell. It's a fully-fledged programming language with domain-specific facilities (for running external programs, for example). I like this approach more than trying to improve on a shell. It fits my brain better.
Nushell can run external programs too, and process their output. Something just clicked for me with Nushell.
I think it's about alignment of the project and how you think.
If it doesn't click this time, I'm working on the user interface of Next Generation Shell now, and I'd encourage you to revisit the project once that is done.
Plan:
https://github.com/ngs-lang/ngs/wiki/UI-Design
https://github.com/ngs-lang/ngs/wiki/UI-Chain-Design
Tldr: interactive objects on the screen; each interaction produces an "interaction record" (structured data about what happened); semantic understanding of what's happening; a record/replay facility.
$ echo "$json" | cljq '(? $ "root" * 1)'
seems more intuitive than the good ol' jq:

$ echo "$json" | jq '.root | map(.[1])'
Really, people should know by now that jq does point-free programming. Personally, I probably would've written '[.root[][1]]' for that problem myself, though; not a huge fan of map/1.
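For what it's worth, both spellings agree on a small sample (the JSON here is made up for illustration):

$ json='{"root": [["a", 1], ["b", 2]]}'   # made-up sample input
$ echo "$json" | jq -c '.root | map(.[1])'
[1,2]
$ echo "$json" | jq -c '[.root[][1]]'
[1,2]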
1) I dislike that .[1] can be both an expression evaluated as a query and a "lambda". Really messes with my mind.
2) In my eyes, it's more intuitive because it looks like globbing and everybody knows globbing (this is the reason I use `**` too).
But yeah, this is a bit subjective. What isn't, though, is that I don't plan on adding much more than that; maybe merge, transform and an accessor using the same syntax. So if you know the host language, there's much less friction.
I really see this like Avisynth vs Vapoursynth.
Had similar thoughts a couple years ago, and wrote jql[0] as a jq alternative with a lispy syntax (custom, not Common Lisp), and I’ve been using it for command-line JSON processing ever since!
I like jq and gnuplot quite well. Makes me want to try CMake out ;)
echo "$SOME_JSON" | jq '.[]' --raw-output --compact-output | while read -r LINE ; do ... ; done
...lets you process stuff "record by record" pretty consistently. (And `( xxx ; yyy ; zzz ) | jq --slurp '.'` lets you do the reverse, "absorbing" multiple records into an array.)
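For instance, a throwaway demo of the slurp direction:

$ ( echo '{"a":1}' ; echo '{"b":2}' ) | jq -c --slurp '.'   # two records in, one array out
[{"a":1},{"b":2}]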
echo "{}" | jq --argjson FOO "$( cat test.json )" '{ bar: $FOO }'
...lets you "load" json for merging, processing, formatting, etc. The leading "{}" is moderately necessary because `jq` technically _processes_ json, not generates it.Finally, it's a huge cheat code for string formatting!!
$ echo "{}" | jq \
--arg FOO "hello \$world" \
--arg BAR "complicated \| chars" \
--arg ONE 1 \
--arg TWO 2 \
'"aaa \( $FOO ) and \( $BAR ) and \( ($ONE | tonumber) + ($TWO | tonumber) ) bbb"'
"aaa hello $world and complicated \\| chars and 3 bbb"
...optionally with `--raw-output` (un-json-quoted), and it even supports some regex substitution in strings via `... | gsub(...)`.

Yes, yes... it's overly complicated compared to you and your fancy "programming languages", but sometimes with shell stuff, the ability to _CAPTURE_ arbitrary command output (eg: `--arg LS_OUTPUT "$( ls -lart ... )"`) and then use JSON/jq to _safely_ marshal/deaden the data into JSON is really helpful!
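For example, a minimal sketch (the variable and key names are made up):

$ echo "{}" | jq --arg OUT "$( ls -lart )" '{listing: $OUT}'   # OUT/listing are made-up names

...which yields a single JSON object whose "listing" value holds the raw directory listing, newlines and all, safely escaped.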
The --null-input/-n option is the "out-of-the-box" way to achieve this, and avoids a pipe (usually not a big deal, but leaves stdin free and sometimes saves a fork).
This lets you rewrite your first "pattern":
jq -cnr --argjson SOME_JSON "$SOME_JSON" '$SOME_JSON[]' | while read ...
We also have a "useless use of cat"; --slurpfile does that job better:

jq -n --slurpfile FOO test.json '{bar: $FOO[]}'
(assuming you are assured that test.json contains exactly one JSON value; --argjson will immediately fail if this is not the case, but with --slurpfile you may need to check that $FOO is a 1-item array.)
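A hedged sketch of that check:

jq -n --slurpfile FOO test.json 'if ($FOO | length) == 1 then {bar: $FOO[0]} else error("expected exactly one value") end'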
And of course, for exactly the single-file, single-object case, you can just write:

jq '{bar: .}' test.json
Pipelines allow consistent syntax, but thanks for pointing out all the different variations of file support in jq.
This issue is almost negated today: I find myself no longer writing jq queries or regular expressions (both of which I'm quite proficient in), but having AI write them for me. This is exactly where so-called "vibe coding" shines, and why I no longer care about tool-specific DSLs.
* [lqn](https://github.com/inconvergent/lqn) - query language and terminal utility for querying and transforming Lisp, JSON and other text files.
(by this person doing nice generative art: https://inconvergent.net/)
Can convert between JSON<>EDN<>YAML<>Transit easily, plus includes a nifty little query language that is basically Clojure, so data transformations/extraction ends up really simple and concise.
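For example (a quick sketch from memory; double-check the flags against `jet --help`):

$ echo '{"status": "ok"}' | jet --from json --to edn   # sample input made up
{"status" "ok"}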
I've always liked jq for simple things, but since I never sat down to actually work through the syntax, harder things tend to be too complicated to figure out quickly. I usually end up using Jet instead, since if you already know Clojure, you already know the query language Jet uses.
Please, don't do that!
1: If it speeds things up non-negligibly, there's almost always a way to get a similar speedup without setting safety to 0; e.g. if you check your types outside of your hot loops, the compiler is smart enough to omit type-checks inside the loop.
It's kind of like building in Debug mode in other languages. Internally and for testing, use (safety 3). If the code in question doesn't trigger any errors or warnings, then in most cases it's safe to switch to (safety 0) and get the tiny performance boost.
I wouldn't recommend (safety 0) globally, but it's probably fine locally in performance-critical code that's been tested well; though I do agree it's probably not worth going to (safety 0) in most cases.
The best solution is a compiler whose (speed 3) optimization level is smart enough to optimize out the unnecessary safety checks from (safety 3). I think SBCL can do that in some cases (the safety checks get optimized for speed, at least).
This is trivially not true. Consider:
(defun foo (x)
  (declare (optimize (safety 0))
           (type (array fixnum (4)) x))
  ;; lots of code that doesn't trigger any warnings
  ...)
Then, in a different source file, call e.g. (foo nil).
Nothing good will come of that.

> I wouldn't recommend (safety 0) globally, but it's probably fine locally in performance critical code that's been tested well, but I do agree it's probably not worth going to (safety 0) in most cases.
> The best solution is a compiler who's (speed 3) optimization level is smart enough to optimize out the unnecessary safety checks from (safety 3). I think SBCL can do that in some cases (the safety checks get optimized for speed, at least).
The only thing I can think of is that I communicated poorly, because this is nearly exactly what I was saying in my comment.
I think we both agree that 99.9% of the time it's not worth using (safety 0), though.
A more apples-to-apples comparison would be to use `find ... -exec ... {} +` to pass multiple filenames to jq, and to output using input_filename.
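A minimal sketch of that shape (the paths and the .name field are made up):

$ find . -name '*.json' -exec jq -r 'input_filename, .name' {} +   # '.name' is a made-up field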
E.g. Ruby:
require 'json'
script = ARGV.pop
$J = JSON.parse(ARGF.read)
puts JSON.pretty_generate(eval(script))
Lets you do:

curl --silent https://api.vultr.com/v2/regions | ./jr '$J["regions"].find{_1["id"]=="nrt"}'
To translate their example:

curl --silent https://api.vultr.com/v2/regions | rcl query 'input.regions.key_by(r => r.id).nrt'
(... if you want to commit evil monkey-patching sins:
#!/usr/bin/env ruby
require 'json'
class Hash
def method_missing(sym) = (((r = self.dig(sym.to_s)) and return r) or super)
end
script = ARGV.pop
$J = JSON.parse(ARGF.read)
script = "$J#{script}" if script.start_with?(".")
puts JSON.pretty_generate(eval(script))
Now you can do:

curl --silent https://api.vultr.com/v2/regions | ./jr '.regions.find{_1.id == "nrt"}'
Or the more direct parallel:

curl --silent https://api.vultr.com/v2/regions | ./jr '.regions.group_by{_1.id}.nrt'
Since I used ARGF, you either need to quote the script or avoid spaces; you could explicitly use STDIN instead, in which case it's marginally safer to skip the quotes, at the cost of not being able to give a filename as the last argument)
But it does also seem like a place where LLMs are handy. Why learn jq or regex or AWK, if you use them infrequently, when you can just ask an LLM?
Edit: tutorial: https://earthly.dev/blog/jq-select/
As an example: any candidate for replacing jq needs to be either faster or easier. If it's only a faster implementation, why change the query language? If it's only a different query language but not faster, then why not transpile the new query language into one that works with the old engine? Doing both at the same time, without sacrificing completeness/expressiveness in the query language, may warrant the fragmentation of effort/interest, but that's a very high bar, I would think.
Often enough, languages have weird accidental quirks of the implementation that resist fast alternative implementation. There are ways around this (and of course there's nearly always room to improve the original implementation without having to entirely redesign it), but sometimes it really is easier to just implement a different language.
[edit]
Removing "Unifont" from the font-family list fixes the problem, so I must have an issue with my unifont install?
For those who like that style: on Linux, both Xfce and KDE have themes that replicate it for their window decorations (recommending the whole desktop environment would be a bit too much).