I'm surprised no one's made a CEEFAX replica for the terminal yet [0]. Their weather page is pretty iconic [1].
[0] There are CEEFAX Emulators online that pull from the BBC RSS feeds to do this.
[1] https://teletextart.co.uk/wp-content/uploads/2016/05/weather...
And even then one needs modern fonts like Viznut's Unscii or GNU Unifont, which cover the necessary code points (or one of the terminal emulators that algorithmically constructs block and line characters, and has been updated for Unicode 13).
* https://github.com/jdebp/unscii/blob/2.1.1f/src/grids.txt#L4...
* https://github.com/jdebp/unscii/blob/2.1.1f/src/grids.txt#L9...
(And I actually remember it being surprisingly watchable; you could follow what was happening in the game even though you couldn't judge things like players' ball control.)
If it was close to the CEEFAX page then it would be useful as a project. If you included the prompts then it would have educational use for others.
No idea how to pull historical UK weather data to see if it matches :)
But you need to change the font for XTerms. With framebuffers under Linux/BSD you might be able to do the same, but you would need to convert the fonts first and map the chars.
Here you have:
EDIT: this is particularly timely because the UK Met Office has recently announced the retirement of the API I was previously using: https://www.metoffice.gov.uk/services/data/datapoint/datapoi...
The national forecast service (yr.no) is saying it will be sunny and very hot all through the weekend, while wttr reports it will be 16-19 degrees Celsius and rain on Saturday.
I wonder what's special about Norway's meteorologists that they have exceptionally good quality data and ability to build and run a useful public service.
Many locals use DWD (German Weather Service).
A lot of the German sailors use dmi.dk (Danish meteorological institute).
A lot of the Danish sailors use yr.no :)
(Bug report - It shows me a full weather forecast even if it doesn't know where I am!)
[snip]
$ curl wttr.in
Weather report: not found
(then shows pretty forecasts anyway)
[/snip]
Edit: Is there a way to show Fahrenheit instead of Celsius? I don't see it in the options https://wttr.in/:help. OH. "u".

Edit: For some reason, upon trying again, coordinates work; the first time I tried the same URL I kept getting "unknown location".
The very awesome awesome-console-services has more neat tools like this:
https://github.com/chubin/awesome-console-services
My favourite is:
$ nc ticker.bitcointicker.co 10080
.. which is a nice thing to check while waiting for builds ..
And then, there is this wonderful, wonderful thing:
$ curl cheat.sh
Such a great resource when all you've got is a terminal and 15 minutes waiting for those builds ..
Another great one, which I have found very useful for sending myself links across an air gap ..
$ curl qrenco.de/https://news.ycombinator.com/item\?id\=44590971
Okay, one more, because I just can't get enough:
$ curl https://api.lyrics.ovh/v1/depeche-mode/behind-the-wheel
Worth pointing out, maybe, that there is an Emacs package, too; more than one, actually. The one I am using (occasionally, at least) is https://github.com/cjennings/emacs-wttrin which is available from MELPA.
curl wttr.in/London > london.txt
open -a TextEdit london.txt
Witness the control-code garbage. IMHO you should not emit ANSI escape sequences until you have at least called isatty(), and preferably also consulted termcap. But also IMHO we should bury the terminals, and build better REPLs and lightweight GUI toolkits.
How exactly do you propose that wttr.in, which is not actually a process running on your machine (but a remote server), call isatty() on your machine?
Or are you suggesting that curl should check isatty() and strip out the control codes? But that would be overstepping curl's responsibilities, wouldn't it? Its job is to faithfully output the response, garbage or not.
That's exactly my point. You can't do that.
$ curl --head -s wttr.in/London | grep Content-Type
Content-Type: text/plain; charset=utf-8
This is not plaintext, this is ANSI garbage. If you're outputting HTML, you set the content type to text/html, so the client can interpret it. But the lack of an associated content type is not the problem; it's the blind assumptions about the capabilities of the client. But you've got a fair point. So thanks!
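For what it's worth, stripping the garbage on the client side is cheap. A minimal sketch (the `strip_ansi` helper is my own name, not a standard tool) that removes SGR color sequences from stdin:

```shell
# strip_ansi: remove ANSI SGR (color) sequences from stdin.
# The escape byte comes from printf, so plain POSIX sed suffices
# (no GNU-only \x1b extension needed).
strip_ansi() {
    esc=$(printf '\033')
    sed "s/${esc}\[[0-9;]*m//g"
}

# Demo: a red "Rain" comes out as plain text again
printf '\033[31mRain\033[0m\n' | strip_ansi
```

A wrapper could then choose between `curl -s wttr.in/London` and `curl -s wttr.in/London | strip_ansi` based on `[ -t 1 ]`, which is exactly the isatty() check being discussed.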
- curl sees that the standard output is a tty, consults $TERM, termcap, etc
- curl crafts an "Accept:" header, format to be specified
- server sees Accept and responds with appropriately encoded response; e.g. for text/plain it would just output preformatted text
As this is currently NOT a common use case (mostly fun toys; the biggest use case is GitHub printing out a pride flag), the exact content type can be easily iterated on to standardise it.
For example, the most common cases (TERM=xterm or xterm-256color) could be specified in the standard and treated as abbreviations for the complete description of capabilities. The server can have those common cases built-in, but it should also be free to ignore any capabilities it doesn't understand and send out a conservative response. All of these smarts could be a part of a library or framework.
I made this up on the spot, it's not hard, because the entire stack is adequately layered. So just don't break those layers, m'kay?
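As a sketch of what the client half of that negotiation could look like (the `text/x-ansi` media type and its `term` parameter are invented here purely for illustration; no such standard exists):

```shell
# Hypothetical Accept header derived from the terminal type. A server
# that doesn't recognise text/x-ansi would fall back to the plain-text
# alternative with the lower q-value.
accept_header() {
    case "${1:-dumb}" in
        xterm|xterm-256color)
            echo "Accept: text/x-ansi;term=$1, text/plain;q=0.5" ;;
        *)
            echo "Accept: text/plain" ;;
    esac
}

accept_header "$TERM"
# e.g.  curl -H "$(accept_header "$TERM")" wttr.in/London
```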
```
To force plain text, which disables colors:
$ curl 'wttr.in/?T'
```
It's sadly a victim of its own success and is quite often over quota with its weather API. We should make a paid version that wouldn't have this problem and bring some monetary karma to Igor.
No wonder! That works out at about 133-143 requests per user per day. Presumably due to scripts refreshing their data 24/7.
Another solution is just to host it yourself, given the code is open source. No quota worries, and you can always donate to Igor if you feel so inclined (assuming he wants/accepts donations).
Multiple GitHub issues around this have been opened already.
Otherwise pretty neat of course!
$ weather in san francisco, today evening?
about 14C, no rain, cloudy
When I ask for the weather, I want to know exactly what the Met Office says the weather is. Not what an LLM guesses the Met Office might have said, with a non-zero chance of introducing a hallucination.
This habit of inserting LLMs into otherwise deterministic tasks is frustrating. Why take something that can be solved accurately and deterministically (like parsing the Met Office's data) and make it probabilistic, error-prone, and unpredictable with an LLM?
LLMs are appropriate for problems we cannot solve deterministically and accurately. They are not appropriate for problems we can already solve deterministically and accurately.
> $ weather in san francisco, today evening?
To be an example of some free-form written request without any special format. Parsing that input seems like a reasonable job for an LLM, right? Otherwise we will have the typical adventure game problem of “use thumb on button” doesn’t work because it expected “finger,” or whatever.
Nice API though.
In one use case I take 'https://api.met.no/weatherapi/locationforecast/2.0/compact?l...' and push through a jq incantation to format the prognosis for the coming five hours into a neat packaging for terminal viewing, then put that in a watch -n on five minutes. I'm not really interested in the escape sequences and ASCII art.
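For reference, a sketch of that kind of jq incantation, run here against a tiny inline sample shaped like the locationforecast 2.0 "compact" response (real requests to api.met.no also need an identifying User-Agent header, which their terms of service ask for):

```shell
# Inline sample with the same shape as properties.timeseries in the
# compact schema (values made up for the demo)
sample='{"properties":{"timeseries":[
 {"time":"2024-06-01T12:00:00Z","data":{"instant":{"details":{"air_temperature":14.2}}}},
 {"time":"2024-06-01T13:00:00Z","data":{"instant":{"details":{"air_temperature":15.0}}}}]}}'

# First five entries as "HH:MM  <temp> C" lines; the slice .time[11:16]
# picks the clock part out of the ISO-8601 timestamp
echo "$sample" | jq -r '.properties.timeseries[:5][]
  | "\(.time[11:16])  \(.data.instant.details.air_temperature) C"'
```

The real thing swaps the sample for `curl -s` against the locationforecast endpoint, plus something like `-A 'my-weather-script/1.0 you@example.com'` so MET Norway can identify the client.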
Printing arbitrary output to most terminal emulators carries some security risk (even if pretty much everyone does it). Many suffer from vulnerabilities, both past and present, that can allow specially crafted text to inject commands back into the shell. The issue lies in the complex and often legacy standards for handling control characters and escape sequences.
Even xterm is not entirely immune to these problems and has had security advisories issued in the past.
While this attack surface has received attention from sec-researchers in the past, it's not remotely comparable to the scrutiny applied to web browsers. The ecosystem around terminals generally lacks the massive, continuously-funded bug bounty programs and large-scale, constant fuzzing that browsers are subjected to.
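As a tame illustration of how raw output reaches past the text stream: an OSC 0 sequence in a response body will, in most emulators, rewrite the window title rather than print anything. Harmless here, but it is the same in-band channel the historical advisories are about:

```shell
# OSC 0 (ESC ] 0 ; ... BEL) sets the terminal/window title.
# Any server response echoed raw to the terminal can carry this.
printf '\033]0;totally legit title\007'
```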
These comments are getting absurd, and are worryingly coming more and more from new accounts. Are you yourself a bot designed to spam communities and hype coding with LLMs?
Though vibe coding doesn't prevent the human from deciding which weather API to use, so of all the criticisms to make about LLM use, I don't actually agree with the person you replied to, who suggested it has to mean "accepting any random LLM suggestion for a random endpoint".