A later part of the article says the opposite: that the original implementation had "No ability to save progress" and that this is new in the C++ implementation.
I can't help but wonder (partly because of other quirks in the language) if the author ran the article through an AI to 'tidy up' before posting... because I've often found ChatGPT etc. to introduce changes in meaning like this, rather than just rewriting. This is not to dismiss either the article or the power of LLMs, just a curious observation :)
It seemed like a neat trick at the time. There was also a crude CRUD database that worked the same way, retaining up to 50 names and phone numbers.
Actually that disk had a lot of really cool programs, now that I think about it. A biorhythm plotter, Woz's "Little Brick Out" Breakout clone, and a few other demos by luminaries like Bruce Tognazzini and Bob Bishop. And of course, ELIZA, the mother of all LLMs... only it was called FREUD for some reason.
Another portion of the article says more explicitly:
Limitations:
Maximum number of nodes: 200.
The structure is stored only in memory (no disk saving).
If I recall, there is a way to significantly reduce the number of matchboxes needed by taking advantage of mirrored board conditions. Somewhere I have a Perl script that generated HTML for all the matchbox covers and possible next states.
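For anyone curious, here's a minimal sketch of the symmetry trick in Python (my own illustration, not the Perl script mentioned above): every board has up to eight images under rotation and mirroring, and you keep one matchbox per equivalence class by filing each position under a canonical representative.

    # Collapse mirrored/rotated tic-tac-toe positions so equivalent
    # boards share one matchbox. A board is a 9-char string, row-major.
    ROTATE_90 = [6, 3, 0, 7, 4, 1, 8, 5, 2]  # new index -> old index, 90 deg clockwise
    MIRROR    = [2, 1, 0, 5, 4, 3, 8, 7, 6]  # flip left-right

    def apply(board, perm):
        return ''.join(board[i] for i in perm)

    def symmetries(board):
        """Yield all 8 images of the board under rotation and reflection."""
        b = board
        for _ in range(4):
            yield b
            yield apply(b, MIRROR)
            b = apply(b, ROTATE_90)

    def canonical(board):
        """One representative per equivalence class: the lexicographic minimum."""
        return min(symmetries(board))

    # Two mirrored openings map to the same matchbox label:
    assert canonical('X........') == canonical('..X......')

For the first move alone, this collapses the nine possible replies into three classes (corner, edge, centre).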
http://rodneybrooks.com/forai-machine-learning-explained/
(Full disclosure: Donald Michie, inventor of MENACE and RL, was my grand-advisor :beams with pride:).
In practice, many of us in the 'Nouvelle AI' movement that countered the ruling Symbolic AI paradigm (GOFAI was the common slur) had at least one foot in the ML, EA, and ALife spaces, since once you abandon symbolic representations, straight programming becomes awkward fast.
And what happens to those who don't remember the past? Ask George Santayana.
Looking it up, apparently it was a type-in in the programming book that came with the computer.
https://spectrumcomputing.co.uk/entry/15157/ZX-Spectrum/Pang...
That's the book my (blind) dad got me to read to him which taught me to program!
I can only assume the rest of the article is also AI-generated, with a similar attention to detail. Tab closed without reading.
The other major issue is that it isn't machine learning—it's a classic example of an expert system, even if there is a bit of a gray area around whether a binary decision tree qualifies as ML.
The worst part, of course, is that it takes less time to slap "GUESS THE ANIMAL" on a stock image of a glass terminal than it does to prompt a diffusion model to generate the same thing with passable fidelity... and it still wouldn't be an accurate depiction of what the program actually did.
Er. To my understanding of course.
It would be a prank: an LLM writing about AI, badly.
But the Medium account behind it has only articles like this.
Here's another example of doing the same thing in Prolog, from Markus Triska's website, which also identifies animals by asking the user. It's titled "Expert Systems in Prolog":
https://www.metalevel.at/prolog/expertsystems
And here's another one that identifies birds, on the Amzi Prolog website:
https://amzi.com/ExpertSystemsInProlog/02usingprolog.php
_____________
[1] Reasoning. Induction and deduction are forms of reasoning. You know, the thing all the LLM folks don't know how to define? We've been able to do it since before the '80s, and decision trees are just one way to do it. Learning a decision tree, e.g. with a decision tree learner like ID3 or C4.5, is an inductive task. Using a (learned or hand-coded) decision tree to make a decision is a deductive one.
But hey, we don't have a definition of reasoning. Oooh nooo.
The reason is that the way decision trees, well, make decisions, is the way that expert systems of the '80s made decisions. The first expert system was called DENDRAL (capitals mandatory), derived from the Greek word for "tree", "dendron" (also see: "dendrite"; little tree). DENDRAL and similar systems implement deductive inference procedures that traverse a tree (either bottom-up or top-down, with backward or forward chaining). In traditional expert systems the edges in the tree are production rules of the form IF-THEN-ELSE and the nodes are intermediary goals of what is essentially a proof of a propositional logic formula (the decision tree; it's a boolean formula).
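To make the deductive half concrete, here's a minimal sketch in Python, with invented questions and plain nested tuples standing in for a real production-rule engine; the walk from root to leaf is the "proof" of the conclusion.

    # Inner nodes are yes/no questions (the IF parts of the rules),
    # leaves are conclusions. Questions and animals invented for illustration.
    tree = ('Does it live in water?',
            ('Does it have fins?', 'fish', 'frog'),  # yes branch
            ('Can it fly?', 'bird', 'cat'))          # no branch

    def decide(node):
        if isinstance(node, str):  # leaf: the conclusion of the proof
            return node
        question, yes_branch, no_branch = node
        answer = input(question + ' (y/n) ').strip().lower()
        return decide(yes_branch if answer.startswith('y') else no_branch)

    print('I think it is a', decide(tree))

Each root-to-leaf path is one IF-THEN rule; the tree as a whole is the boolean formula being evaluated.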
Expert systems' Achilles heel was the so-called "knowledge acquisition bottleneck", which refers, in short, to the difficulty of constructing and maintaining the huge rule base needed for an expert system that can do something more interesting than play 20 questions like the animal identification system above. Because of this difficulty, expert systems never quite fulfilled their promise, and they suffered from the brittleness that is one of the first things anyone hears about expert systems today.
And what did people come up with in order to overcome the knowledge acquisition bottleneck? Why, machine learning, of course. The early days of machine learning as a field (remember, we're back in the '80s, a bit before the AI winter hit) were exactly about systems that tried to learn the IF-THEN-ELSE rules for expert systems' rule bases.
Decision tree learners like ID3 and C4.5 by J. Ross Quinlan, and similar algorithms that remained wildly popular even well after the ImageNet moment of deep neural nets, come from that era. Instead of learning only the rules, though, decision tree learners learn an entire decision tree; in short, an entire expert system, from top to bottom.
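And for the inductive half, a minimal ID3-style sketch (my own illustration, not Quinlan's code): greedily split on the attribute with the highest information gain, inducing the whole tree, i.e. the whole rule base, from labelled examples.

    from collections import Counter
    from math import log2

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

    def gain(rows, a):
        """Information gain of splitting the examples on attribute a."""
        labels = [label for _, label in rows]
        remainder = 0.0
        for v in {feats[a] for feats, _ in rows}:
            subset = [label for feats, label in rows if feats[a] == v]
            remainder += len(subset) / len(rows) * entropy(subset)
        return entropy(labels) - remainder

    def id3(rows, attrs):
        labels = [label for _, label in rows]
        if len(set(labels)) == 1 or not attrs:  # pure node, or nothing left to split on
            return Counter(labels).most_common(1)[0][0]
        a = max(attrs, key=lambda b: gain(rows, b))
        children = {v: id3([(f, l) for f, l in rows if f[a] == v],
                           [b for b in attrs if b != a])
                    for v in {feats[a] for feats, _ in rows}}
        return (a, children)  # an induced inner node, i.e. a learned rule

    # Toy training data: (features, animal)
    rows = [({'water': 'yes', 'fins': 'yes'}, 'fish'),
            ({'water': 'yes', 'fins': 'no'},  'frog'),
            ({'water': 'no',  'fins': 'no'},  'cat')]
    print(id3(rows, ['water', 'fins']))
    # e.g. ('water', {'yes': ('fins', {'yes': 'fish', 'no': 'frog'}), 'no': 'cat'})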
Rebalancing is not possible, not because of code restrictions, but because of the nature of the game. How could the program know what new labels to invent for the inner nodes of the rebalanced tree? Unlike integers, animals are not totally ordered.
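A minimal sketch of the game's learning step makes the point concrete (prompts invented for illustration): the only source of a new inner-node label is the user typing one in when the program guesses wrong.

    class Node:
        def __init__(self, text, yes=None, no=None):
            self.text, self.yes, self.no = text, yes, no  # a leaf has yes == no == None

    def learn(leaf, wrong_guess):
        """Replace a wrong leaf with a question node supplied by the user."""
        animal = input('I give up. What was it? ')
        question = input('Give me a yes/no question that distinguishes a '
                         + animal + ' from a ' + wrong_guess + ': ')
        yes_is_new = input('And for a ' + animal
                           + ', is the answer yes or no? ').startswith('y')
        # The program never invents this label itself, so there is nothing
        # it could write on the inner nodes of a rebalanced tree.
        leaf.yes = Node(animal if yes_is_new else wrong_guess)
        leaf.no = Node(wrong_guess if yes_is_new else animal)
        leaf.text = question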
But I did find this BASIC version for Atari computers, from Antic magazine in April 1986. The article specifies:
"It's vital to save your computer's knowledge base to disk or tape - this knowledge is what makes the game fun. To save, type [CONTROL] [S] and you'll be prompted for a device to save the file to, and a filename. Cassette users should simply type C: at the prompt. Disk drive owners type D: and then the filename. To LOAD a previously saved knowledge base, press [CONTROL] [L] and respond to the prompt as explained just above."
The .BAS file at the end doesn't seem to be a plain-text listing; it's probably a saved Atari file. But viewing it in a hex editor, I see strings in the binary file like "Shall I SAVE this data?"
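For what it's worth, persisting such a knowledge base is only a few lines in a modern language. A minimal sketch (mine, not the Atari listing), assuming the nested-tuple tree used in the sketches further up:

    import json

    def save(tree, path):
        with open(path, 'w') as f:
            json.dump(tree, f)  # the modern analogue of CONTROL-S

    def load(path):
        def fix(node):  # JSON reads tuples back as lists; restore the shape
            return node if isinstance(node, str) else tuple(fix(c) for c in node)
        with open(path) as f:
            return fix(json.load(f))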
Consider the book where this Animal program appeared: David H. Ahl was aiming at a wide audience. The content was previously published in Creative Computing Magazine.
It was up to the reader to adapt the BASIC programs to work on their machine.
The robot illustrations give that book its unforgettable charm. It's on the net.
I think the ZX Spectrum was what I saw most often, and there would be a couple of things that just didn’t work in CP/M BASIC, GW-BASIC, or whatever I was using at the time - I imagine file manipulation was avoided for that reason as well.
Speaking as someone who was there at the time: no, it wasn't. Opening a file, reading/writing data to it, and closing it was a straightforward set of operations on the C64 I had. The I/O was slower, of course.