The post alludes to a quirk that causes people not to get sicker while waiting; but it says they still get hungry, right? So you can't wait 14272 years there for any purpose unless you have 14272 years' worth of food, right?
IIUC, the blogger goes on to patch the game so that you don't get hungry either. But if patching the game is fair play, then what's the point of mentioning the original no-worsening-sickness quirk?
It kinda feels like asking "Can you win Oregon Trail by having Gandalf's eagles fly you to Willamette?" and then patching the game so the answer is "yes." Like, what's the reason I should care about that particular initial question, care so badly that I'd accept cheating as an interesting answer?
Just a bit of fun.
edit: And the answer to "Why THAT river?" is simply that it's the last river in the game, and when I was hoping to complete a run without any modding, I thought it might be possible to play normally, get to the final river, wait 15,000 years, and then try to limp my decrepit deathwagon to the finish line before we all expired. This proved impossible, sadly.
I also was a little confused by the goal, but that clears it up.
From Wikipedia: "The wagons were stopped at The Dalles, Oregon, by the lack of a road around Mount Hood. The wagons had to be disassembled and floated down the treacherous Columbia River and the animals herded over the rough Lolo trail to get by Mt. Hood."
https://en.wikipedia.org/wiki/Oregon_Trail#Great_Migration_o...
[1]: https://old.reddit.com/r/AskHistorians/comments/6ouy10/june_...
>"Oit came all the old clothes we could spare," he wrote later. "Out came the tar buckets, old chisels and broken knives. They stuffed scrap cloth into creaks and crannies in the wagon and tarred over them.
Something that was very common with BASIC interpreters, but still baffles me, is that they ran on machines with extremely limited memory and fairly limited CPU time, yet for some reason didn't make integer types available to programmers. Every number you stored was a massive floating point thing that ate memory like crazy and took forever for the wimpy 8-bit CPU with no FPU to do any work on. It's like they were going out of their way to make BASIC as slow as possible. It probably would have been faster and more memory efficient if all numbers were BCD strings.
Three main types of variables are supported in this version of BASIC: they are integer, real and string.

                      integer          real            string
    example           346              9.847           “HELLO”
    typical variable  A%               A               A$
    names             SIZE%            SIZE            SIZE$
    maximum size      2,147,483,647    1.7×10^38       255 characters
    accuracy          1 digit          9 sig figs      —
    stored in         32 bits          40 bits         ASCII values

A%, A, and A$ are 3 different variables of different types.
Poor Andy :-(
https://tvtropes.org/pmwiki/pmwiki.php/Trivia/TheOregonTrail
Can you expand upon this? All of the research I've done suggests that not only was it possible to use integer math in BASIC on the Apple II, there are versions of BASIC that only support integers.
[0] https://www.10zenmonkeys.com/2007/07/03/steve-wozniak-v-step...
I'm not clear on which Apple II ROMs (INTEGER BASIC or Applesoft ROM, or both) he's referring to.
"All operations were done in floating point. On the GE-225 and GE-235, this produced a precision of about 30 bits (roughly ten digits) with a base-2 exponent range of -256 to +255.[49]"
Speaking of, John G. Kemeny's book "Man and the Computer" is a fantastic read, introducing what computers are, how time sharing works, and the thinking behind the design of BASIC.
The last thing they wanted was someone making their very first app and it behaves like:
Please enter your name: John Doe
Please enter how much money you make every day: 80.95
Congratulations John Doe you made $400 this week!
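If I'm reading the example right, the point is silent truncation: with integer-only math the 80.95 loses its cents before the weekly total is computed. A quick Python illustration of that arithmetic:

    daily = 80.95
    print(daily * 5)       # 404.75 -- what the user expects for a 5-day week
    print(int(daily) * 5)  # 400    -- what integer-default math would report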
This started with $ for strings in Dartmouth BASIC (when it introduced strings; the first edition didn't have them), and then other BASIC implementations gradually added new suffixes. I'm not sure when % and # showed up specifically, but they were already there in Altair BASIC, and thence spread to its descendants, so they were well-established by the 1980s.
That said, it probably has something to do with the earliest 5-bit and 6-bit text encodings, which were very constrained wrt control characters and often originated on punch cards, where fixed-length or length-prefixed (https://en.wikipedia.org/wiki/Hollerith_constant) strings were more common. E.g. DEC SIXBIT didn't even have NUL: https://en.wikipedia.org/wiki/Six-bit_character_code
BASIC was well-intentioned: it was meant to make programming easy enough for ordinary people in non-technical fields, students, people who weren't "programmers", to grasp. To make it easy, you'd better not scare adopters with concepts like int vs float, maximum number size, overflow, etc. The ordinary person's concept of a number fits what computers call a float. You make a good point, though, that BCD strings might have done the trick better as a one-size-fits-all number format, and they might have been faster too.
BASIC also wasn't intended for computationally intensive things like serious number crunching, which back in the day was usually done in assembly anyway. The latency of arithmetic on a few floats (which is what your typical BASIC program deals with) is still basically instantaneous from the user's perspective, even on a 1 MHz 8-bit CPU.
Would have been interesting to see a version of BASIC that encoded numbers as 4-bit BCD strings. Compared to the normal 40-bit floating point format you would save memory in almost every case, and I bet the math would be just as fast or faster than the floating point math in most cases as well. The 4-bit BCD alphabet would be the digits 0-9, plus -, ., E, a terminator, and a couple of open slots if you can think of something useful. Maybe an 'o' prefix for octal and a 'b' for binary?
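For what it's worth, here is a rough Python sketch of that hypothetical encoding, just to show the packing; the nibble assignments (0-9, '-', '.', 'E', a terminator) follow the alphabet proposed above and are not from any real BASIC:

    # Hypothetical 4-bit "BCD string" number encoding: one nibble per symbol,
    # two nibbles per byte, terminated by a sentinel nibble.
    SYMBOLS = {**{str(d): d for d in range(10)}, '-': 10, '.': 11, 'E': 12}
    TERMINATOR = 15  # nibbles 13-14 left open (octal/binary prefixes, perhaps)

    def encode(number_text: str) -> bytes:
        nibbles = [SYMBOLS[ch] for ch in number_text.upper()] + [TERMINATOR]
        if len(nibbles) % 2:                  # pad to a whole byte
            nibbles.append(TERMINATOR)
        return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

    def decode(packed: bytes) -> str:
        lookup = {v: k for k, v in SYMBOLS.items()}
        out = []
        for byte in packed:
            for nib in (byte >> 4, byte & 0x0F):
                if nib == TERMINATOR:
                    return ''.join(out)
                out.append(lookup[nib])
        return ''.join(out)

    print(encode('-3.14E5').hex())    # 'a3b14c5f' -- 4 bytes vs 5 for a 40-bit float
    print(decode(encode('-3.14E5')))  # -3.14E5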
If you're writing business software, you'll need to support decimals for currency-related calculations. Tax and interest rates also require decimal values. So floating point helped a lot.
When the 8-bit microcomputers went mainstream (Apple II, Commodore PET, TRS-80), graphics got more popular; sin(), cos(), and the other trig functions got a lot of use, and their return values aren't normally expressed as integers.
Sure, most would never write a fast arcade-like game in BASIC, but as a code trial playground, turnaround time was relatively quick.
Especially when doing financial calculations you do not want to use floating point but fixed point O_o
With signed 16-bit integers (which Apple Integer BASIC provided), you've got a range of 32767 to -32768 (Wikipedia says Apple Integer BASIC couldn't display -32768). But if you do the naive fixed-point thing with 16-bit ints, you'll have a range of 327.67 to -327.68, assuming two decimal digits.
16-bit integers didn't have enough precision for many of those 1970s/1980s use cases.
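To make the range issue concrete, here's a quick Python illustration of naive fixed-point cents stored in a signed 16-bit value (just a sketch of the arithmetic, not any particular BASIC):

    def to_int16(n: int) -> int:
        # Wrap to signed 16 bits, the way a 16-bit BASIC integer would.
        n &= 0xFFFF
        return n - 0x10000 if n >= 0x8000 else n

    price_cents = 32767                     # $327.67 -- the biggest amount that fits
    print(price_cents / 100)                # 327.67
    print(to_int16(price_cents + 1) / 100)  # -327.68 -- one more cent and it wraps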
yes, floating-point math has problems. but they are well-known problems - those corner cases were well-known back then.
They were there, you had to append % to the variable name to get it (e.g. A% or B%, similar to $ for strings). But integers were not the "default."
BASIC is all about letting you pull stuff out of thin air. No pre-declaring variables needed, or even arrays (simply using an array automatically DIMs it to 10 elements if you don't DIM it yourself first). Integer variables in BASIC were 16-bit signed, so you couldn't go higher than 32767 with them. But if you are going to use your $500 home computer in 1980 as a fancy calculator, just learning about this newfangled computer and programming thing, that's too limiting.
I do remember reading some stuff on the C64 that its BASIC converted everything to floating point anyway when evaluating expressions, so using integer variables was actually slower. This also includes literals. It was actually faster to define a float variable as 0--e.g. N0=0--and use N0 in your code instead of the literal 0.
Floats were 5 bytes in the early 80's Microsoft BASICs, honestly not "massive" unless you did a large array of them. The later IBM BASICs did have a "double precision" float, which was 8 bytes if I remember right.
> It probably would have been faster and more memory efficient if all numbers were BCD strings.
I wouldn't be surprised if Mr. Gates seriously considered that during the making of Microsoft BASIC in the late 70's as it makes it easy for currency calculations to be accurate.
default, a normal variable like N=10, is a signed float that requires 8 bytes
optional, add ! suffix, N!=10, is a signed float that requires 4 bytes
optional, add % suffix, N%=10, is a signed int that requires 2 bytes
And that's all the numbers. There are strings, which use one byte per character, but you have to call a function to convert a single byte of a string to its numerical value.
An unsigned 8-bit int would be very welcome on that and any similar platform. But the best you can get is a signed 16-bit int, and you have to double the length of your variable name all through the source to even get that. Annoying.
So the main benefit was for saving space with an array of integers.
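Back-of-the-envelope on that, taking the sizes listed above at face value (8-byte default floats, 2-byte % integers), a quick Python calculation for a 1,000-element array:

    elements = 1000
    print(elements * 8)  # 8000 bytes of array data as default floats
    print(elements * 2)  # 2000 bytes as % integers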
To be fair, JavaScript suffers from the same laziness :)
When he finally got around to doing it, he discovered two issues. First, Integer BASIC was very difficult to modify, because there was never any source code: he hadn't written it in assembly (at the time he wrote it he didn't yet have an assembler), so he hand-assembled it into machine code as he worked on it. Second, Jobs had talked to Gates (without telling him) and signed a deal to license Microsoft BASIC. Microsoft BASIC already had the desired floating point support, and whatever Integer BASIC features it lacked (primarily graphics) were much easier to add given it had assembly source.
https://en.wikipedia.org/wiki/Integer_BASIC#History
I was thinking about this the other day, I wonder if anyone has ever tried finishing off what Woz never did, and adding the floating point support to Integer BASIC? The whole "lacking source" thing shouldn't be an issue any more, because you can find disassemblies of it with extensive comments added, and I assume they reassemble back to the same code.
Unrelated, but my monitor and my eyeballs hate the moiré patterns developed by the article's background image at 100% zoom - there's a painful flicker effect. Reader mode ruins the syntax highlighting and code formatting. Fortunately, zooming in or out mostly fixes it.
just because the function you're implementing used single-character variables to render an equation in LaTeX doesn't mean you have to do it that way in the code.
a particular peeve was when they make variables for indexed values named `x_i` instead of just having an array `x` and accessing the ith element as `x[i]`
https://www.jsoftware.com/ioj/iojATW.htm
i tried this style for a minute. there are some benefits, and i'll probably continue going for code density in some ways, but way less extreme
there's a tradeoff between how quickly you can ramp up on a project, and how efficiently you can think/communicate once you're loaded up.
(and, in the case of arthur whitney's style, probably some human diversity of skills/abilities. related: i've thought for a while that if i started getting peripheral blindness, i'd probably shorten my variable names; i've heard some blind people describe reading a book like they're reading through a straw)
(page 7)
https://mirrors.apple2.org.za/Apple%20II%20Documentation%20P...
Maybe "PF" is bad in one function but if it's the canonical name across the program, it's not so bad.
(It sounds like there was a justified reason for that here, though -- the variable names are not minimized during compilation to disk.)
Great read on how to actually hack. Takes you through the walls he hits, and how hitting each wall "opens up a new vector of attack".
Applesoft has a BASIC decompiler built in, it's called "break the program and type LIST". Maybe Oregon Trail did something to obscure this? I know there were ways to make that stop working.
"bytecode" and "virtual machine", no, no, no. That's not the path to enlightenment...
in this case, print debugging is your best bet.
This surprised me for some reason. I guess it's been 30-some years, but I remember my adventures in Apple II BASIC not running that quickly; maybe Oregon Trail's graphics were simpler than I remember.
I guess I just assumed any "commercial" Apple II games were written in assembly, but perhaps the action scenes had machine code mixed in with the BASIC code.
So - I'm guessing the game logic of MECC Oregon was in BASIC, with some assembly routines to redraw the screen. BTW, the original Oregon Trail was also 100% BASIC and a PITA to read. You're really getting to the edges of what Applesoft BASIC is practically capable of with games like Akalabeth and Oregon.
This is so completely wrong I question the person's ability to understand what's happening in the emulator.
Also, that LDA instruction reads the 2-byte pointer from the memory location, adds Y, and loads the accumulator from the resulting memory position. IIRC, the address + Y can't cross a page - Y is added to the least significant byte without a carry to the MSB.
> and the program is stored as some sort of bytecode
We call it "tokenized". Most BASIC interpreters did that to save space and to speed up parsing the code (it makes zero sense to store the bytes of P, R, I, N, and T when you can store a single token for "PRINT").
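To illustrate the space saving, here's a toy Python tokenizer in the same spirit (it naively splits on spaces, and the token values are from memory, so don't treat them as the authoritative Applesoft table):

    # Keywords collapse to a single byte; everything else is stored as ASCII.
    TOKENS = {"PRINT": 0xBA, "GOTO": 0xAB, "IF": 0xAD, "THEN": 0xC4}

    def tokenize(line: str) -> bytes:
        out = bytearray()
        for word in line.split(" "):
            if word.upper() in TOKENS:
                out.append(TOKENS[word.upper()])   # 1 byte instead of len(word)
            else:
                out.extend(word.encode("ascii"))
            out.append(ord(" "))
        return bytes(out[:-1])

    print(len('PRINT "HELLO"'))            # 13 bytes as plain text
    print(len(tokenize('PRINT "HELLO"')))  # 9 bytes tokenized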
I'd argue that it's not completely wrong in the context of a BASIC program; other addressing modes exist, but I don't think the BASIC interpreter will use self-modifying code to make LDA absolute work.
> IIRC, the address + Y can't cross a page - Y is added to the least significant byte without a carry to the MSB.
If Wikipedia is accurate, the address + Y can cross a page boundary, but it takes an extra cycle: the processor first reads from the address with only the low byte incremented (no carry into the high byte), then reads the correct address + Y on the next cycle (on a 65C02 the discarded read address is different). But if you JMP ($12FF), it will read the address from 12FF and 1200 on a 6502, and from 12FF and 1300 on a 65C02; that's probably what you're thinking of.
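A toy Python model of that "absolute,Y" behaviour as I understand it (an illustration of the timing rule, not a cycle-accurate emulator):

    def lda_absolute_y(base: int, y: int):
        effective = (base + y) & 0xFFFF
        page_crossed = (base & 0xFF00) != (effective & 0xFF00)
        cycles = 4 + (1 if page_crossed else 0)  # extra cycle to carry into the high byte
        return effective, cycles

    for base, y in [(0x12F0, 0x20), (0x1200, 0x20)]:
        addr, cycles = lda_absolute_y(base, y)
        print(f"${addr:04X} in {cycles} cycles")
    # $1310 in 5 cycles  (page boundary crossed)
    # $1220 in 4 cycles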
On the 6502, you can absolutely access all 64K of memory space with an LDA instruction.
The other weird thing about the 6502 is "page one", which is always the stack, and is limited in size to 256 bytes. The 256 byte limit can put a damper on your plans for doing recursive code, or even just placing lots of data on the stack.
I've done lots of embedded over the years, and the only other processor I've developed on that has something similar to the 6502 "zero page" memory was the Intel 8051, with its "direct" and "indirect" memory access modes for the first 128 bytes of volatile memory (data, idata, bdata, xdata, pdata). What a PITA that can be!
There are two LDA instructions (maybe more, I too am about 40 years rusty). One loads from page zero only, and thus saves time by needing only one byte of address; the other reads two bytes of address and can read from all 64K. In later years you had various bank-switching schemes to handle more than 64K, but the CPU knew nothing about how that worked, so I'll ignore them. Of course your assembler probably just called both LDA and used other clues to select which, but it was a different CPU instruction.
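For reference, the two flavours being described, with their standard encodings (numbers from memory; treat them as illustrative rather than gospel):

    # 6502 LDA, zero-page vs absolute addressing.
    LDA_FORMS = {
        "LDA $44   (zero page)": {"opcode": 0xA5, "bytes": 2, "cycles": 3},  # 1-byte address
        "LDA $4400 (absolute)":  {"opcode": 0xAD, "bytes": 3, "cycles": 4},  # 2-byte address
    }
    for form, info in LDA_FORMS.items():
        print(form, info)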
The 6502 was a really sweet little processor.
I like to say it made me feel I had 256 general-purpose registers to play with.
What a strange and thought-provoking statement.
― Robert A. Heinlein
But this is the part I got lost on…
> Most of the game is handled by the main program "OREGON TRAIL" (written by John Krenz), which loads in modules like "RIVER.LIB" and keeps access to all the variables.
Applesoft BASIC didn't have the concept of libraries. Were they just POKEing values into memory, loading the next Applesoft BASIC program (calling them LIBs), and having the next one PEEK to retrieve the values?
I remember a TI home computer, on a counter in a store. I typed FOR I = 1 TO 10000; NEXT I and RUN.
It never finished. I changed it to 1000; it still didn't finish. So I changed it to FOR I = 1 TO 10 with PRINT I, and it printed 1 and a delay and 2 and a delay and so on. Took ten seconds to finish.
You'd have to load up on ammo like it's a war wagon. And it's likely you'd blow your leg off at some point, yet still somehow die of dysentery.
You could pass the 14,000+ years by hunting... and trading... at some point, all you needed to do was trade...
I had not played the CDC Cyber mainframe version, nor the Apple II version; I started with the Macintosh version. Having passed Econ with flying colors, I set my sights on 'cooking' the game, but the same 'cook' was available on the Apple ][ and the PC.
I would suspect that in 10,000 years... the oxen, over 500 generations, would have been bred for extreme longevity, and the humans too.
Finally, the source code was published in the May 1978 Creative Computing, page 137. So.... hack the source. (CDC Cyber BASIC, I believe...) for which "On CDC Cyber Basic, the size of an integer is typically 16 bits, meaning it can represent whole numbers ranging from -32,768 to 32,767."
XKCD says that's what happened in real life.[1] And that was for the people who made it to Oregon.
Damn, that made me feel really old.
EDIT: I played the 1985 version, I didn't know there was a text adventure.