Wirth's Pascal-P compiler of 1974(?) used the same idea, also in aid of a highly portable compiler. I have never been able to find out whether this was an independent invention, or whether Wirth was influenced by Richards's work.
Of course, the JVM and CLR are descendants of this, but they build a very complex structure on the basic idea. Writing an implementation of one of these virtual machines is not for the faint of heart.
So I think Bedrock can be very useful as a compiler target, if nothing else. However, I must agree with some of the other commenters that the 64KiB address space makes it very much a niche tool. Come up with a 32-bit variant that's not much more complicated, and I think you have a winner.
https://benbridle.com/projects/bedrock/user-manual/memory-de...
1. https://academic.oup.com/comjnl/article-abstract/15/2/117/35...
2. https://academic.oup.com/comjnl/article-abstract/15/3/195/48...
3. https://www.microsoft.com/en-us/research/publication/an-open...
That said, I don't see a completely independent redefinition of IEEE 754 in the 226-page https://webassembly.github.io/spec/core/_download/WebAssembl.... In §4.3.3 it does restrict IEEE 754, for example requiring a particular rounding mode, and it defines NaN propagation details that the IEEE 754 spec leaves open-ended IIRC, and it does define some things redundantly to IEEE 754, such as addition and square roots and so on. But it doesn't, for example, describe the binary representation of floating-point numbers at all, even though they can be stored in linear memory and in modules (it just refers you to the IEEE spec), nor permit decimal floating point. §4.3.3 only runs from p. 74 to p. 87, so it would be hard for it to independently define all of IEEE 754.
> Programs written for Bedrock can run on any computer system, so long as a Bedrock emulator has been implemented for that system.
Isn't that true of any program? As long as the language that the program is written in is implemented on the system, any (valid?) program in that language will run on that system?
In practice, this is not always so straightforward, especially as you move closer to machine-level details or consider compiled binaries.
Many compiled programs are built for a specific architecture (x86, ARM, etc.). They won't run on a different architecture unless you provide either a cross-compiler (to generate new native code for that architecture) or an emulator (which mimics the old architecture on the new one).
The purpose of Bedrock was to make a system that is easy to implement on as many computer systems as possible. I've got plans to make a working system from a 64KB RAM chip and a $2 PIC12F1572 8-bit microcontroller (2K memory, 6mW power, 8 pins), just to see how far down I can take it.
how does that work?
You set up a palette of 16 colours, then write 0-15 to the coordinates where you want to set a pixel. You can also choose between an overlapping foreground and background layer (colour 0 on the foreground layer is transparent).
I guess it's no more weird than some hardware designs from the 80's...
4 bits : color A
4 bits : color B
8 bits : select A or B for the first 8 pixels of the cell
The last pixel is always color A. You can independently change all pixels in the cell because changing the last pixel on its own can be done by swapping A and B and inverting the second byte. In hindsight I don't think there was much advantage to the last bit being the odd one out. The code for setting individual pixels in a cell was pretty custom anyway. If I were to do it again, I'd place the color A pixel in the center.
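For illustration, here's a rough C sketch of that encoding as I understand it; the struct, the function names, and the bit order of the select byte are my own assumptions, not part of the original design.

```c
#include <stdint.h>

/* Hypothetical layout of one 9-pixel (3x3) cell, as described above:
 * colours: high nibble = color A, low nibble = color B
 * select:  one bit per pixel for the first 8 pixels (0 = A, 1 = B);
 *          the last pixel is always color A. */
typedef struct { uint8_t colours, select; } Cell;

/* Decode a cell into 9 palette indices. */
void cell_decode(Cell c, uint8_t out[9]) {
    uint8_t a = c.colours >> 4, b = c.colours & 0x0F;
    for (int i = 0; i < 8; i++)
        out[i] = ((c.select >> i) & 1) ? b : a;
    out[8] = a;                      /* last pixel is always color A */
}

/* Change only the last pixel: swap A and B, then invert the select bits.
 * The first 8 pixels keep their colors; the last becomes the old color B. */
void cell_flip_last(Cell *c) {
    c->colours = (uint8_t)((c->colours << 4) | (c->colours >> 4));
    c->select  = (uint8_t)~c->select;
}
```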
And I do find myself working on a memory constrained device again, so perhaps I'll be giving it a go.
So each pixel has a colour on the foreground layer and a colour on the background layer, and will be drawn as one or the other. Normally the foreground colour of the pixel will be the colour used, but if the foreground colour is palette colour 0 (treated as transparent), the background colour will be used instead.
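As a tiny illustration (not Bedrock's actual internals, and the names here are mine), the layering rule boils down to:

```c
#include <stdint.h>

/* Rough sketch of the compositing rule described above, assuming each layer
 * holds one 4-bit palette index per pixel. Colour 0 on the foreground layer
 * is treated as transparent, letting the background colour show through. */
static inline uint8_t resolve_pixel(uint8_t fg, uint8_t bg) {
    return (fg != 0) ? fg : bg;
}
```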
Couldn't have done it without you
But where is the source code?
For the meantime though, I uploaded the source code for each of the snake [1], keyboard [2], and system information [3] programs for you or anyone else here to have a look at. Each one is a single source code file with library macros and functions baked in, so you can run `br asm snake-full.brc | br -z` to assemble and run them.
[0] https://benbridle.com/projects/bedrock/example-microwave-clo...
[1] https://benbridle.com/share/snake-full.brc
The source for the examples and assembler/emulator is also there; follow the links.
I mean, if you take a look at this page: https://benbridle.com/projects/bedrock/bedrock-pc.html
"To assemble a source code file program.brc and save the result as the program program.br, run the command..."
Where are the brc files?
I had thought it could have a use in producing tiny visual apps. I am still somewhat bitter from when I found a volume control that used 3MB on a machine with 256MB total.
It seems you can change the shape of the display, which I like, although I don't really understand the documentation text:
>Writing to this port group will perform an atomic write, requesting on commit that the width of the screen be locked and changed to the value written.
Locked and changed?
You also seem to be using double to refer to two bytes; is that correct? If so, I would recommend something that won't confuse people so much. Word is a common nomenclature for a 16-bit value, although it does share the space with the concept of machine words.
And of course to use it for a lot of things it would have to be able to talk to the outside world. A simplified version of what Deno does for allowing such capabilities could allow that. In Bedrock terms, it would be easiest to have an individual device for each permission that you wanted to supply and have the host environment optionally provide them. I'd put the remote bytestream into its own device to enable it that way.
That could do with some better wording. Normally the user can freely drag-resize the window, but once the program sets the width or height, the user will be unable to resize that axis. This is for, say, a list program where the screen contents would have a fixed width but a dynamic height, so you'd want to keep the height resizable (unlocked).
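As a hedged host-side sketch of that behaviour (all names here are mine, not from the spec):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical host-side screen state: once the program commits a width or
 * height, that axis stops tracking user drag-resizes. */
typedef struct {
    uint16_t width, height;
    bool width_locked, height_locked;
} Screen;

void screen_commit_width(Screen *s, uint16_t w) {
    s->width = w;
    s->width_locked = true;                 /* this axis is now fixed */
}

void screen_user_resize(Screen *s, uint16_t w, uint16_t h) {
    if (!s->width_locked)  s->width  = w;   /* unlocked axes follow the user */
    if (!s->height_locked) s->height = h;
}
```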
> You also seem to be using double to refer to two bytes
Double does mean a 16-bit value, yeah, there's a list of definitions on the main page of the user manual and specification. Short tends to be the expected name for a 16-bit value (from C et al.), but it doesn't make much sense for a short to be the wider of two values. I briefly considered word, but the definition is too broad, with a byte also being a type of word. Double felt like the most intuitive name, because it's double the width of a byte. There weren't really any other decent candidates.
> a individual device for each permission that you wanted to supply and have the host environment optionally provide them
That's more or less the plan, only with a bit more granularity depending on the implementation, so that you can, say, allow reading files but forbid writing or deleting.
But I can see why, as every interpreted language can be a "fantasy console" in itself.
https://github.com/antirez/load81
I've fantasised about turning LOAD81 into a much more full-featured development/execution environment for years, and have done a fair bit of work on extending it to support other things such as joystick devices, an internal sound synthesizer based on sfxr, and so on .. one of these days I'll get back to it ..
One of the big differences from Uxn is the introduction of undefined behavior; by design, you can break it, unlike Stanislav's legos. So presumably Bedrock programs, like C programs, will do different things on different implementations of the system. That's not fatal to portability, obviously, just extra write-once-debug-everywhere work.
In particular, you can be sure that if people build implementations that don't detect those situations, people who test their code on those implementations will ship code that triggers them. It's pretty easy to underflow a stack at startup without noticing, so long as you don't use whatever you popped off of it for anything important, unless the thing you happen to be overwriting is important and/or gets written to later by another means. Limited stack overflow is less common but does occasionally happen.
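To make the hazard concrete, here are two plausible (entirely hypothetical) ways an implementation might handle a pop from an empty 256-byte stack; a program tested only against the first can ship while silently relying on the wrap-around:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 256-byte circular stack; nothing here is from the Bedrock
 * spec, it just shows how two implementations could diverge on underflow. */
typedef struct { uint8_t data[256]; uint8_t sp; } Stack;

/* Implementation 1: no check, the 8-bit pointer silently wraps around. */
uint8_t pop_wrapping(Stack *s) {
    return s->data[--s->sp];             /* sp == 0 wraps to 255 */
}

/* Implementation 2: underflow is detected and reported. */
uint8_t pop_checked(Stack *s) {
    if (s->sp == 0) {
        fprintf(stderr, "stack underflow\n");
        return 0;                        /* or halt the program */
    }
    return s->data[--s->sp];
}
```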
What typically happens in situations like this is that new implementations have to copy the particular handling of supposedly undefined behavior that the most popular implementation of the platform happened to have. But because it isn't documented, or maybe even intentional, they have to reverse engineer it. It's often much more complex than anything that anyone would have come up with on purpose. This all strikes me as a massive waste of time, but maybe it's an acceptable tradeoff for squeezing that last 30% of performance out of your hardware, a consideration that isn't relevant here.
In the days before Valgrind we would often find new array bounds errors when we ported a C or C++ program to a new platform, and Tony Hoare tells us this sort of thing has been ubiquitous since the 60s. It's hard to avoid. Endianness used to be a pitfall, too, and Valgrind can't detect that, but then all the big-endian architectures died out. The C standards documents and good books were always clear on how to avoid the problems, but often people didn't understand, so they just did whatever seemed to work.
If you want write-once-run-anywhere instead of write-once-debug-everywhere you have to get rid of undefined behavior. That's why Uxn doesn't have any.
I hope you stick with this!
I've got plans for tooling in the future that will make Bedrock more accessible to people who are learning to program, like a high-level language that runs on Bedrock and a graphical debugger for visually clicking around and changing the internal state as your program runs.
Can you say more? I really love this idea but can’t think of any practical use case with 65k of memory. What programs are you now more easily maintaining with Bedrock? To what end?
It's true that 64KB is pretty small in modern terms, but it feels massive when you're writing programs for Bedrock, and the interfaces exposed by Bedrock for accessing files and drawing to the screen and the like make for very compact programs.
Presumably Java would also be pretty tiny if we wrote it in bytecode instead of higher-level Java.
Which means implementations also have to be correspondingly complicated. You have to handle quite a few different primitive data types each with their own opcodes, class hierarchies, method resolution (including overloading), a "constant pool" per class, garbage collection, exception handling, ...
I would expect a minimal JVM that can actually run real code generated by a Java compiler to require at least 10x as much code as a minimal Bedrock VM, and probably closer to 100x.
There was a video I saw a couple of years back that was showcasing a cellular programming model, where each cell in a two dimensional grid performed an operation on values received from its neighbours. Values would move into one side of a cell and out the other every tick, something like Orca (by 100 rabbits), so the whole thing could be parallelised on the cell level very easily.
Then build all the old-school IO APIs and the rendering engine around it, similar to PICO-8 or Bedrock.
The UI is a bit similar to Shadertoy, I guess.
In what sense does a virtual machine instruction set architecture with no hardware implementation have a "data path" separate from its arithmetic size? You seem to be using the term in a nonstandard way, which is fine, but I cannot guess what it is.
By your other criteria, the (uncontroversially "16-bit") 8088 would be an 8-bit computer, except that it had a 20-bit address space.
For example, the spec says, "Reading a double from program memory will read the high byte of the double from the given address and the low byte from the following address," but I'd think that generally you'd want the implementation to read the whole 16-bit word at once and then byte-swap it if necessary, because that would usually be faster. There's no way for the program to tell if it's doing that, unless reading from the first byte has a side effect that changes the contents of the second byte or otherwise depends on whether you were reading the second byte at the same time.
(Of course if you have a "double" that crosses the boundaries of your memory subsystem's word, you have to fall back to two successive word reads, but that happens transparently on amd64 CPUs.)
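A minimal sketch of that approach, assuming a flat 64KB memory array (the function name is mine, not from the spec):

```c
#include <stdint.h>
#include <string.h>

/* Load the whole 16-bit word with a single access, then byte-swap on a
 * little-endian host so the high byte comes from the lower address, as the
 * spec's wording requires. Wrap-around at address 0xFFFF is ignored here. */
static uint16_t read_double(const uint8_t *mem, uint16_t addr) {
    uint16_t raw;
    memcpy(&raw, mem + addr, sizeof raw);          /* one 16-bit load */
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    raw = (uint16_t)((raw << 8) | (raw >> 8));     /* swap byte order */
#endif
    return raw;   /* same result as (mem[addr] << 8) | mem[addr + 1] */
}
```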