> URIs as namespace paths allowing access to system resources both locally and on the network without mounting or unmounting anything
This is such an attractive idea, and I'm gonna give it a try just because I want something with this idea to succeed. Seems the project has many other great ideas too, like the modular kernel where implementations can be switched out. Gonna be interesting to see where it goes! Good luck author/team :)
Edit: This part scares me a bit though: "Graphics Stack: compositing in-kernel". Isn't that potentially a huge security hole? Maybe the capability-based security model prevents it from being a big issue; I'm not sure, because I don't think I understand those parts of the design deeply enough.
But seriously, a lot of the design decisions Linux and other Unix-like systems make are horrible, poorly bolted onto a design from the 70s that has aged badly. One of my goals with this project is to highlight that by showing how a system with a more modern design, derived from the metric ton of OS research done since the 70s, can be far better, and to show just how poorly designed and put together the million and one Unix clones actually are, no matter how much lipstick Unix diehards put on that pig.
Read more about it here: https://wiki.minix3.org/doku.php?id=releases:3.2.0:developer...
> In Minix as a microkernel, device drivers are separate programs which send and receive messages to communicate with the other operating system components. Device drivers, like any other program, may contain bugs and could crash at any point in time. The Reincarnation Server will attempt to restart device drivers when it notices they are abruptly killed by the kernel due to a crash, or in our case when they exit(2) unexpectedly. You can see the Reincarnation Server in the process list as rs, if you use the ps(1) command. The Reincarnation Server periodically sends keep-alive messages to each running device driver on the system, to ensure they are still responsive and not, e.g., stuck in an infinite loop.
The point is that when failures do occur, they can be isolated and recovered from without compromising system stability. In a monolithic kernel, a faulty driver can crash the entire system; in a microkernel design, it can be restarted independently, preserving uptime and isolating the fault domain.
Hardware glitches, transient race conditions, and unforeseen edge cases are unavoidable at scale. A microkernel architecture treats these as recoverable events rather than fatal ones.
This is conceptually similar to how the BEAM VM handles supervision in Erlang and Elixir; processes are cheap and disposable, and supervisors ensure that the system as a whole remains consistent even when individual components fail. The same reasoning applies in OS design: minimizing the blast radius of a failure is often more valuable than trying to prevent every possible fault.
In practice, the "driver resurrection" model makes sense in environments where high availability and fault isolation are critical, such as embedded systems, aerospace, and critical infrastructure. It's the same philosophy that systems like seL4 and QNX follow.
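To make the supervision pattern concrete, here's a minimal userspace sketch in Rust. The driver path and polling interval are made up, and the real Reincarnation Server works over kernel IPC rather than by spawning processes; this just shows the restart loop:

```rust
use std::process::Command;
use std::thread::sleep;
use std::time::Duration;

// Restart a (hypothetical) driver binary whenever it dies, loosely
// mirroring what Minix's Reincarnation Server does. A real rs would
// also ping the driver over IPC to catch hangs, not just crashes.
fn supervise(path: &str) -> std::io::Result<()> {
    let mut child = Command::new(path).spawn()?;
    loop {
        sleep(Duration::from_secs(1)); // periodic liveness poll
        if let Some(status) = child.try_wait()? {
            eprintln!("driver exited ({status}); reincarnating");
            child = Command::new(path).spawn()?;
        }
    }
}

fn main() -> std::io::Result<()> {
    supervise("./net-driver") // hypothetical driver binary
}
```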
Do you understand now?
I was literally talking about Microsoft moving the compositor that was inside the kernel in their old Windows 9x kernel architecture to outside the kernel in Windows NT.
That literally every other kernel (OSS and commercial, Unix and not) does this separation suggests this is a generally accepted good security practice.
I'm not aware of any kernel research that alters the fundamental fact that in-kernel compositing is a big security risk surface. And the OS you are proposing isn't even pure Rust: it has C, assembly, and unsafe Rust thrown in, which suggests a non-trivial attack surface that isn't mitigated architecturally. AFAIK capability security won't help here with a monolithic design; you need a microkernel design to separate concerns and blast radii for the capabilities to mean anything, so that an exploit in one piece of the kernel can't be a launching pad for broader exploits. This also ignores that even safe Rust has exploit potential, since there are compiler soundness bugs in generated code. So even if you could write pure safe Rust (which you can't at the OS level), a monolithic kernel would still present issues.
TLDR: claiming that there are decades of OS research to improve on that existing kernels don't take advantage of is fair. Claiming that a monolithic kernel doesn't suffer architectural security challenges, particularly with respect to in-kernel compositing, is a bold statement that would be better supported by explaining how that research solves the security risks. Launching an ad hominem attack against a different kernel family than the one I even mentioned is just a weird defensive reaction.
There's no possible way that data which will only ever be read as raw pixel data, Z-tested, alpha-blended, and then copied to a framebuffer can compromise security or allow any unauthorized code to run at kernel privilege level. It's impossible. These memory regions are never mapped as executable, and we use CPU features to prevent the kernel from ever executing, or even being able to access, pages that are mapped as userspace pages and not explicitly mapped as shared memory with the kernel, i.e. double-mapped into the higher half. So there's literally an MMU preventing in-kernel compositing from even possibly being a security issue.
* If you try to do GPU compositing, things get more complicated. You mention you have no interest in GPU compositing, but that's quite a rare choice.
* A lot of such exploits come from confusing the kernel about which buffer to use as input/output, and then all sorts of mayhem ensues (e.g. giving it an input buffer from a different process so the kernel renders another process's crypto key to the screen, or arranging for it to clobber some kernel buffers).
* Stability: a bug in the compositor panics the entire machine instead of gracefully restarting the compositor.
But ultimately you’re the one claiming you’re the domain expert. You should be explaining to me why other OSes made the choices they did and why they’re no longer relevant.
(You don't have to recompile the kernel if you put all the device drivers in it; just keep the object files around and relink it.)
The plan is to hand out panes, which are just memory buffers to which applications write pixel data as they would to a framebuffer. When the kernel goes to actually refresh the display, it composites any visible panes onto the back buffer and then swaps buffers. There is nothing unsafe about that, any more than any other use of shared memory regions between the kernel and userspace, and those are quite prolific in existing popular OSes.
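For illustration, the composite-and-swap step could look roughly like this on the CPU. This is a sketch of the idea, not the project's actual code; the 0xAARRGGBB pixel format is an assumption, and panes are assumed to fit inside the back buffer:

```rust
// A pane: a shared pixel buffer plus position and stacking order.
// All names here are illustrative.
struct Pane {
    x: usize,
    y: usize,
    width: usize,
    height: usize,
    z: u32,           // stacking order; higher draws later (on top)
    pixels: Vec<u32>, // 0xAARRGGBB, written by the owning application
}

// Composite visible panes back-to-front into the back buffer,
// doing straight alpha blending on the CPU.
fn composite(panes: &mut [Pane], back: &mut [u32], stride: usize) {
    panes.sort_by_key(|p| p.z);
    for pane in panes.iter() {
        for row in 0..pane.height {
            for col in 0..pane.width {
                let src = pane.pixels[row * pane.width + col];
                let dst = &mut back[(pane.y + row) * stride + pane.x + col];
                *dst = blend(src, *dst);
            }
        }
    }
    // ...then swap the back buffer and the front buffer for display.
}

// Per-channel alpha blend: out = src*a + dst*(1-a).
fn blend(src: u32, dst: u32) -> u32 {
    let a = (src >> 24) & 0xff;
    let mix = |s: u32, d: u32| (s * a + d * (255 - a)) / 255;
    let r = mix((src >> 16) & 0xff, (dst >> 16) & 0xff);
    let g = mix((src >> 8) & 0xff, (dst >> 8) & 0xff);
    let b = mix(src & 0xff, dst & 0xff);
    0xff00_0000 | (r << 16) | (g << 8) | b
}

fn main() {
    let mut back = vec![0u32; 640 * 480];
    let mut panes = vec![Pane {
        x: 100, y: 100, width: 2, height: 2, z: 0,
        pixels: vec![0x80ff_0000; 4], // 50% translucent red
    }];
    composite(&mut panes, &mut back, 640);
    println!("{:#010x}", back[100 * 640 + 100]);
}
```

Note that the pane contents are only ever read as integers here; nothing in this path interprets the buffer as code, which is the safety argument being made above.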
If anything, the Unix display server nonsense is overly convoluted and far worse security-wise.
From there each application can draw its own GUI and respond to events that happen in its panes, like a mouse-button-down event while the cursor is at some coordinates, using event capabilities. What any event or the contents of a pane mean to the application doesn't matter to the OS; the application has full control over all of its resources and its execution environment, with the exception of not being allowed to do anything that could harm any other part of the system outside its own process abstraction. That's my rationale for why the display system and input events should work that way.

Plus, keeping all of that in the kernel helps latency, especially since we're doing all the rendering on the CPU and are thus bottlenecked by the CPU's memory bus, which has far lower throughput than a discrete GPU's. But that's the way it has to be, since there are basically no GPUs out there with full publicly available hardware documentation as far as I know, and believe me, I've looked far and wide and asked around. Eventually I'll want to port Mesa, because redoing all the work to develop something that complex and huge just isn't pragmatic.
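To sketch what that event interface might look like (all names and shapes here are hypothetical; the kernel just delivers raw events and the application assigns them meaning):

```rust
// Hypothetical shape of the pane event interface described above.
enum PaneEvent {
    MouseDown { button: u8, x: u32, y: u32 },
    MouseUp { button: u8, x: u32, y: u32 },
    Key { code: u32, pressed: bool },
}

// Opaque handle granted by the kernel for one pane's event stream (assumed).
struct EventCapability;

impl EventCapability {
    fn next_event(&self) -> PaneEvent {
        // A real implementation would block on a kernel queue; stubbed here.
        PaneEvent::MouseDown { button: 1, x: 10, y: 20 }
    }
}

fn main() {
    let cap = EventCapability;
    match cap.next_event() {
        PaneEvent::MouseDown { button, x, y } => {
            // The application alone decides what a click at (x, y) means.
            println!("button {button} down at ({x}, {y})");
        }
        PaneEvent::MouseUp { .. } | PaneEvent::Key { .. } => {}
    }
}
```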
Most of these systems came with utilities to partially automate the process, with some kind of config file to drive it; NetWare 2.x even had TUI menuing apps (ELSGEN, NETGEN) to assist with it.
The sysadmin scripts would even relink just to change the IP address of the NIC! (I no longer remember the details, but I think I eventually dug under the hood and figured out how you could edit a couple of files and merely reboot, without actually relinking a new kernel. But if you only followed the normal directions in the manual, you would use scoadmin and it would relink and reboot.) And this is not because SCO sux. Sure they did, but that was actually more or less normal and not part of why they sucked.
Change anything about which drives are connected to which SCSI hosts on which SCSI IDs? Fuggeddabouddit. Not only relink and reboot, but also pray, and have a bootable floppy and a cheat sheet of boot: parameters ready.
Also, this was the only approach on systems where people advocated statically linking everything; yet another reason why dynamic loading became a thing.
Incremental compilation means you don't have to recompile everything: just compile the new driver as a library, relink the kernel, and you're done. Keep the prior n working kernels around in case the new one doesn't work.
The intro page is currently useless.
You could roughly emulate it on Unix by assuming every filename starting with /scheme/bar/ is a bar-type (special) file, but nothing stops you from creating (and you'd necessarily have) 'files' of any type outside that. In Redox, everything has that scheme prefix describing its type (and if omitted, it's implicitly /scheme/file/).
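A rough illustration of the difference; the paths here are made up and the exact scheme names are assumptions, the point is only that the prefix is explicit on Redox but merely a convention you'd invent on Unix:

```rust
use std::fs::File;

// On Redox, the scheme prefix selects which server handles the open;
// a bare path is implicitly under /scheme/file/. Paths are illustrative.
fn main() -> std::io::Result<()> {
    // Explicit scheme: the file scheme spelled out.
    let _a = File::open("/scheme/file/etc/hostname")?;
    // Implicit scheme: the same resource with the prefix omitted.
    let _b = File::open("/etc/hostname")?;
    // A non-file scheme would route the open to a different server
    // entirely, e.g. something like "/scheme/tcp/..." (hypothetical).
    Ok(())
}
```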
- Multi-user and server-oriented permissions system.
- Incompatible ABIs
- File-based everything; leads to scattered state that gets messy over time.
- Package managers and compiling-from-source instead of distributing runnable applications directly.
- Dependence on CLI, and steep learning curve.
If you're OK with those, cool! I think we should have more options.
ReactOS if you need something to replace Windows.
Implementing support for Docker on these operating systems could give them the life you are looking for.
Did you know the Go language supports Plan 9? You can create a binary from any system using GOOS=plan9, with amd64 and i386 supported. You might need to disable CGO and use libraries that don't have OS-specific code, though. You can even bootstrap Go from it, provided you have the SDK.
Incidentally, 9front is a modern fork of Plan 9.
Docker tries to partially address this, right?
> Dependence on CLI, and steep learning curve.
I think this is partially eased by LLMs.
Docker is a good way of turning a 2 KB shell script into a 400 MB container. It's not a solution.
Flatpak would be a better example.
There are many great ideas in operating systems, programming languages, and other systems that have been developed in the past 30 years, but these ideas need to work with existing infrastructure due to costs, network effects, and other important factors.
What is interesting is how some of these features do get picked up by the mainstream computing ecosystem. Rust is one of the biggest breakthroughs in systems programming in decades, bringing together research in linear types and memory safety in a form that has resonated with a lot of systems programmers who tend to resist typical languages from the PL community. Some ideas from Plan 9, such as 9P, have made their way into contemporary systems. Features that were once the domain of Lisp, such as anonymous functions, have made their way into contemporary programming languages.
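A tiny example of that linear-types lineage in practice: Rust's move semantics mean a non-Copy value can be consumed at most once, which is how whole classes of memory errors get rejected at compile time:

```rust
fn consume(s: String) {
    println!("{s}");
} // `s` is dropped here; its buffer is freed exactly once

fn main() {
    let s = String::from("hello");
    consume(s); // ownership moves into `consume`
    // consume(s); // compile error: use of moved value; no use-after-free
}
```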
I think it would be cool if there were some book or blog that taught “alternate universe computing”: the ideas of research systems during the past few decades that didn’t become dominant but have very important lessons that people working on today’s systems can apply. A lot of what I know about research systems comes from graduate school, working in research environments, and reading sites like Hacker News. It would be cool if this information were more widely disseminated.
This and other dirt is on any YouTube video about the history/demise of alternative computing platforms/OSes.
Your complaint is more pointless than what you're complaining about.
What's that parenthetical mean?
Specifically, "Users may link this kernel with closed-source binary drivers, including static libraries, for personal, internal, or evaluation use without being required to disclose the source code of the proprietary driver.".
I wish there was a social stigma in Open Source/Free Software to doing anything other than just picking a bog standard license.
I mean, we have a social stigma even among OS developers against rolling your own crypto primitives. Even though it's the same very general domain, we know from experience that someone who isn't an active, experienced cryptographer would have close to a zero percent chance of getting it right.
If that's true, then it's even less likely that a programmer is going to make legally competent (or even legally relevant) decisions when writing their own open source compatible license, or modifying an existing license.
I guess technically the "clarification" of a bog standard license is outside of my critique. Even so, their clarification is shoe-horned right there in a parenthetical next to the "License" heading, making me itchy... :)
Many people don't know that, hence the clarification note.
Also to be clear I am not a lawyer and nothing I say constitutes any form of legal advice.
More options (and thus competition) is very healthy.
SerenityOS is written in C++.
I'd love some kind of meta-language that is easy to read, write, and maintain, but fast. C, C++, Rust, etc. are not that easy to read, write, and maintain.
easy to understand and maintain -> the computer does more work for you to "figure things out", in a way that simply can't be optimal under all conditions.
TLDR: what you're asking for isn't really possible without some form of AGI
By that same definition, Rust is pretty easy to maintain. I won't say it's easy to write, though.
Maybe an LLM agent posting crap at random? lol
This could be done at every level: the operating system, the browser, websites...
So if you don't care about the website knowing it's the same person, instead of having multiple user accounts on HN, Reddit, you could log into a single account, then choose from a set of different usernames each with their own post history, karma, etc.
If you want to have different usernames on each website, switch the browser persona.
At the OS level, people could have different "decoy" personas if they're at risk of state/partner spying or wrench-based decryption, and so on.