I liked the bug report "R4: BeOS missing megalomaniacal figurehead to harness and focus developer rage" (:
http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral...
(I personally still don't really like Apple systems and don't use them, but I think it's clear that by the standards of the time OS X squared this circle.)
I don't know that it's really such a binary either/or decision, more of a spectrum of "more cathedral-y" and "more bazaar-y."
Now we only have Windows and Unix-likes.
I still wonder what would have happened had Jean-Louis Gassée been less greedy and Apple had acquired BeOS instead of NeXT.
I discovered BeOS in 2000, and at that moment it seemed much more interesting to me than either Windows or Linux. Not only did it look and feel better, but it also introduced other ideas and concepts.
I had hopes its adoption would increase, but it soon withered and died.
I still wonder why we can't do better than Unix and Windows. Unix is 50 years old and Windows is old, too. There should be better concepts out there waiting to be discovered and implemented.
At some point there were many companies, universities, groups and individuals involved in researching and implementing operating systems.
At that point I was following the OSNews website daily, and each day there was news about some new and exciting development.
Not anymore.
I miss the days when I read about BeOS, Syllable, AROS, MorphOS, AtheOS, SkyOS, Plan 9, Inferno, Singularity. And there were a ton of interesting kernels, too.
There are:
https://www.raspberrypi.com/news/risc-os-for-raspberry-pi/
https://wiki.sugarlabs.org/go/Installation
https://9p.io/wiki/plan9/Raspberry_Pi/index.html
as well as a couple of RTOS options: https://all3dp.com/2/rtos-raspberry-pi-real-time-os/ and QNX: https://pidora.ca/qnx-on-raspberry-pi-the-professional-grade...
and TRON: https://www.tron.org/blog/2023/09/post-1767/ (but that's only on the Pico? I'd love to see it as a desktop OS on an rPi5)
Maintained daily. The pi image is 32-bit and the pi3 image is 64-bit. WiFi works.
Plan 9. It's a Unix built on top of a network. Each process can be thought of as a container, as each gets a namespace which consists of a table of mounts and binds of resources. Those resources are served by an architecture-neutral RPC protocol which serves a tree of objects we call files: 9P. The server of these resources is called a file server and is akin to a micro-service that runs on bare metal on the CPU; the kernel is the container host. These resources are protected by Unix permissions verified by the kernel against a security service called factotum, itself a 9P server. Cloud-ready, bare-metal micro-service computing started in the late '80s and nobody noticed.
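The per-process namespace idea is easier to grok with a toy model. Here's a rough conceptual sketch in C++ (not Plan 9 code and not its API, just an illustration of the description above, with all names invented): each process carries its own mount table, bind appends another resource server behind a path, and two processes can therefore see completely different trees at the same name.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Conceptual sketch only: a per-process namespace modeled as a table that
// maps a mount point to an ordered list of resource servers (union mounts).
// Real Plan 9 does this in the kernel and serves everything over 9P.
struct ProcessNamespace {
    std::map<std::string, std::vector<std::string>> mounts;

    // bind(src, mountPoint): make "src" visible at "mountPoint",
    // stacked after whatever is already there (in the spirit of MAFTER).
    void bind(const std::string& src, const std::string& mountPoint) {
        mounts[mountPoint].push_back(src);
    }

    void show(const std::string& mountPoint) const {
        auto it = mounts.find(mountPoint);
        if (it == mounts.end()) {
            std::cout << mountPoint << ": (empty)\n";
            return;
        }
        std::cout << mountPoint << " is served by:";
        for (const auto& s : it->second) std::cout << " [" << s << "]";
        std::cout << "\n";
    }
};

int main() {
    // Two processes, two namespaces: same path, different resources behind it.
    ProcessNamespace a, b;
    a.bind("local kernel devices", "/dev");
    b.bind("local kernel devices", "/dev");
    b.bind("9P connection to another machine's /dev", "/dev");  // union mount

    a.show("/dev");  // process A sees only its own devices
    b.show("/dev");  // process B sees a union: local plus remote
}
```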
> I miss the days ... Plan 9, Inferno, ...
Still alive and kicking. Head over to 9front.org for a maintained fork of each. Patches arrive nearly daily to 9front and the community is encouraged to contribute. 9front also runs my home network because setting up DHCP, DNS and TFTP for PXE is dumb simple.
With the proliferation of web apps and containers, alternative operating systems are actually more feasible today. At the same time, I'm dependent on all the niceties that the macOS/iOS ecosystem offers (the integration, the sync). Something I wanted to look into is just running macOS in a very default way and then using a fast OS (such as Haiku, though their Arm64 support is not very good yet) in a fullscreen VM. With modern Apple Silicon, there's almost no performance penalty.
Trad Unix, redone in C++; not self-hosting; project lead has quit to work on a browser.
Interesting but mainly for extreme smallness. Forth will alienate a lot of people. For me, Oberon would have been a more interesting basis.
Hostile fork of Menuet OS; based on 25YO code.
I mean, yes, it is good there's interesting stuff, but these are not inspiring examples IMHO.
I suggest you get the RISC OS Direct 'distro' and then the update which has wireless support. Read the documentation first, of course.
RISC OS is very similar to Windows in usage (more like an Amiga); once you read the docs you will have it set up in no time. You have the old NetSurf browser on its native platform; it's fun.
As for Plan 9, 9front has superseded it.
Nothing different would have happened. You would have used C++ instead of Objective-C to write macOS UI programs, but other than that, macOS would be in the same shitty state it is in today (assuming a wildly popular iPhone would also have happened in that alternative timeline and taken the focus away from desktop UI development).
One thing would probably be different: macOS might be less popular amongst developers because BeOS wasn't a UNIX clone (but 'Linux on the desktop' might actually be in a better state).
Internal efforts to get BeOS's security levels up to even Mac OS 9 levels would have bankrupted 1997 Apple.
Even in the workstation space, the cool Lisp/Smalltalk/... developers hated Unix, but that small market was filled with Unixes.
Even today, most native developers use Windows!
I am a techie, I travel a lot, and I've worked for both the 2 biggest enterprise Linux vendors.
Windows boxes are in use in both of them, and Windows laptops are everywhere -- but the second most popular platform after Windows (and, inside the Linux vendors, the second most popular after Linux) is macOS.
It's everywhere.
The partial exception was when I moved to Czechia. A decade ago it was still relatively poor. Few iDevices, few Android phones, lots of Windows Mobile then. Not many Macs except inside big companies.
But that's changed. Now they're everywhere there, too.
MacOS is huge. I see Macs everywhere, far far more than I ever see ChromeBooks in the real world. I think most ChromeBooks are probably in schools.
At work, Windows. Forget Apple, or Chrome. Companies will set up AD and Windows.
On backends and high-tier servers, development machines, and so on: GNU/Linux, hands down. The IT companies virtualize Windows machines with KVM, set up an AD domain under a VM, and call it a day. No Apple there.
OTOH, I have to say that iPads have a much better touchscreen for handwritten stylus input. Hope Android solves that soon.
I reject all your generalisations.
I'm an Irish citizen who until 2023 lived in Brno then Prague. Since I moved to the Isle of Man I've travelled and worked in Latvia, the Netherlands, Spain, Germany, Belgium, Czechia, Austria, England, Ireland, and Scotland, from memory. While I lived in Czechia I routinely travelled in all the countries it borders.
I have seen more iPhones in use than any single make of Android, and more iPads than all other forms of tablet put together. MS Surface tablets are #2 and outnumber all Android tablets put together. I am typing on a work-provided MacBook Air and my boss also uses MacBooks, and I think the majority of the company works on them, but we stopped having a central office years ago so my sampling is very ad hoc.
At both the enterprise Linux vendors where I've been a paid full-time member of staff, most managers and marketing people use MacBooks. (IMHO this is a damning indictment of desktop Linux but that's incidental.)
My own direct observations in the last decade refute your claims. I can't give you numbers, but let's put it this way: I had to contact the majority of the guests I invited to my wedding in 2023 via Apple Messages from my iMac, because they're not on any of the systems I use: WhatsApp, Signal, or Telegram.
Your experience is NOT representative.
Ah, well, that's the most irrelevant part of a company compared to actual product development.
Thus, it's no wonder everyone sees Macs as fancy toys just to show off with instead of doing actual work. The days of OS X on a G4 being a really good system for A/V and press/journalism production are long gone.
It can be, but any Windows machine, or even some Linux machine with Krita and some mid-range A/V tools on an -rt kernel with PipeWire, can destroy OS X on performance. Ardour is no joke, and people have tools like DaVinci Resolve. There's no need to spend $4-6k on a Mac Pro any more. Pick any high-end Nvidia card with hardware encoding/decoding and A/V production can be trivial.
If OS X is just a tool to do bullshit presentations, OS X is doomed on the desktop.
The iPad is everything else. It's really good at handwriting, it's really good for students at uni doing tons of writing and notes, and of course for painting and photo manipulation.
The developers who pick it because it is a Unix are people who develop for Unix-like OSes, mostly for servers, so they mostly do not produce software that drives adoption by people who are not developers.
In the world of OSes, "the same as what everyone else is using" is much more important than "new takes on old concepts".
In today's money, going with OS/2 instead would have cost me 1000 euros more when I bought my 386 PC.
[1] This was 1987, in still-communist Poland, so I bought a pirated copy at a famous Warsaw computer bazaar on Grzybowska Street.
I really had a special fondness for OS/2. But using it today, it really is a quirky thing. Maybe if it had won I wouldn't be thinking that way.
I agree.
https://www.theregister.com/2025/01/05/microsoft_os2_flop_fu...
I think OS/2 1.x should have targeted the 386, in the 1980s.
> Also, the RAM requirements made it pretty much impossible for people to recommend to their friends and family.
I bought OS/2 2.0 with my own cash, and ran it on several different 386SX machines in 4MB of RAM. It was usable on that spec.
Any good spec of machine for Windows 3.0 could run OS/2 2.x usefully without being unpleasant.
> It came a few years too early. I remember my shock when I got [1] my OS/2 copy and it was on 10 (yes, ten!) 3.5-inch floppy disks
Not so bad, really.
> while Windows needed two (one for main OS, one for some utilities).
No version of Windows came on 1 floppy.
Windows 1.01 took 4 360k disks -- here's a picture:
https://www.firstversions.com/2015/05/microsoft-windows.html
Windows 2 took 8 360k disks:
https://archive.org/details/microsoft-windows-v2.0
Windows 3 took 8 even on DS DD 720kB disks:
https://archive.org/details/windows_3.00_english_with_ms-dos...
By Windows 95 it was up to 27 high density 1.4MB disks:
https://www.reddit.com/r/interestingasfuck/comments/uopb1n/t...
10 is not bad at all for a full preemptive multitasking x86-32 OS with a GUI, IMHO.
Unfortunately, I think the ship may have sailed, as it's getting too hard to both start from scratch and also provide support for everything from a web browser to drivers and so on. It was a lot easier when the to-do list was 1/100th the size. The workaround is to utilize what has already been done, but then that kind of defeats the entire purpose and you just get a slightly different flavor of the same thing.
The only other real option would be a radical revision of what a computer is. Something really simple that is maybe a bit closer to what Carl Sassenrath was doing with iOS (no, not Apple iOS, but the internet OS via Rebol thing) IIRC, or what the Forth folks do with hardware. I think Alan Kay may have talked about this as well from a Smalltalk perspective. The question is whether you can do anything interesting with it. I'm sure there are dozens of us that would give up YouTube and social media to have a fully understandable computing system :)
Things in the past look simple only if we look at them through today's lens.
The apparent complexity was the same.
Today I work with microservices, on top of Kubernetes, on top of cloud services, and I have to know a gazillion things. But I don't have the feeling that I had an easier time when I was a kid playing with C/C++ under DOS, learning assembly, writing terminate-and-stay-resident programs, trying to write simple device drivers, or trying to program the graphics hardware using whatever limited info I had access to. When I started doing desktop applications using Win32 and Qt, it didn't seem simpler than now. Learning how to use Linux syscalls or how to program for X11 in 2000 didn't seem simpler either.
Of course, software was much simpler, but now we have better tools, a lot more easily accessible information, and we have developed practices, standards, and methodologies to help. And since we have huge resources, we don't have to extract every last drop of performance from the hardware.
So, I don't think the life of the average programmer in the '70s, '80s, '90s, or 2000s was easier than it is now.
It only seems easy if we solve problems from 40 years ago using the knowledge, tools, and hardware we have now.
I don't think it is harder on programmers now (or easier for programmers in the past). I can do things in Python that would have made me a 10x programmer 40 years ago, if the hardware could have supported it. They also didn't have Stack Overflow back then... just a dog-eared copy of some old C book. They didn't have hardware like we do, hardware that would've made a supercomputer back then look like something you'd put in a toaster today (that was a poorly worded sentence... sorry). The challenges they faced were numerous.
My point is that the actual complexity layers are much worse now. Back in the Commodore 64 days, many users knew the machine inside and out. They could program in assembly, do graphics on the display...etc, all while understanding exactly what is going on. None of that was easy or as efficient as what I can do in Excel today or some 3D graphics program, but it was something you could wrap your head around. Today, we have huge monolithic amounts of code or hardware for everything. I don't understand anything about my hardware, I don't understand the millions of lines of windows, I don't understand the millions of lines in Microsoft Office, or how my web browser works or how Unreal engine was built...etc. It's the product of millions of people working together to create something beyond the limits of a single human.
If we wanted to truly start from scratch, there's no way (that I can see) we could reinvent all of that and actually get a large number of users. It's not impossible, but Herculean. You could do something if the to-do list was MUCH smaller, like what was done with TempleOS or Collapse OS.
Like what?
People built chording/pen/handwriting/gesture/Kinect/Wiimote/multitouch/mouse gesture/swipe gesture/joystick/Griffin PowerMate/3Dconnexion SpaceMouse/motion tracking/eye tracking/voice recognition/etc. input on current operating systems, so what do you want in the way of interaction that couldn't be built with a current OS?
I'm paraphrasing an old blog post of mine, but...
In today's world, spoiled with excellent development tools, everyone has forgotten that late-1980s and early-to-mid 1990s dev tools were awful: 1970s text-mode tools for writing graphical apps.
Apple acquired NeXT because it needed an OS, but what clinched the deal was the development tools (and the return of Jobs, of course.) NeXT had industry-leading dev tools. Doom was written on NeXTs. The WWW was written on NeXTs.
Apple had OS choices – modernise A/UX, or buy BeOS, or buy NeXT, or get bought and move to Solaris or something – but nobody except NeXT had Objective-C and Interface Builder, or the NeXT/Sun foundation classes, or anything like them.
The meta-irony being that if Apple had adapted A/UX, or failing that, had acquired Be for BeOS, it would be long dead by now, just a fading memory for middle-aged graphic designers. Without the dev tools, they'd never have got all the existing Mac developers on board, and never got all the cool new apps – no matter how snazzy the OS.
There is a reason why I have zero problems using Linux on a Raspberry Pi, yet every time I try to install it on a real computer I have lying around, I run into a myriad of nonsense problems which are particularly hard to solve.
If we want a new OS, we need to make it for a specific platform which is always identical in terms of hardware. I would say a PS3, Steam Deck, or Nintendo Switch would be good candidates.
They have plenty of identical hardware units in the market, and you could focus on the OS rather than supporting strange hardware issues.
You missed my favourite from the list: ReactOS
Hardware manufacturers write Windows drivers, the Linux community writes drivers for basically all consumer hardware, and Apple develops both the hardware and the OS with their own drivers.
That is one big issue and another one is software.
Writing drivers and porting software means both time and money.
However, if a new OS brought lots of benefits to both users and companies, it might tip the scale and make the time and money investment worthwhile.
Of course, by a new OS I don't mean just another platform that enables us to run software and use hardware, as existing OSes do that just fine.
By a new operating system, I mean one that enables a new computing paradigm, enables new types of software-to-software and software-to-hardware interactions, and makes a big disruption in the market. Something with the same kind of impact as AI or the introduction of smartphones.
I'm pretty sure Apple would have been bankrupt a few months later if they hadn't bought Steve back.
Same applies to cloud computing with language runtimes and serverless.
While it looks grim, there is some hope for OS lovers.
Haiku has a lot of C++98 code or even pre-standard C++, not least all the stuff re-used with permission from BeOS. As was usual for projects at that time, many fundamental building blocks are provided in-house rather than taken from a language standard. For example there's BString and BList.
Haiku also has seams of BSD code where there'd be a project to do Whatever (WiFi, TLS, drivers, etc.) "properly" in a way unique to Haiku but as a stop gap here's some BSD code until we finish our own proper solution, which of course never happens.
But is there any long-lived project for which this isn't true? Linux and the BSDs surely have many components that fall into this category.
> For example there's BString and BList.
BString is a much nicer string class to work with (IMO) than std::string. It lacks some modern conveniences, and it has some unfortunate footguns where some APIs return bytes and some return UTF-8 characters (the former should probably all be considered deprecated, indeed that's a BeOS holdover), but I don't think there's any intent to drop it.
BList could be better as well, but it's still a nicer API in many ways than std::vector. Our other homegrown template classes also are nicer or have particular semantics we want that the STL classes don't, so I don't think we'd ever drop them.
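For anyone who never used the Be API, here's a tiny sketch of what's being discussed, written from memory of the BeOS/Haiku headers (so treat the exact signatures as approximate); it only builds on Haiku against libbe. BString reports both a byte length and a UTF-8 "character" count, and BList hands back untyped pointers:

```cpp
// Haiku-only sketch: compile with the Be API, e.g. g++ example.cpp -lbe
#include <String.h>   // BString
#include <List.h>     // BList
#include <cstdio>

int main()
{
    // "héllo" is 5 characters to a human, but 6 bytes in UTF-8.
    BString s("héllo");
    printf("Length()     = %d bytes\n", (int)s.Length());        // byte count
    printf("CountChars() = %d 'chars'\n", (int)s.CountChars());  // UTF-8 characters

    // BList stores plain void* items; the caller tracks the real type.
    BList list;
    int answer = 42;
    list.AddItem(&answer);
    int* item = static_cast<int*>(list.ItemAt(0));
    printf("list item    = %d (of %d items)\n", *item, (int)list.CountItems());
    return 0;
}
```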
> Haiku also has seams of BSD code where there'd be a project to do Whatever (WiFi, TLS, drivers, etc.) "properly" in a way unique to Haiku
What would be the point of implementing WiFi drivers from scratch "uniquely" for Haiku? Even FreeBSD has started just copying drivers from Linux, so that may be in our future as well. I don't know that anyone ever really considered writing a whole 802.11 stack for Haiku; there was some work on a "native" driver or two at one point, but it was for hardware that we didn't have support for from the BSDs, and it still used the BSD 802.11 stack. Writing our own drivers there just seems like a waste of time; we might as well contribute to the BSD ones instead.
I don't think any other project like this exists. You're coming up on your 25th anniversary without shipping the release software!
I see that BString itself also uses this weird phrase "UTF-8 character". That's not a thing, and rather than just being technically wrong it's so weird I can't tell what the people who made it thought they meant or what the practical consequences might be.
I mean, it can't be worse than std::string in one sense because hey at least it picked... something. But if I can't figure out what that is maybe it's not better.
UTF-8 has code units, but they're one byte, so distinguishing them from bytes means either you're being weird about what a "byte" is or more likely you don't mean code units.
Unicode has characters, but well lets quote their glossary: "(1) The smallest component of written language that has semantic value; refers to the abstract meaning and/or shape, rather than a specific shape (see also glyph), though in code tables some form of visual representation is essential for the reader’s understanding. (2) Synonym for abstract character. (3) The basic unit of encoding for the Unicode character encoding. (4) The English name for the ideographic written elements of Chinese origin. [See ideograph (2).]"
So given BString is software it's probably working in terms of something concrete. My best guesses (plural, like I said, I'm not sure and I'm not even sure the author realised they needed to decide):
1. UTF-16 code units. This is the natural evolution of software intended for UCS-2 in a world where that's not a thing, our world.
2. Unicode code points. If you were stubbornly determined to keep doing the same thing despite the fact UCS-2 didn't happen, you might get here, which is tragic.
3. Unicode scalar values. Arguably useful, although in an intensely abstract way, the closest thing a bare metal language might attempt as a "character"
4. Graphemes. Humans think these are a reasonable way to cut up written language, which is a shame because machines can't necessarily figure out what is or is not a grapheme. But maybe the software tries to do this? There have been better and worse attempts.
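For what it's worth, the cheaper interpretations are easy to demonstrate. A small standalone sketch (plain C++, nothing Haiku-specific): bytes and code points already disagree on a short string with a combining accent, and counting graphemes would need real Unicode segmentation (e.g. ICU), which is exactly why "UTF-8 character" is such a slippery phrase.

```cpp
#include <cstdio>
#include <string>

// Count Unicode code points in a (well-formed) UTF-8 string by counting
// bytes that are NOT continuation bytes (continuation bytes look like 10xxxxxx).
static size_t count_code_points(const std::string& s)
{
    size_t n = 0;
    for (unsigned char c : s)
        if ((c & 0xC0) != 0x80)  // a lead byte starts a new code point
            ++n;
    return n;
}

int main()
{
    // "café" written as c a f e + U+0301 (combining acute accent):
    // 4 graphemes, 5 code points, 6 bytes.
    std::string s = "cafe\xCC\x81";
    printf("bytes       = %zu\n", s.size());              // 6
    printf("code points = %zu\n", count_code_points(s));  // 5
    // graphemes   = 4, but that needs Unicode segmentation (e.g. ICU)
    return 0;
}
```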
I don't love std::vector, but I can't see anything to recommend BList at all: it's all type-erased pointers, it doesn't have the right reservation API, and it provides its own weird sorting, which doesn't even say whether it's a stable sort.
Very nice OS, but I remember the programming API to be tricky since everything was multi-threaded.