…and NetBSD[0], OpenBSD[1], but apparently not DragonFly BSD[2].
[0] https://netbsd.org/docs/kernel/uvm.html
[1] https://man.openbsd.org/OpenBSD-3.0/uvm.9
[2] https://www.dragonflybsd.org/mailarchive/kernel/2011-04/msg0...
Whilst all three BSDs (386BSD, FreeBSD, and NetBSD; there was no OpenBSD in the beginning) did inherit the legacy Mach 2.5-style design, it did not live on in FreeBSD, whose core team quickly set about replacing all remaining vestiges of the Mach VM[0] with a complete, modern, and highly performant rewrite of the entire VM. By FreeBSD 4, in the late 1990s, none of the original Mach code was left in the kernel codebase. Therefore, FreeBSD can only be referenced in relation to Mach for the initial separation/very early foundation stage.
NetBSD (and OpenBSD) carried the Mach design on for a while but likewise hit the wall with it (performance, SMP/scalability, networking) and also set out on a complete rewrite with UVM (Unified Virtual Memory), designed and led by Chuck Cranor, who wrote his dissertation on it. OpenBSD later adopted the UVM implementation, which remains in use today.
So out of all living BSDs[1], only XNU/Darwin continues to use Mach, and not Mach 2.5 but Mach 3. Mach 2.5, 3, and 4 have all existed (GNU/Hurd uses Mach 4), compatibility between them is rather low, and what they share remains mostly at the overall architectural level. They are better treated as distinct designs with a shared influence.
[0] Of which there were not that many to start off with.
[1] I am not sure whether DragonFly BSD is dead or alive today at all.
Oof, yeah [0][1]. I hope they're doing alright - technically fascinating, and charming as they march to the beat of their own accordion.[2][3][4][5]
[0] https://www.dragonflybsd.org/release64/
[1] https://gitweb.dragonflybsd.org/dragonfly.git
[2] https://www.dragonflybsd.org/mailarchive/kernel/2012-03/msg0...
[3] http://www.bsdnewsletter.com/2007/02/Features176.html
Also note that HAMMER (the previous design) and HAMMER2 (the current design, since 2018) are two distinct, incompatible file system designs. I am not sure what the value of mentioning the previous, abandoned design in this context is.
Right - the git repo has commits from yesterday, but it ain’t no NetBSD… (h/t ‘o11c)
> Also note that HAMMER (the previous design) and HAMMER2 (the current design, since 2018) are two distinct, incompatible file system designs. I am not sure what the value of mentioning the previous, abandoned design in this context is.
Sure - I linked to the first for the general intro, which mentions Hammer2 in the first paragraph if anybody reads through… my mistake.
It seems to have about the same level of activity as NetBSD. Take that how you will.
[1] https://www.theregister.com/2025/03/08/kernel_sanders_apple_...
- The Mach microkernel originally supported true userland paging, like mmap but with an arbitrary daemon in place of the filesystem. You can see the interface here:
https://web.mit.edu/darwin/src/modules/xnu/osfmk/man/memory_...
But I'm not sure if Darwin ever used this functionality; it certainly hasn't used it for the last ~20 years.
- dynamic_pager never used this interface. It used a different, much more limited Mach interface where xnu could alert it when it was low on swap; dynamic_pager would create swap files, and pass them back into the kernel using macx_swapon and macx_swapoff syscalls. But the actual swapping was done by the kernel. Here is what dynamic_pager used to look like:
https://github.com/apple-oss-distributions/system_cmds/blob/...
But that functionality has since moved into the kernel, so now dynamic_pager does basically nothing:
https://github.com/apple-oss-distributions/system_cmds/blob/...
- The vast majority of kernel memory is wired and cannot be paged out. But the kernel can explicitly ask for pageable memory (e.g. with IOMallocPageable), and yes, that memory can be swapped to disk. It's just rarely used.
Still, any code that does this needs to be careful to avoid deadlocks. Even though userland is no longer involved in "paging" per se, it's still possible and in fact common for userland to get involved one or two layers down. You can have userland filesystems with FSKit (or third-party FUSE). You can have filesystems mounted on disk images which rely on userland to convert reads and writes to the virtual block device into reads and writes to the underlying dmg file (see `man hdiutil`). You can have NFS or SMB connections going through userland networking extensions. There are probably other cases I'm not thinking of.
EDIT: Actually, I may be wrong about that last bit. You can definitely have filesystems that block on userspace, but it may not be supported to put swap on those filesystems.
What's the benefit of this indirection through userspace for swap file creation? Can't the kernel create the swap file itself?
Meanwhile Linux allows you to swapon(2) just about anything. A file, a partition, a whole disk, /dev/zram, even a zvol. (The last one could lead to a nasty deadlock, don't do it.)
Perhaps the XNU/NeXT/Darwin/OSX developers wanted a similar level of flexibility? Have the right piece in place, even just as a stub?
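For a concrete sense of how little ceremony the Linux side needs, here's a minimal sketch in C; the device and file paths are made-up examples, and the swap areas are assumed to have been prepared with mkswap beforehand:

    /* Minimal sketch of enabling swap areas on Linux via swapon(2).
     * Paths are made-up examples; run as root, and prepare each area
     * with mkswap first. */
    #include <stdio.h>
    #include <sys/swap.h>   /* swapon(), swapoff(), SWAP_FLAG_* */

    int main(void)
    {
        /* Prefer the zram device over the file-backed area. */
        if (swapon("/dev/zram0",
                   SWAP_FLAG_PREFER | (10 << SWAP_FLAG_PRIO_SHIFT)) != 0)
            perror("swapon /dev/zram0");

        if (swapon("/swap/swapfile", 0) != 0)
            perror("swapon /swap/swapfile");

        /* ... later, e.g. before deleting the file ... */
        if (swapoff("/swap/swapfile") != 0)
            perror("swapoff /swap/swapfile");

        return 0;
    }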
Especially when I think about how committed they are to Darwin, it really paints a poor image in my mind: the loss that open source suffers from that, and the time and money Apple has to dedicate to it for a disproportionate return.
Maybe if Apple had been able to keep classic MacOS going five years longer, or Linux had matured five years earlier, the OS X transition could have been very different. But throwing out XNU in favor of a pre-2.6 Linux kernel wouldn't have made much sense.
In any case, as others have noted, the timeline here w.r.t. NeXTSTEP is backwards.
People have responded to you with timelines explaining why it couldn't have happened, but you seem to keep restating this claim without more substance or context for the time.
IMHO Linux would have been the wrong choice, and perhaps the premise is incorrect anyway. The Mac is not really BSD-based outside of the userland. The kernel was and is significantly different, and would have hard-forked from Linux if they had used it at the time.
Often when people say Linux they mean (the oft-memed) GNU/Linux, except GNU diverged significantly from the POSIX command-line tools (in that sense macOS is truer to them), and the GPLv3 license is anathema to Apple.
I don’t see any area where basing off Linux would have resulted in materially better results today.
What benefit would it have had at the time? What guarantees would it have given at the time that would have persisted three decades later?
Apple made its decision in 1996.
And it only became usable as a Solaris/AIX/HP-UX replacement thanks to the money IBM, Oracle, and Compaq pumped into Linux's development around 2000; that is even on the official timeline.
It was a very different world. We won't even talk about audio and video playback. I was an early Linux user, having done my first install in 1993, and sadly ran Windows on my desktop then because the Linux desktop experience was awful.
>There wouldn't have been any downsides for them
Really? NO downsides???
- throwing away a decade and a half of work and engineering experience (Avie Tevanian helped write Mach; this is like having Linus as your chief of software development and saying "just switch to Hurd!")
- uncertain licensing (Apple still ships ancient bash 3.2 because of GPL)
- increased development time to a shipping, modern OS (it already took them 5 years to ship 10.0, and it was rough)
That's just off the top of my head. I believe you think there wouldn't have been any downsides because you didn't stop to think of any, or are ideologically disposed to present the Linux kernel in 1996 as being better or safer than XNU.
Well, there's a parallel universe! Beige boxes running BeOS would have been late-90s cool, maybe, but would we still have had the same upending results for mobile phones, industrial design, world integration, streaming media services…
If by biggest rival you mean Microsoft, it was Microsoft who saved Apple from bankruptcy in 1997.
The fact Microsoft announced they were investing, and that they were committed to continue shipping Office to Mac, definitely helped.
As things now stand, FreeBSD represents many of the benefits of Darwin and the open source nature of Linux. If you seek a more secure environment without Apple's increasing levels of lock-in, then FreeBSD (and the other BSDs) merit consideration for deployment.
As I understand it, Mach was based on BSD and was effectively a hybrid, with much of the existing BSD kernel running as a single big task under the microkernel. Darwin has since updated the BSD kernel under the microkernel with current developments from FreeBSD.
"Mach was developed as a replacement for the kernel in the BSD version of Unix," (https://en.wikipedia.org/wiki/Mach_(kernel))
Interestingly, MkLinux was the same type of project but for Linux instead of BSD (i.e. Linux userland with Mach kernel).
> Throughout this time the promise of a "true" microkernel had not yet been delivered. These early Mach versions included the majority of 4.3BSD in the kernel, a system known as a POE Server, resulting in a kernel that was actually larger than the UNIX it was based on.
And https://en.wikipedia.org/wiki/XNU
> XNU was originally developed by NeXT for the NeXTSTEP operating system. It was a hybrid kernel derived from version 2.5 of the Mach kernel developed at Carnegie Mellon University, which incorporated the bulk of the 4.3BSD kernel modified to run atop Mach primitives,
MkLinux is similar. https://en.wikipedia.org/wiki/MkLinux
> The name refers to the Linux kernel being adapted to run as a server hosted on the Mach microkernel, version 3.0.
Last time I tried (~10 years ago) I gave up and assumed FreeBSD was a server OS, because I couldn't for the life of me get the Nvidia drivers working at native resolution. I don't recall the specifics, but Bluetooth was problematic as well.
Looks like (some) laptops might sleep and wifi is on the way! (with help from Linux drivers)
No. FreeBSD committed the original sin of UNIX by deliberately dropping support for all non-Intel architectures, intending to focus on optimising FreeBSD for the Intel ISA and platforms. UNIX portability and support for a diverse range of CPUs and hardware platforms are ingrained in the DNA of UNIX, however.
I would argue that FreeBSD has paid the price for this decision – FreeBSD has faded into irrelevance today (despite having introduced some of the most outstanding and brilliant innovations in UNIX kernel design) – because the FreeBSD core team bet heavily on Intel remaining the only hardware platform in existence, and they missed the turn (ARM, RISC-V, and marginally MIPS in embedded). Linux stepped in and filled the niche very quickly, and it now runs everywhere. FreeBSD is faster, but Linux is better.
And it does not matter that Netflix still runs FreeBSD on its servers serving up the content at the theoretical speed of light – it is a sad living proof of FreeBSD having become a niche within a niche.
P.S. I would also argue that the BSD core teams (Free/Net/Open) were a major factor in the downfall of all BSDs, due to their insular nature and, especially in the early days, a near-hostile attitude towards outsiders. «Customers» voted with their feet – and chose Linux.
In my opinion the single factor that has contributed the most to Linux's greater success over FreeBSD has been the transition to multithreaded and multicore CPUs even in the cheapest computers, which started in 2003 with the SMT-capable Intel Pentium 4, followed in 2005 by the dual-core AMD CPUs.
Around 2003, FreeBSD 4.x was the most performant and the most reliable operating system for single-core single-thread CPUs, for networking or storage applications, well above Linux or Microsoft Windows (source: at that time I was designing networking equipment and we had big server farms on which the equipment was tested, under all operating systems).
However, it could not use CPUs with multiple cores or threads, so on such CPUs it fell behind Linux and Windows. The support introduced in FreeBSD 5.x was only partial, and many years passed before FreeBSD again had competitive performance on up-to-date CPUs. Other BSD variants were even slower in their conversion to multithreaded support. During those years the fraction of *BSD users diminished a lot.
The second most important factor has been the much smaller set of device drivers for various add-on interface cards compared to Linux. Only a few hardware vendors provided FreeBSD device drivers for their products, mostly just Intel and NVIDIA, and for the products of other vendors there were fewer FreeBSD users able to reverse engineer them and write device drivers than there were on the Linux side.
The support for non-x86 ISAs has also been worse than in Linux, but that was just one aspect of the generally narrower hardware support compared to Linux.
All this has been caused by positive feedback: FreeBSD started with fewer users, because by the time the lawsuits were settled favourably for FreeBSD, most potential users had already started to use Linux. The smaller number of users was then less capable of porting the system to new hardware devices and newer architectures, which led to even lower adoption.
Nevertheless, there have always been various details in the *BSD systems that are better than in Linux. A few of them have been adopted in Linux, like the software package systems that are now ubiquitous in Linux distributions, but in many cases Linux users invented alternative solutions, often inferior ones, instead of studying the *BSD systems to see whether an already existing solution could be adopted instead of inventing yet another alternative.
The first mistake was that all the BSD core teams flatly refused to provide native support for the JVM back in its heyday. They eventually partially conceded and made it work using Linux emulation; however, it was riddled with bugs, crashes, and other issues for years before it could run Java server apps. Yet users clamoured, loudly and immediately, to run Java applications.
The second grave mistake was to flatly refuse to support containerisation (Docker) due to it not being deemed kosher. Linux-based containerisation is what underpins all cloud computing today. Again, FreeBSD support arrived too late, and it was too little.
P.S. I still hold the view that FreeBSD made matters even worse by dropping support for non-Intel platforms early on – at a stage when its bleak future was already all but certain. New CPU architectures are enjoying a renaissance, whilst FreeBSD nervously sucks its thumb by the roadside of history.
1. Minimal kernel isolation.
2. Optional network stack isolation via VNET (but not used by default).
3. Rudimentary resource controls with no default enforcement (important!).
4. Simple capability security model.
Most importantly, since FreeBSD was a very popular choice for hosting providers at the time, jails were originally invented to fully support partitioned-off web hosting, rather than to run self-sufficient, fully contained (containerised) applications as first-class citizens. The claim to have invented true containers belongs to Solaris 10 (not Linux) and its zones. Solaris 10 was released in January 2005.
Seems pretty extensive to me, including R/W bytes/s and R/W ops/s:
* https://docs.freebsd.org/en/books/handbook/jails/#jail-resou...
* https://klarasystems.com/articles/controlling-resource-limit...
Isolation: With rootless Podman it seems to be on the same level as Jails - but only if you run Podman with SELinux or AppArmor enabled. Without SELinux/AppArmor the Jails offer better isolation. When you run Podman with SELinux/AppArmor and then add the MAC Framework (like mac_sebsd/mac_jail/mac_bsdextended/mac_portacl) on the FreeBSD side, the Jails are more isolated again.
Kernel Syscall Surface: Even rootless Podman has 'full' syscall access unless blocked by seccomp or SELinux. Jails have restricted use of syscalls without any additional tools - and that can be narrowed further with the MAC Framework on FreeBSD.
Firewall: You cannot run a firewall inside a rootless Podman container. You can run an entire network stack and any firewall (like PF or IPFW) independently from the host inside a VNET Jail - which means more security.
TL;DR: FreeBSD Jails are generally more secure out-of-the-box compared to Podman containers and even more secure if you take the time to add additional layers of security.
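For anyone curious what that looks like below jail(8), here is a minimal sketch of creating a persistent jail with jail_set(2); the path and hostname are made-up examples, and a real setup would additionally configure VNET networking, devfs rules, and rctl limits:

    /* Minimal sketch: create and attach to a persistent FreeBSD jail
     * with jail_set(2). Path and hostname are made-up examples; a real
     * setup would also configure VNET, devfs rules, rctl limits, etc. */
    #include <sys/param.h>
    #include <sys/jail.h>
    #include <sys/uio.h>

    #include <err.h>
    #include <stdio.h>

    int main(void)
    {
        struct iovec iov[] = {
            { .iov_base = "path",          .iov_len = sizeof("path") },
            { .iov_base = "/jails/demo",   .iov_len = sizeof("/jails/demo") },
            { .iov_base = "host.hostname", .iov_len = sizeof("host.hostname") },
            { .iov_base = "demo.example",  .iov_len = sizeof("demo.example") },
            { .iov_base = "persist",       .iov_len = sizeof("persist") },
            { .iov_base = NULL,            .iov_len = 0 },
        };

        int jid = jail_set(iov, nitems(iov), JAIL_CREATE | JAIL_ATTACH);
        if (jid == -1)
            err(1, "jail_set");

        printf("now running inside jail %d\n", jid);
        return 0;
    }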
> How battle-tested are FreeBSD Jails?
Jails have been in production since 1999/2000, when they were introduced - so 25 years strong - very well battle-tested.
Docker has been with us since 2014, so that means about 10 years less - but we must compare to Podman ...
Rootless support for Podman first appeared in late 2019 (1.6), so less than 6 years to test.
That means Jails are the most battle tested of all of them.
Hope that helps.
Regards,
vermaden
There were two problems.
The first was that FreeBSD really wanted to own the whole disk. If you wanted to dual boot with DOS/Windows you were supposed to put FreeBSD on a separate disk. Linux was OK with just having a partition on the same disk you had DOS/Windows on. For those of us whose PCs only had one hard disk, buying a copy of Partition Magic was cheaper than buying a second hard disk.
The reason for this was that the FreeBSD developers felt that multiple operating system on the same disk was not safe due to the lack of standards for how to emulate a cylinder/head/sector (CHS) addressing scheme on disks that used logical block addressing (LBA). They were technically correct, but greatly overestimated the practical risks.
In the early days PC hard disks used CHS addressing, and the system software such as the PC BIOS worked in those terms. Software using the BIOS, such as DOS applications and DOS itself, worked with CHS addresses, and the number of cylinders, heads, and sectors per track (the "drive geometry") they saw matched the actual physical geometry of the drive.
The INT 13h BIOS interface for low level disk access allowed for a maximum of 1024 cylinders, 256 heads, and 63 sectors per track (giving a maximum possible drive size of 8 GB if the sectors were 512 bytes).
At some point as disks got bigger drives with more than 63 sectors per track became available. If you had a drive with for example 400 cylinders, 16 heads, and 256 sectors per track you would only be able to access about 1/4 of the drive using CHS addressing that uses the actual drive geometry.
It wasn't really practical to change the INT 13h interface to give the sectors per track more bits, and so we entered the era of made up drive geometries. The BIOS would see that the disk geometry is 400/16/256 and make up a geometry with the same capacity that fit within the limits, such as 400/256/16.
Another place with made up geometry was SCSI disks. SCSI used LBA addressing. If you had a SCSI disk on your PC whatever implemented INT 13h handling for that (typically the BIOS ROM on your SCSI host adaptor) would make up a geometry. Different host adaptor makers might use different algorithms for making up that geometry. Non-SCSI disk interfaces for PCs also moved to LBA addressing, and so the need to make up a geometry for INT 13h arose with those too, and different disk controller vendors might use a different made up geometry.
So suppose you had a DOS/Windows PC, you repartitioned your one disk to make room for FreeBSD, and went to install FreeBSD. FreeBSD does not use the INT 13h BIOS interface. It uses its own drivers to talk to the low level disk controller hardware and those drivers use LBA addressing.
It can read the partition map and find the entry for the partition you want to install on. But the entries in the partition map use CHS addressing. FreeBSD would need to translate the CHS addresses from the partition map into LBA addresses, and to do that it would need to know the disk geometry that whatever created the partition map was using. If it didn't get that right and assumed a made up geometry that didn't match the partitioner's made up geometry the actual space for DOS/Windows and the actual space for FreeBSD could end up overlapping.
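The translation itself is trivial once you know the geometry; all of the danger was in guessing a different heads/sectors-per-track pair than the partitioning tool used. A quick standalone C sketch of the arithmetic, just to show the ambiguity:

    #include <stdint.h>
    #include <stdio.h>

    /* Classic CHS -> LBA translation. The result is only meaningful if
     * `heads` and `spt` match the (possibly made-up) geometry that the
     * partitioning tool assumed when it wrote the partition table. */
    static uint64_t chs_to_lba(uint32_t c, uint32_t h, uint32_t s,
                               uint32_t heads, uint32_t spt)
    {
        /* Sectors are numbered from 1 in CHS, from 0 in LBA. */
        return ((uint64_t)c * heads + h) * spt + (s - 1);
    }

    int main(void)
    {
        /* INT 13h ceiling: 1024 cylinders, 256 heads, 63 sectors/track. */
        uint64_t max_bytes = 1024ULL * 256 * 63 * 512;
        printf("INT 13h ceiling: %llu bytes (~%.2f GB)\n",
               (unsigned long long)max_bytes, max_bytes / 1e9);

        /* The same partition start, interpreted with two different
         * made-up geometries, lands on very different LBAs. */
        printf("CHS 100/0/1 @ 255 heads, 63 spt -> LBA %llu\n",
               (unsigned long long)chs_to_lba(100, 0, 1, 255, 63));
        printf("CHS 100/0/1 @  16 heads, 63 spt -> LBA %llu\n",
               (unsigned long long)chs_to_lba(100, 0, 1, 16, 63));
        return 0;
    }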
In practice you can almost always figure out, from looking at the partition map, what geometry the partitioner used with enough accuracy to avoid stomping on someone else's partition. Partitions started at track boundaries, and typically the next partition started as close as possible to the end of the previous partition, which sufficiently narrows down where each partition is supposed to be in LBA address space.
That was the approach taken by most SCSI vendors and it worked fine. I think eventually FreeBSD did start doing this too but by then Linux had become dominant in the "Dual boot DOS/Windows and a Unix-like OS on my one disk PC" market.
The other problem was CD-ROM support. FreeBSD was slow to support IDE CD-ROM drives. Even people who had SCSI on their home PC and used SCSI hard disks were much more likely to have an IDE CD-ROM than a SCSI CD-ROM. SCSI CD-ROM drives were several times more expensive and it wasn't the interface that was the bottleneck so SCSI CD-ROM just didn't make much sense on a home PC.
For many, then, it came down to this: with Linux they didn't need a two-disk system and they could install from a convenient CD-ROM, but for FreeBSD they would need a dedicated disk and would have to deal with a stack of floppies.
I loved that!
IMO it's really a mixture of factors, some I can think of:
- BSD projects were slowed down by the AT&T lawsuit in the early '90s.
- FreeBSD focused more on expert users, whereas Linux distributions focused on graphical installers and configuration tools early on. Some distributions had graphical installers by the end of the '90s. So Linux distributions could onboard people who were looking for a Windows alternative much more quickly.
- BSD had forks very early on (FreeBSD, NetBSD, OpenBSD, BSDi). The cost of this is much higher than that of multiple Linux distributions, since each BSD maintains its own kernel and userland.
- The BSDs (except BSDi) were non-profits, whereas many early Linux distributions were by for-profit companies (Red Hat, SUSE, Caldera, TurboLinux). This gave Linux a larger development and marketing budget and it made it easier to start partnerships with IBM, SAP, etc.
- The BSD projects were organized as cathedrals and were more hierarchical, which made it harder for new contributors to step in.
- The BSD projects provided full systems, whereas Linux distributions would piece together systems. This made Linux development messier, but allowed quicker evolution and made it easier to adapt Linux for different applications.
- The GPL put a lot more pressure on hardware companies to contribute back to the Linux kernel.
Besides that there is probably also a fair amount of randomness involved.
Linux has been running uninterruptedly on s/390 since October 1999 (31-bit support, Linux v2.2.13) and since January 2001 for 64-bit (Linux v2.4.0). Linux mainlined PPC64 support in August 2002 (Linux v2.4.19), and it has been running on ppc64 happily ever since, whereas FreeBSD dropped ppc64 support around 2008–2010. Both s/390 and ppc64 (as well as many others) are hardly hobbyist platforms, and both remain in active use today. Yes, IBM was behind each port, but the Linux community benefited from those porting efforts at zero cost to itself.
I am also of the opinion that licensing is a red herring, as BSD/MIT licences are best suited for proprietary, closed-source development. However, the real issue with proprietary development is its siloed nature, and the fact that closed-source design and development very quickly start diverging from the mainline and become prohibitively expensive to maintain in-house long-term. So the bigwigs quickly figured out that they could make a sacrifice and embrace the GPL to reduce ongoing costs. Now, with *BSD core team-led development, new contributors (including commercial entities) would be promptly shown the door, whereas the Linux community would give them the warmest welcome. That was the second major reason for the downfall of all things BSD.
The lawsuit was settled in Feb 1994; FreeBSD was started in 1993. FreeBSD was started because development on 386BSD was too slow. It took FreeBSD until Nov 1994 to rebase on 4.4BSD-Lite (in FreeBSD 2.0.0).
At the time 386BSD and then FreeBSD were much more mature than Linux, but it took from 1992 until the end of 1994 for the legal situation around 386BSD/FreeBSD to clear up. So Linux had about three years to try to catch up.
FreeBSD supports amd64 and aarch64 as Tier 1 platforms and a number of others (RISC-V, PowerPC, ARMv7) as Tier 2.
FreeBSD started demoting non-Intel platforms around 2008-2010, with FreeBSD 11, released in 2016, only supporting x86. The first non-Intel architecture support was reinstated in April 2021, with the official release of FreeBSD 13 – over a decade of time irrevocably lost.
Plainly, FreeBSD has missed the boat – the first AWS Graviton CPU was released in 2018, and it ran Linux. Everything now runs Linux, but it could have been FreeBSD.
It is not only Netflix; Sony is also quite fond of cherry-picking stuff from the BSDs for their Orbis OS.
Finally, I would assert that the Linux kernel as we know it today is only relevant because the people responsible for its creation still walk this planet; like every project, when the creators are no longer around it will be taken in directions that no longer match the original goals.
https://en.m.wikipedia.org/wiki/MkLinux
I don’t think there was any work done on bringing the Macintosh GUI and application ecosystem to Linux. However, until the purchase of NeXT, Apple already had the Macintosh environment running on top of Unix via A/UX (for 68k Macs) and later the Macintosh Application Environment for Solaris and HP-UX; the latter ran Mac OS as a Unix process. If I remember correctly, the work Apple did for creating the Macintosh Application Environment laid the groundwork for Rhapsody’s Blue Box, which later became Mac OS X’s Classic environment. It is definitely possible to imagine the Macintosh Application Environment being ported to MkLinux. The modern FOSS BSDs were also available in 1996, since this was after the settlement of the lawsuit affecting the BSDs.
Of course, running the classic Mac OS as a process on top of Linux, FreeBSD, BeOS, Windows NT, or some other contemporary OS was not a viable consumer desktop OS strategy in the mid 1990s, since this required workstation-level resources at a time when Apple was still supporting 68k Macs (Mac OS 8 ran on some 68030 and 68040 machines). This idea would've been more viable in the G3/G4 era, and by the 2000s it would have been feasible to give each classic Macintosh program its own Mac OS process running on top of a modern OS, but I don't think Apple would have made it past 1998 without Jobs' return, not to mention that the NeXT purchase brought other important components to the Mac such as Cocoa, IOKit, Quartz (the successor to Display PostScript) and other now-fundamental technologies.
QTML (which became the foundation of the Carbon API) was OS agnostic. The Windows versions of QuickTime and iTunes used QTML, and in an alternate universe Apple could've empowered developers to bring Mac OS apps to Windows and Linux with a more mature version of that technology.
MkLinux was released in February 1996 whilst Copland got officially cancelled in August 1996.
So it's definitely conceivable that internally they were considering just giving up on the Copland microkernel and running it all on Linux. And maybe this was a legitimate third option to BeOS and NeXT that was never made public.
Taken a different way, it feels similar to suggesting Apple should rebase Safari on Chromium.
XNU is only partially open sourced – the core is open sourced, but significant chunks are missing, e.g. APFS filesystem.
Forking Linux might have legally compelled them to make all kernel modules open source – which, while that would likely be a positive for humanity, isn't what Apple wants to do.
Stallman, after speaking with lawyers, rejected this.
https://sourceforge.net/p/clisp/clisp/ci/default/tree/doc/Wh...
Look for "NeXT" on this page.
2) Apple had no money or time to invest in rewriting NeXTSTEP for a completely new kernel they had no experience in, especially when so many of the dev team were involved in sorting out Apple's engineering and tech strategy, as well as all the features needed to make it more Mac-like.
3) Apple was still using PowerPC at the time, which NeXTSTEP supported but Linux did not. It took IBM a couple of years to get Linux running on it.
And even if they had had the money and time, Avie Tevanian¹ was a principal designer and engineer of Mach². There was no NeXTSTEP-based path where the Mach-derived XNU would not be at the core of Apple's new OS family.
¹ https://en.wikipedia.org/wiki/Avie_Tevanian ² https://en.wikipedia.org/wiki/Mach_(kernel)
I think it's hard to overstate how much traction Linux had in the late '90s / early 2000s. It felt like groundbreaking stuff was happening pretty much all the time, major things were changing rapidly every release, and it felt exciting and genuinely revolutionary to download updates and try out all the new things. It really felt like you were on the bleeding edge; your system would break all the time, but it was fun and exciting.
I remember reading Slashdot daily, being excited to try out every new distribution I'd see on DistroWatch, downloading and building kernels fairly regularly, etc.
Things I can remember from back in those days:
- LILO to GRUB boot loader changes
- Going from EXT2 to EXT3 and all the other experimental filesystems that kept coming out.
- Sound system changing from OSS to ALSA
- Introduction of /sys
- Gentoo and all the memes (funroll-loops website)
- Udev and being able to hotplug usb devices
- Signalfd
- Splice/VMsplice
- Early wireless support and the cursed "ndiswrapper"
Nowadays Linux is pretty stable and dare I say it "boring" (in a good way). It's probably mostly because I've gotten older and have way less free time to spend living on the bleeding edge. It feels like Linux has gone from something you had to wrestle with constantly to have a working system to a spot where nowadays everything "mostly works" out of the box. I can't remember the last time I've had to Ctrl+Alt+Backspace my desktop, for example.
Last major thing I can remember hearing about and being excited for was io_uring.
I broadly agree, but it is more nuanced than that. They actually had experience with Linux. Shortly before acquiring NeXT, they did the opposite of what you mentioned and ported Linux to the Mach microkernel for their MkLinux OS. It was cancelled at some point, but had things turned a bit differently, it could have ended up more important than it actually did.
Also, one thing you'll notice about big companies - they know that not only is time valuable, worst-case time is important too. If someone in an open-source ecosystem CAN delay your project, that's almost as bad as if they regularly DO delay your project. This is why big companies like Google tend to invent everything themselves, i.e. Google may have "invented Kubernetes" (really, an engineer at Google uninvolved with the progenitor of K8s - Borg - invented it based on Borg), but they still use Borg, which every Xoogler here likes to say is "not as good as k8s". Yet they still use it. Because it gives them full control, and no possibility of outsiders slowing them down.
They have a long history with XNU and BSD. And Linux has a GPL license, which might not suit Apple.
> Especially when I think about how committed they are to Darwin, it really paints a poor image in my mind: the loss that open source suffers from that, and the time and money Apple has to dedicate to it for a disproportionate return.
They share a lot of code with FreeBSD, NetBSD and OpenBSD. Which are open source. And Darwin is open source, too. So there's no loss that open source suffers.
I don't know what "the loss that open source suffers" is in this context.
I don't think Apple would need to spend less time or money on their kernel grafted on top of Linux 2.4 vs. their kernel grafted on top of FreeBSD 4.4.
Imagine if Apple decided to open source Darwin: wouldn't that be a big win for open source?
Edit: I see it's even cited at the end of this article. Truly a source for the (macOS) ages.
Or Inside Windows NT, if you want "version 1" of the Internals series. Or read the Windows NT OS/2 Design Workbook - https://computernewb.com/~lily/files/Documents/NTDesignWorkb....
Yes, Win32 is just one personality, but a required one. OpenNT, Interix, SFU, and SUA would ride alongside Win32. And of course there was the official OS/2 personality.
Not really... although NT was designed to run multiple "personalities" (or "environment subsystems" to use the official term), relatively early in its development they decided to make Win32 the "primary" environment subsystem, with the result that the other two subsystems (OS/2 and POSIX) ended up relying on Win32 for essential system services.
I think this multiple-personalities thing was the original vision, but it never really took off in the way its original architects intended – although there used to be OS/2 and POSIX subsystems, Microsoft never put a great deal of effort into them, and now they are both dead, so Win32 is the only environment subsystem left.
Yes, there is WSL, but: WSL1 is not an environment subsystem in the classic NT sense – it has a radically different implementation from the old OS/2 and POSIX subsystems, a "picoprocess provider". And WSL2 is just a Linux virtual machine.
No, what you quoted in the comment you are replying to is accurate. What you are saying in this comment isn't.
> Confirmed that os/2 2.0 was a skinning and compatibility layer for NT it came out for OS/2,
This is confused. OS/2 was not a “skinning and compatibility layer for NT” it was a completely separate operating system.
I think at one point NT was going to be OS/2 2.0, and then it was going to be OS/2 3.0 - but the OS/2 2.0 which eventually ended up shipping had nothing to do with NT; it was IBM's independent work, in which Microsoft was uninvolved (except maybe in its early stages).
I have three charts on my wall (now four): the Unix timeline, the Windows timeline, and the Linux distribution tree, and now a very decent Mac OS X timeline.
The personalities became containers, which is just the Windows version of common subsystem virtualization. Containers were based on Virtual PC, but with the genius of Mark Russinovich.
It was there from NT 3.1 until Windows 2000; it was removed in Windows XP onwards.
It was very limited – it only supported character mode 16-bit OS/2 1.x applications. 32-bit apps, which IBM introduced with OS/2 2.0, were never supported. Microsoft offered an extra cost add-on called "Microsoft OS/2 Presentation Manager For Windows NT" aka "Windows NT Add-On Subsystem for Presentation Manager", which added support for GUI apps (but still only 16-bit OS/2 1.x apps) – which was available for NT version 3.1 thru 4.0, I don't believe it was offered for Windows 2000.
The main reason why it existed – OS/2 1.x was jointly developed by IBM and Microsoft, with both having the right to sell it – so some business customers bought Microsoft OS/2 and then used it as the basis for their business applications – when Microsoft decided to replace Microsoft OS/2 with Windows NT, they needed to provide these customers with backward compatibility and an upgrade path, lest they jump ship to IBM OS/2 instead. But Microsoft never tried to support 32-bit OS/2, since Microsoft never sold it, and given their "divorce" with IBM they didn't have the rights to ship it (possibly they might have retained rights to some early in-development version of OS/2 2.0 from before the breakup, but definitely not the final shipped OS/2 2.0 version) – the OS/2 subsystem wasn't some completely from-scratch emulation layer, it was actually based off the OS/2 code, with the lower levels rewritten to run under Windows NT, but higher level components included OS/2 code largely unchanged.
> for which Cutler would take extreme umbridge in.
Windows NT was originally called NT OS/2, because it was originally going to be Microsoft OS/2 3.0. Part way through development – by which point Cutler and his team had already got the basics of the OS up and running on Microsoft Jazz workstations (an in-house Microsoft workstation design using Intel i860 RISC CPUs) – Microsoft and IBM had a falling out and there was a change of strategy: instead of NT providing a 32-bit OS/2 API, they'd extend the 16-bit Windows 3.x API to 32-bit and use that. So I doubt Cutler would take "extreme umbrage" at something which was the plan at the time he was hired, and remained the plan through the first year or two of NT's development.
> The personalities became containers which is just the windows version of common subsystem virtualization.
Containers and virtualization are (at least somewhat) successors to personalities / environment subsystems in terms of the purpose they serve – but in terms of the actual implementation architecture, they are completely different.
https://learn.microsoft.com/en-us/virtualization/windowscont...
https://learn.microsoft.com/en-us/windows/win32/procthread/j...
Correcting myself: Microsoft Jazz machines used MIPS not i860; the i860 machines were Microsoft Dazzle
NT for MIPS actually ended up shipping for customers; I believe NT for i860 was abandoned as soon as the MIPS port was usable
This was obviously due to the divorce, but also because the Cruiser API wasn't finalized.
> Our initial OS/2 API set centers around the evolving 32-bit Cruiser, or OS/2 2.0 API set. (The design of Cruiser APIs is being done in parallel with the NT OS/2 design.)
...
> Given the nature of OS/2 design (the joint development agreement), we have had little success in influencing the design of the 2.0 APIs so that they are portable and reasonable to implement on non-x86 systems.
Interestingly it does mention 32-bit OS/2 API support (Dos32* APIs). I’m not sure if this was just a plan never implemented or if they did implement it but pulled it out before shipping NT 3.1.
I now realise Microsoft did ship a beta SDK for OS/2 2.0 - but the last Microsoft pre-release of OS/2 2.0 is missing a lot of stuff compared to the final IBM OS/2 release - most notably the Workplace Shell (WPS). IBM originally planned to ship WPS as part of OfficeVision for OS/2 and only moved it into the core OS/2 product rather late in the 2.0 development cycle.
There are some very tiny points, but this is easily the very best to date. (I started with Rhapsody, Linux in Swedish, and NT 3.1.) (Ran MkLinux on a 7100, but never got accelerated video to work.)
The keystone is the code signing system. It's what allows apps to be granted permissions, or to be sandboxed, and for that to actually stick. Apple doesn't use ELF like most UNIXs do, they use a format called Mach-O. The differences between ELF and Mach-O aren't important except for one: Mach-O supports an extra section containing a signed code directory. The code directory contains a series of hashes over code pages. The kernel has some understanding of this data structure and dyld can associate it with the binary or library as it gets loaded. XNU checks the signature over the code directory and the VMM subsystem then hashes code pages as they are loaded on demand, verifying the hashes match the signed hash in the directory. The hash of the code directory therefore can act as a unique identifier for any program in the Apple ecosystem. There's a bug here: the association hangs off the Mach vnode structure so if you overwrite a signed binary and then run it the kernel gets upset and kills the process, even if the new file has a valid signature. You have to actually replace the file as a whole for it to recognize the new situation.
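(If you're curious how visible that extra signature data is from userspace, here is a rough sketch that just walks a thin 64-bit Mach-O's load commands and reports the LC_CODE_SIGNATURE blob; real tools also handle fat and 32-bit files.)

    /* Rough sketch: report whether a thin 64-bit Mach-O embeds an
     * LC_CODE_SIGNATURE blob (the signed code directory and friends).
     * Real tools also handle fat binaries and 32-bit files. */
    #include <fcntl.h>
    #include <mach-o/loader.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <mach-o file>\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct mach_header_64 mh;
        if (read(fd, &mh, sizeof(mh)) != sizeof(mh) || mh.magic != MH_MAGIC_64) {
            fprintf(stderr, "not a thin 64-bit Mach-O\n");
            return 1;
        }

        for (uint32_t i = 0; i < mh.ncmds; i++) {
            struct load_command lc;
            if (read(fd, &lc, sizeof(lc)) != sizeof(lc))
                break;
            if (lc.cmd == LC_CODE_SIGNATURE) {
                struct linkedit_data_command sig;
                memcpy(&sig, &lc, sizeof(lc));
                read(fd, (char *)&sig + sizeof(lc), sizeof(sig) - sizeof(lc));
                printf("code signature: %u bytes at file offset %u\n",
                       sig.datasize, sig.dataoff);
                return 0;
            }
            lseek(fd, lc.cmdsize - sizeof(lc), SEEK_CUR); /* skip payload */
        }
        printf("no LC_CODE_SIGNATURE found\n");
        return 0;
    }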
On top of this foundation Apple adds code requirements. These are programs written in a small expression language that specifies constraints over aspects of a code signature. You can write a requirement like, "this binary must be signed by Apple" or "this binary can be of any version signed by an entity whose identity is X according to certificate authority Y" or "this binary must have a cdhash of Z" (i.e. be that exact binary). Binaries can also expose a designated requirement, which is the requirement by which they'd like to be known by other parties. This system initially looks like overkill but enables programs to evolve whilst retaining a stable and unforgeable identity.
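To make that concrete, here's a hedged sketch using the Security framework's C API; the path and the requirement string are just illustrative choices:

    /* Sketch: check a binary against a code requirement with the
     * Security framework. Path and requirement are illustrative only.
     * Build: cc check.c -framework Security -framework CoreFoundation */
    #include <CoreFoundation/CoreFoundation.h>
    #include <Security/Security.h>
    #include <stdio.h>

    int main(void)
    {
        CFURLRef url = CFURLCreateWithFileSystemPath(
            NULL, CFSTR("/Applications/Safari.app"), kCFURLPOSIXPathStyle, true);

        SecStaticCodeRef code = NULL;
        if (SecStaticCodeCreateWithPath(url, kSecCSDefaultFlags, &code) != errSecSuccess) {
            fprintf(stderr, "cannot open code object\n");
            return 1;
        }

        /* "Must be signed by Apple" -- the same expression language used
         * for designated requirements. */
        SecRequirementRef req = NULL;
        if (SecRequirementCreateWithString(CFSTR("anchor apple"),
                                           kSecCSDefaultFlags, &req) != errSecSuccess) {
            fprintf(stderr, "cannot parse requirement\n");
            return 1;
        }

        OSStatus st = SecStaticCodeCheckValidity(code, kSecCSDefaultFlags, req);
        printf("requirement %s\n",
               st == errSecSuccess ? "satisfied" : "NOT satisfied");

        CFRelease(req);
        CFRelease(code);
        CFRelease(url);
        return 0;
    }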
The kernel exposes the signing identity of tasks to other tasks via ports. Requirements can then be imposed on those ports using a userspace library that interprets the constraint language. For example, if a program stores a key in the system keychain (which is implemented in user space) the keychain daemon examines the designated requirement of the program sending the RPC and ensures it matches future requests to use the key.
This system is abstracted by entitlements. These are key=value pairs that express permissions. Entitlements are an open system and apps can define their own. However, most entitlements are defined by Apple. Some are purely opt-in: you obtain the permission merely by asking for it and the OS grants it automatically and silently. These seem useless at first, but allow the App Store to explain what an app will do up front, and more generally enable a least-privilege stance where apps don't have access to things unless they need them. Some require additional evidence like a provisioning profile: this is a signed CMS data structure provided by Apple that basically says "apps with designated requirement X are allowed to use restricted entitlement Y", and so you must get Apple's permission to use them. And some are basically abused as a generic signed flags system; they aren't security related at all.
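As a small illustration of the shape of this, a process can ask the Security framework about entitlement values via the SecTask API (the key below is a well-known Apple entitlement, used here purely as an example; daemons normally query the peer task behind an IPC connection rather than themselves):

    /* Sketch: read an entitlement value for the current process.
     * The key is a well-known Apple entitlement, used purely as an
     * illustration.
     * Build: cc ent.c -framework Security -framework CoreFoundation */
    #include <CoreFoundation/CoreFoundation.h>
    #include <Security/SecTask.h>
    #include <stdio.h>

    int main(void)
    {
        SecTaskRef task = SecTaskCreateFromSelf(kCFAllocatorDefault);
        if (task == NULL) {
            fprintf(stderr, "SecTaskCreateFromSelf failed\n");
            return 1;
        }

        CFTypeRef value = SecTaskCopyValueForEntitlement(
            task, CFSTR("com.apple.security.app-sandbox"), NULL);

        if (value != NULL) {
            CFShow(value);              /* typically a CFBoolean here */
            CFRelease(value);
        } else {
            printf("entitlement not present\n");
        }

        CFRelease(task);
        return 0;
    }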
The system is then extended further, again through cooperation of userspace and XNU. Binaries being signable is a start but many programs have data files too. At this point the Apple security system becomes a bit hacky IMHO: the kernel isn't involved in checking the integrity of data files. Instead a plist is included at a special place in the slightly ad-hoc bundle directory layout format, the plist contains hashes of every data file in the bundle (at file not page granularity), the hash of the plist is placed in the code signature, and finally the whole thing is checked by Gatekeeper on first run. Gatekeeper is asked by the kernel if it's willing to let a program run and it decides based on the presence of extended attributes that are placed on files and then propagated by GUI tools like web browsers and decompression utilities. The userspace OS code like Finder invokes Gatekeeper to check out a program when it's been first downloaded, and Gatekeeper hashes every file in the bundle to ensure it matches what's signed in the binaries. This is why macOS has this slow "Verifying app" dialog that pops up on first run. Presumably it's done this way to avoid causing apps to stall when they open large data files without using mmap, but it's a pity because on fast networks the unoptimized Gatekeeper verification can actually be slower than the download itself. Apple doesn't care because they view out-of-store distribution as legacy tech.
Finally there is Seatbelt, a Lisp-based programming language for expressing sandbox rules. These files are compiled in userspace to some sort of bytecode that's evaluated by the kernel. The language is quite sophisticated and lets you express arbitrary rules for how different system components interact and what they can do, all based on the code signing identities.
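The long-deprecated public shim over that machinery is sandbox_init(3), which is still a handy way to poke at it; a minimal sketch (real apps don't call this themselves, they get a profile applied based on their entitlements):

    /* Sketch: enter one of the canned Seatbelt profiles via the old
     * sandbox_init(3) shim (deprecated since 10.8, but still present).
     * Real apps get their sandbox applied via entitlements instead. */
    #include <sandbox.h>
    #include <stdio.h>

    int main(void)
    {
        char *err = NULL;
        if (sandbox_init(kSBXProfilePureComputation, SANDBOX_NAMED, &err) != 0) {
            fprintf(stderr, "sandbox_init: %s\n", err);
            sandbox_free_error(err);
            return 1;
        }

        /* From here on, the kernel enforces the compiled profile. */
        if (fopen("/etc/hosts", "r") == NULL)
            printf("file access denied by the sandbox, as expected\n");
        return 0;
    }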
The above scheme has an obvious loophole that was only closed in recent releases: data files might contain code and they're only checked once. In fact for any Electron or JVM app this is true because the code is in a portable format. So, one app could potentially inject code into another by editing data files and thus subvert code signing. To block this in modern macOS Seatbelt actually sandboxes every single app running. AFAIK there is no unsandboxed code in a modern macOS. One of the policies the sandbox imposes is that apps aren't allowed to modify the data files of other apps unless they've been granted that permission. The policy is quite sophisticated: apps can modify other apps if they're signed by the same legal entity as verified by Apple, apps can allow others matching code requirements to modify them, and users can grant permission on demand. To see this in action go into Settings -> Privacy & Security -> App Management, then turn it off for Terminal.app and (re)start it. Run something like "vim /Applications/Google Chrome.app/Contents/Info.plist" and observe that although the file has rw permissions vim thinks it's read-only.
Now, I'll admit that my understanding of how this works ends here because I don't work for Apple. AFAIK the kernel doesn't understand app bundles, and I'm not sure how it decides whether an open() syscall should be converted to read only or not. My guess is that the default Seatbelt policy tells the kernel to do an upcall to a security daemon which understands the bundle format and how to read the SQLite permission database. It then compares the designated requirement of the opener against the policies expressed by the bundle and the sandbox to make the decision.
In my opinion "security" should always refer to the security of the computer owners or users.
These Apple features may be used for enhancing security, but the main purpose for which they have been designed is to give the computer vendor enhanced control over how the computer that they have sold, and which is supposed to no longer belong to them, is used by its theoretical owner, i.e. by allowing Apple to decide which programs the end user runs.
Even in the default out-of-the-box configuration, Apple isn't exercising editorial control over what apps you can run. Out of store distribution requires only a verified identity and a notarization pass, but notarization is a fully automated malware scan. There's no human in the loop. The App Store is different, of course.
Could Apple close up the Mac? Yes. The tech is there to do so and they do it on iOS. But... people have been predicting they'd do this from the first day the unfortunately named Gatekeeper was introduced. Yet they never have.
I totally get the concern and in the beginning I shared it, but at some point you have to just stop speculating and give them credit for what they've actually done. It's much easier to distribute an app Apple executives don't like to a Mac than it is to distribute an app Linux distributors don't like to Linux users, because Linux app distribution barely works if you go "out of store" (distro repositories). In theory it should be the other way around, but it's not.
Perhaps not in the strictest sense, but Apple continues to ramp up the editorial friction for the end user to run un-notarized applications.
I feel (or felt, pre-macOS 15) that right-click Open was an OK approach, but as we know that's gone; now it's xattr or Settings.app. More egregious is the monthly reminder that an application is doing something that you want it to do.
A level between "disable all security" and what macOS 15 introduces would be appreciated.
https://news.ycombinator.com/newsguidelines.html
Your reply could have omitted the first sentence.
Many years ago, at Macworld San Francisco, I met "Perry the Cynic", the Apple engineer who added code signing to Mac OS X. Nice person, but I also kind of hate him and wish I could travel back in time to stop this all from happening.
The purpose of these is for daemons to inherit the priority of their client apps when doing work for them.
I don't remember what thread groups/workloads are for.
That makes me wonder: how hard would it be to run the XNU kernel in something like a “Mach mode”, where you take the same kernel and drivers but run them separately as the Mach microkernel was intended?
I feel like from a security standpoint, a lot of situations would gladly call for giving up a little bit of performance for the process isolation security benefits that come from running a microkernel.
Is anybody here familiar enough with XNU to opine on this?
After a cursory Google search, I found this article:
https://www.zdnet.com/article/zfs-on-snow-leopard-forget-abo...
1. Integration of the kernel's VM with ZFS's adaptive replacement cache which runs in user space – memory pressure cooperation, page accounting and unified memory management. It also requires extensive VM modifications to support ZFS's controlled page eviction, fine-grained dirty page tracking, plus other stuff.
2. VMM alignment with the ZFS transactional semantics and intent logs – delayed write optimisations, efficient page syncing.
3. Support for large memory pages and proper memory page alignment – support for superpages (to reduce TLB pressure and to map large ZFS blocks efficiently) and I/O alignment awareness (to ensure proper alignment of memory pages to avoid unnecessary copies).
4. Memory-mapped I/O: different implementation of mmap and support for lazy checksumming for mmap pages.
5. Integration with kernel thread management and scheduling, co-operation with VMM memory allocators.
6. … and the list goes on and on.
ZFS is just not the right answer for consumer-facing and mobile/portable devices, due to being a heavyweight server design with vastly different design provisions and due to being the answer to an entirely different question.
FYI: Apple did a bunch of that work. They ported ZFS to OS X shortly after it was open sourced, with only read-only support landing in 10.5, and with it being listed as an upcoming feature of 10.6.
But something happened and they abandoned it. The rumour is that a Sun exec let the cat out of the bag about it being the next main filesystem for OS X (i.e. not just support for non-root drives), and this annoyed Jobs so much he canned the whole project.
The reality is NetApp sued Sun/Oracle over ZFS patents.
https://www.theregister.com/2010/09/09/oracle_netapp_zfs_dis...
It is not a given that ZFS would have performed well within the tight hardware constraints of the first ten or so generations of the iPhone – file systems such as APFS, btrfs or bcachefs are better suited for the needs of mobile platforms.
Another conundrum with ZFS is that ZFS disk pools really, really want a RAID setup, which is not a consumer-grade thing, and Apple is a consumer company. Even if ZFS had seen the light of day back then, there is no guarantee it would have lived on – I am not sure, anyway.
Very petty if true.
That’s one of the famous rumors.
As others here have said, Oracle bought Sun two years later. Between the increased memory requirements, uncertainty due to Sun's status as a going concern, and who knows what else, maybe it really did make sense not to go forward.
> avoid the runtime overhead of Objective-C in the kernel
From Apple docs[0]:
You typically don’t need to use the Objective-C runtime library directly when programming in Objective-C. This API is useful primarily for developing bridge layers between Objective-C and other languages, or for low-level debugging.
0. https://developer.apple.com/documentation/objectivec/objecti...
For example, how to map class objects to string representations of their names.
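A minimal sketch of that particular mapping through the runtime's C interface (nothing assumed beyond libobjc and Foundation being present):

    /* Minimal sketch: map between class objects and their names with the
     * Objective-C runtime's C API, the kind of thing bridge layers do.
     * Build: cc classes.c -lobjc -framework Foundation */
    #include <objc/runtime.h>
    #include <stdio.h>

    int main(void)
    {
        /* Name -> class object. */
        Class cls = objc_getClass("NSString");
        if (cls == Nil) {
            printf("NSString is not loaded in this process\n");
            return 0;
        }

        /* Class object -> name, plus the superclass chain. */
        for (Class c = cls; c != Nil; c = class_getSuperclass(c))
            printf("%s\n", class_getName(c));
        return 0;
    }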
- Discussion of paging mixes together some concepts as I described in [1].
- Mach port "rights" are not directly related to entitlements. Port rights are part of the original Mach design; entitlements are part of a very different, Apple-specific security system grafted on much later. They are connected in the sense that Mach IPC lets the receiver get an "audit token" describing the process that sent them, which it can then use to look up entitlements.
- All IOKit calls go through Mach IPC, not just asynchronous events.
- "kmem" (assuming this refers to the kmem_* functions) is not really a “general-purpose kernel malloc”; that would be kalloc. The kmem_* functions are sometimes used for allocations, but they’re closer to a “kernel mmap” in the sense that they always allocate new whole pages.
- It’s true that xnu can map the same physical pages into multiple tasks read-only, but that’s nothing special. Every OS does that if you use mmap or similar APIs. What does make the shared cache special is that it can also share physical page tables between tasks.
- The discussion about “shared address space” is mixing things up.
The current 64-bit behavior is the same as the traditional 32-bit behavior: the lower half of the address space is reserved for the current user process, and the upper half is reserved for the kernel. This is typically called a shared address space, in the sense that the kernel page tables are always loaded, and only page permissions prevent userland from accessing kernel memory. Though you could also think of it as a 'separate' address space in the sense that userland and kernel stick to separate addresses. Anyway, this approach is more efficient (because you don't have to swap page tables for every syscall) and it's the standard thing kernels do.
What was tricky and unusual was the intermediate 32-bit behavior where the kernel and user page tables actually were completely independent (so the same address would mean one thing in user mode and another thing in kernel mode). This allowed 32-bit user processes to use more memory (4GB rather than 2GB), but at the cost of making syscalls more expensive.
Even weirder, in the same era, xnu could even run 64-bit processes while itself being 32-bit! [2]
- The part about Secure Enclave / Exclaves does not explain the main difference between them: the Secure Enclave is its own CPU, while Exclaves are running on the main CPU, just in a more-trusted context.
- Probably shouldn't describe dispatch queues as a "new technique". They're more than 15 years old, and now they're sort of being phased out, at least as a programming model you interact with directly, in favor of Swift Concurrency. To be fair, Swift Concurrency uses libdispatch as a backend.
[1] https://news.ycombinator.com/item?id=43599230
[2] https://superuser.com/questions/23214/why-does-my-mac-os-x-1...
> As of iOS 15, Apple even allows virtualization on iOS (to run Linux inside an iPad app, for example, which some developers have demoed), indicating the XNU hypervisor is capable on mobile as well, though subject to entitlement.
Apple definitely does not allow this; in fact the hypervisor code has been removed from the kernel as of late.
The MMU era used separate memory spaces to enforce security, but it's probably safer in the long run to actually have secure areas instead of "accidentally secure areas" that aren't that secure.
In the case of XNU and Darwin, a lot of the sources are also blog posts from reverse engineering efforts by security researchers and jailbreaking communities so it blurs the lines.
I have a hunch we'll get one more MacOS version with Intel support since they were still making Mac Minis and Pros with Intel chips in the first half of 2023.
Even Google on Android and ChromeOS is not exposing Rust to userspace; Java, Kotlin, C, C++, JavaScript, and TypeScript remain the official userspace languages.
Swift is not a replacement for C and C++, but rather for Objective-C.
Do you have any sources for those claims?
Swift is not a replacement for anything; Apple will even say as much. It just fills the hole for a scripting language that they had for decades. Plenty of new Obj-C is still (and will continue to be) written for a long time.
> Swift was designed from the outset to be safer than C-based languages, and eliminates entire classes of unsafe code.
-- https://www.swift.org/about/
> Swift is a successor to the C, C++, and Objective-C languages. It includes low-level primitives such as types, flow control, and operators. It also provides object-oriented features such as classes, protocols, and generics.
-- https://developer.apple.com/swift/
"Introducing a Memory-Safe Successor Language in Large C++ Code Bases"
https://www.youtube.com/watch?v=lgivCGdmFrw
"So we feel pretty strongly, obviously at Apple our sucessor language is Swift and I am here to talk about features of Swift, both to try to sell you on to it, but also to talk about the things we think are pretty necessary and the ways in which a programming language can support, you know, clear code, safer and more correct code."
"Like I said before, Apple has always intended for Swift to be a sucessor language for all of our predecessors, from the top to the bottom of our stack, accessible to novices, powerful enough for experts, it is real a tall order.
From https://youtu.be/lgivCGdmFrw?t=1996 to https://youtu.be/lgivCGdmFrw?t=2042
"Swift as C++ Successor in FoundationDB"
But isn't Swift lower-performing than C, C++ or Rust?
The Embedded Swift project also started as a means to replace Safe C use cases at Apple, like the iBoot firmware.
https://support.apple.com/en-jo/guide/security/sec30d8d9ec1/...
https://github.com/swiftlang/swift-evolution/blob/main/visio...
Once upon a time, C compilers were quite lousy versus handwritten 8 and 16 bit Assembly code.
I've been experimenting with similar approaches for documentation in open source projects, using knowledge graphs to link concepts and architectural decisions. The biggest challenge is always keeping documentation synchronized with evolving code.
Has anyone found effective tools for maintaining this synchronization between documented architecture and implemented code? Large projects like Darwin must have established processes for this.
Yes, it's called structure, discipline, and iterative improvement.
Keep the documentation alongside the code. Think in BSD terms: the OS is delivered as a whole; if I modify /bin/ls to support a new flag, then I update the ls.1 man page accordingly, preferably in the same commit/PR.
The man pages are a good reference if you already have familiarity with the employed concepts, so it's good to have an intro/overview document that walks you through those basics. This core design rarely sees radical changes; it tends to evolve - so adapt the overview as you make strategic decisions. The best benchmark is always a new hire. Find out what it is that they didn't understand, and task them with improving the document.
Managing management?
Code comments and documentation make no money, only features make money.
Bitter experience...
Software development is just one piece of the bigger picture that your manager is concerned with, in the same way that adding features is just one among your many responsibilities.
Managers understand risk. Communicate the risk of disregarding bugfixes, documentation, and technical debt, even if it takes a lot of handholding. Expect no less from your manager: if they can communicate client expectations, budget constraints, and most importantly long-term strategy, together you may be able to devise a better plan to address your own concerns.
In other words, empathy can go both ways.
And yeah, there are bad managers, just like there are bad coders. Sometimes it's an interpersonal problem. It's called life.