1. Fuchsia is a general-purpose operating system built to support Google’s consumer hardware.
2. It’s not designed to compete with or replace Android. Its goal is to replace Linux, which Android is built on. One big challenge it addresses is Linux’s driver problem. If Fuchsia succeeds, Android apps could run on it.
Fuchsia isn’t trying to replace Android. Its survival for over a decade—through layoffs and with hundreds still working on it—says a lot.
I can’t predict Fuchsia’s future, but it’s already running on millions of devices. The git logs show big strides in running Linux programs on Fuchsia without recompilation, as well as progress on the Android runtime. The best way to predict Fuchsia’s future is to look at its hardware architecture support and which runtime is getting attention.
Fuchsia’s success will likely depend more on market forces than on technical innovation. Linux is “good enough” for most needs, and its issues may not justify switching. The choice between sticking with Linux or moving to Fuchsia often favors Linux.
Still, I hope Fuchsia succeeds.
My understanding from them was, as much as I can remember it now, something like:
1. That yes, Fuchsia was originally intended, by at least some in senior leadership on the team, to replace both Android and ChromeOS. This is why Fuchsia had a mobile shell (or two?) at one point.
2. The Android team wasn't necessarily on board with this. They took a lot of ideas from Fuchsia and incorporated them into Android instead.
3. When Platforms were consolidated under Hiroshi it brought the Android and Fuchsia teams closer together in a way that didn't look great for Fuchsia. Hiroshi had already been in charge of Android and was presumed to favor it. People were worried that Hiroshi was going to kill Fuchsia.
4. Fuchsia pivoted to Nest devices, and a story of replacing just the kernel of Android, to reduce the conflict with the Android team.
4a. The Android team was correct on point (2) because it's either completely infeasible or completely dumb for Google to launch a separate competitor to Android, with a new ecosystem, starting from scratch.
To work around the ecosystem problem, originally Android apps were going to be run in a Linux VM, but that was bad for battery and performance. Starnix was started to show that Fuchsia could run Linux binaries in a Fuchsia component.
5. Android and ChromeOS are finally merging, and this _might_ mean that Android gets some of the auto-update ability of ChromeOS? Does that make the lower layer more suitable for Nest devices and push Fuchsia out there too?
Again, I was pretty removed from the project, but it seemed too simplistic to say that Fuchsia either was never intended to replace Android or was always intended to replace Android. It changed over time and across management structures.
I think the best way to look at it is like any software: there's Fuchsia The Artifact (thing that is made) and Fuchsia The Product (how thing is used, and how widely). I don't know anything about operating systems, but my understanding is that the engineers are very happy with Fuchsia The Artifact. Fuchsia The Product has had some wandering in the wilderness years.
Fuchsia's underlying goal is to be a great platform for computing. This is distilled, in its current incarnation, into a short tagline on fuchsia.dev: simple, secure, updatable, performant.
The details of how, when, and where Fuchsia might fit or get exercised are nuanced, and far more often about other factors than the ones that make great stories. Maybe some of the good stories will be told one day, but that'll need someone from the team to finish a book and take it through the Google process to publish :D
In the meantime, here's Chris's interview: https://9to5google.com/2022/08/30/fuchsia-director-interview...
Main difference from Linux: a stable driver API. Vendors could ship their binary blobs and support them more easily, without open-sourcing them the way Linux demands.
They are both significant for some companies in deciding whether to support Linux, but they are independent from each other. If Linux ever had a stable driver API, the GPL requirement would still cause some vendors to not provide drivers for Linux. If Linux dropped the GPL requirement (such as it is), but did not promise a stable driver API, some vendors would still choose not to provide drivers for Linux.
Updatable: as mentioned above, the package system maintains a content-hash guarantee. It reduces the cost of isolation and versioning by sharing artifacts between versions where they are unchanged (a result of content addressing; a toy sketch of the idea follows below). The isolation properties of the design eliminate DLL hell. IPC interfaces in the core system provide stable APIs and are designed and reviewed with care to manage the evolution story and enable component interchange, since there is no invasive, unmanaged coupling between components. Many components have been replaced with v2+ versions over the years with little to no fuss. The preferred deployment design follows an A/B update strategy, making updates extremely safe.
Performant: most components in the system are medium-sized in terms of responsibility compared to common alternatives or common extremes. This optimizes for low latency through data locality and computation for related tasks, while maintaining a lot of separation between major components. While IPC has a latency cost, a lot of user-facing performance amortizes well due to the implicit multiprocessing that can result. All core components are written in fast systems languages. There's always more to do on performance with a nascent system, but the architecture is set up in a generally good place for modern computing topologies, and some of the implementations are decently strong for their age.
Simple: as it is a whole operating system with its own flavors of things, it may seem like there is a lot to learn, which sounds complex. But many of the independent components and layers have simple interface boundaries with fairly clear constraints, which simplifies understanding the system as a whole. Similarly, the appropriately medium sizing of the kernel, the components, and the interfaces, with a generally "good taste" in sizing, keeps the sheer number of things to track at a manageable level. It's quite easy to build associative memory for the system architecture and the components. Years on from when I worked on it I still have good recall for a ton of the system - hell, I have a better memory for parts of Fuchsia than for the code bases I've worked on every day since I left.
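To make that content-addressing point concrete, here is a toy sketch of the idea (illustration only: Fuchsia's real package system content-addresses blobs by Merkle root in blobfs, and the stand-in hash below is not cryptographic). A package version is just a list of blob hashes, so a blob that is unchanged between two versions is stored once, and an update only fetches the blobs whose hashes changed.

```c
/* Toy content-addressed "package" model -- illustration only. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* 64-bit FNV-1a, standing in for a real cryptographic content hash. */
static uint64_t content_hash(const char *data, size_t len) {
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= (uint8_t)data[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

int main(void) {
    const char *libfoo = "libfoo contents";   /* unchanged between releases */
    const char *app_v1 = "app binary v1";
    const char *app_v2 = "app binary v2";

    /* A "package version" is just the set of blob hashes it references. */
    uint64_t pkg_v1[] = { content_hash(libfoo, strlen(libfoo)),
                          content_hash(app_v1, strlen(app_v1)) };
    uint64_t pkg_v2[] = { content_hash(libfoo, strlen(libfoo)),
                          content_hash(app_v2, strlen(app_v2)) };

    /* Same hash in both versions -> stored once, not re-downloaded. */
    printf("libfoo shared between v1 and v2: %s\n",
           pkg_v1[0] == pkg_v2[0] ? "yes" : "no");
    printf("app blob changed, must fetch:    %s\n",
           pkg_v1[1] != pkg_v2[1] ? "yes" : "no");
    return 0;
}
```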
This is like a textbook example of weak leadership of an executive team.
The power jockeying of a fiefdom’s chieftain (power reduction mitigation in this case) is allowed to drive the organizational structure and product strategy.
This is an example of technically incompetent, clueless leadership. That's why MBAs should not be in control at technical companies. They have ruined a product running on a billion devices by replacing a rock-solid kernel with an immature experimental project. And so on...
There’s the product side, and there’s the technical side. I’m not sure what qualifies as “senior leadership,” but when a VP finally stepped into the org, all the ambitious product ideas were cut, and we landed on Nest as a product. Until then, I believe it was mostly courtship and sales pitches (I wasn’t in the room).
Apologies if this comes across as harsh, but Fuchsia always seemed to suffer from poor product judgment. I think it’s only healthy to acknowledge that. That said, it’s not entirely surprising—good product people don’t tend to stay very long at Google.
Yes, there was a mobile shell briefly. There was also a desktop shell. I think the narrative you’re describing could be better framed as a series of attempts and experiments to see if Fuchsia could find a use case and client. That doesn’t necessarily mean there was a concrete plan that was seriously contended or scrutinized.
Just to clarify, these are my personal opinions and observations. I don’t actually know what went on behind closed doors, but I’m not sure it’s all that useful to know either, for the reasons I mentioned above.
The word meant is doing a lot of heavy lifting here. Meant - by who? The technology itself doesn’t want anything.
Do some people want to use wasm instead of JavaScript for websites? Yes. Will JS ever be removed from web browsers? Probably not, no. Wasm isn’t a grand design with a destiny it’s “meant to” reach. It’s actually just some code written by a bunch of people trying to solve a bunch of disparate problems. How well wasm solves any particular problem depends on the desires and skills of the people in the room, pushing the technology forward.
It’s kind of like that for everything. Rust was never meant to be a high-performance systems language by its original creator. But the people in the room pushed it in that direction. Fuchsia could replace Linux in Android. I’m sure some people want that to happen, and some people don’t. There’s no manifest destiny. What actually happens depends on a lot of arguing in meeting rooms somewhere. How that turns out is anyone’s guess!
The people who created the project and who are writing the code, obviously. This is clear from the context; you don't need to nitpick stuff like this.
> Wasm isn’t a grand design with a destiny it’s “meant to” reach.
Yes it is. The destiny is being able to create dynamic websites with languages other than Javascript. The first step was Asm.js which allowed compiling other languages to Javascript. Then we got WASM which compiles them to a binary format instead. But you still need some Javascript glue to interact with the DOM APIs. And now there are extensions in progress that will remove that requirement (GC, reference types etc).
> Rust was never meant to be a high performance systems language by its original creator.
Yeah citation needed. The very first compiler release already described it as "a strongly-typed systems programming language with a focus on memory safety and concurrency."
https://web.archive.org/web/20130728230358/https://mail.mozi...
Even before that the website described it as "a programming language for low-level, safe code."
https://web.archive.org/web/20110924054534/http://www.rust-l...
> The people who created the project and who are writing the code, obviously. This is clear from the context; you don't need to nitpick stuff like this.
This isn't a nitpick, it's an important point.
Even on a small project, but especially at a huge company, there will be different ambitions and motivations for doing things.
ie, one person wants Fuchsia to eventually take over all of Google's OSes, another wants a secure IoT OS, another just wants a cool research project to pursue ideas about OSes they've had since grad school, etc...
ActiveState had plugins to run Python, Tcl, and Perl in the browser, for example.
This distinction really seems to matter to some people. I suppose there’s something tribal about it. Is Rust here to destroy C++? Rust gets a lot of irrational hate in the C++ community, and I think this perception is the reason. Is Fuchsia here to destroy Android? To some, this will be a very emotionally important question.
> Yeah citation needed. The very first compiler release already described it as "a strongly-typed systems programming language (…)”
This is the article I’m thinking about, titled “The rust I wanted had no future”. Well worth a read: https://graydon2.dreamwidth.org/307291.html
> Performance: A lot of people in the Rust community think "zero cost abstraction" is a core promise of the language. I would never have pitched this and still, personally, don't think it's good.
Even in the form of Fuchsia -> Android -> ChromeOS, which, publicly at least, is how those layers actually seem to be converging somewhat.
> One big challenge it addresses is Linux’s driver problem
Android devices have been plagued by vendors having out-of-tree device drivers that compile for Linux 3.x but not 4.x or 5.x, so the phone is unable to update to a new major Android version with a new Linux kernel.
A microkernel with a clearly defined device driver API would mean that Google could update the kernel and Android version, while continuing to let old device drivers work without updates.
That's consistently been one of the motivating factors cited, and Linux's monolithic design, where the internal driver API has never been anything close to stable, will not solve that problem.
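For illustration, the "clearly defined device driver API" idea boils down to a frozen, versioned table of entry points that the OS promises not to break. A hypothetical sketch of that general pattern (not Fuchsia's actual driver framework, and not Windows' either):

```c
/* Hypothetical sketch of a stable, versioned driver ABI -- the general idea
 * only. The kernel/driver host calls drivers solely through this table, and
 * the table is only ever appended to, never reordered or changed, so a
 * binary driver built against v1 keeps loading on later releases. */
#include <stddef.h>
#include <stdint.h>

#define DRIVER_ABI_VERSION 1u

struct driver_ops {
    uint32_t abi_version;            /* ABI the driver was built against */
    int  (*bind)(void *device);      /* attach to a matched device       */
    int  (*read)(void *device, void *buf, size_t len);
    int  (*write)(void *device, const void *buf, size_t len);
    void (*release)(void *device);   /* detach and free resources        */
    /* v2+ entry points get appended below; the loader checks abi_version
     * before touching any of them, so older drivers keep working.        */
};

/* Loader-side check: refuse drivers built against a newer ABI than we know. */
int load_driver(const struct driver_ops *ops) {
    if (ops == NULL || ops->abi_version > DRIVER_ABI_VERSION || ops->bind == NULL)
        return -1;
    return 0;
}
```

The whole compatibility promise lives in that one table and version check; Linux deliberately makes no such promise for in-kernel interfaces, which is the gap being described here.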
A monolithic kernel with a clearly defined device driver API would do the same thing. Linux is explicitly not that, of course. Maintaining backwards-compatibility in an API is a non-trivial amount of work regardless of whether the boundary is a network connection, IPC, or function call.
Maybe, but I doubt it. History has shown pretty clearly that driver authors will write code that takes advantage of its privilege state in a monolithic kernel to bypass the constraints of the driver API. Companies will do this to kludge around the GPL, to make their Linux driver look more like the Windows driver, because they were lazy and it was easier than doing it right, and for any number of other reasons. The results include the drivers failing if you look at the rest of the system funny and making the entire system wildly insecure.
If you want a driver that isn't subject to competent code review to abide by the terms of the box in which it lives, then the system needs to strictly enforce the box. Relying on a header file with limited contents will not do the job.
Well, your job is shipping the driver. If the API is limited, and/or your existing drivers on Windows or other OSes do something the Linux driver doesn't, then you have a problem.
Linux kernel pros: it evolves organically
Linux kernel cons: it evolves organically
If you are running some generic PC hardware to write this comment, chances are that at least one tiny part somewhere depends on some specific obscure timing to come up properly, and it just happens to work because someone inserted a small delay somewhere.
The same printer works with Linux the same way it always has. But that's not Windows' fault or to Linux's credit; it's just the result of a really crappy vendor. The Linux API is actually more stable than people give it credit for, it's just that the stability lives in the C headers only. Anything compiled for one kernel generally won't work anywhere else, and that's not what the crappy vendors want.
In the embedded space, including most mobile phone vendors which seemed to be an important use case for Fuchsia, even reputable vendors will generally give you an image of an operating system heavily modified to work with their hardware. That's their "driver". Imagine buying a new PC and receiving a DVD with a Windows modified to work on that hardware only. Of course you can't upgrade or even patch security issues beyond what the vendor will give you! You're supposed to buy new hardware. Sure, you could extract the drivers and try to install them on a vanilla Windows, and that's exactly what projects like LineageOS do, but most users won't bother.
That's the situation with phones. It's not at all clear why Fuchsia thinks they could solve this. It's a cultural and an economical problem and can't be fixed with software alone. Why would phone manufacturer care about your microkernel architecture? They will just patch the whole operating system, binaries and everything, until it boots enough to start the GUI and ship that. Just like they always have.
The only thing that could improve this situation is by enforcing the GPL, or have similar contractual stipulations like only being allowed to ship a reference implementation unmodified, but Google shows no interest in doing that. They care about getting Android on as many devices as possible with no regard to their respective quality or product longevity.
https://en.wikipedia.org/wiki/Windows_Driver_Model
https://en.wikipedia.org/wiki/Windows_Display_Driver_Model
Naturally, as you point out, it can do nothing against forced obsolescence if the OEM explicitly tied the driver to check the Windows version.
However, if you discover that the box was insufficient at any point, you have to choose between changing the box (and breaking some perfectly good drivers), or leaving the insufficient box in place. API versioning can let you delay this decision to reduce pain, but it will happen at some point.
FWIW, I'm hugely in favor of microkernels, but they are a lead bullet (which we need lots of), not a silver bullet for these sorts of problems.
What's preventing adding a driver compatibility layer over the unstable API? I would accept some performance cost, and NOPs for new features, if a shim let existing legacy binary drivers keep working, compared to being forced to junk functioning hardware.
Android using the absolutely most head or tip version of the Linux kernel sounds like a QA nightmare of its own.
A mobile SoC has to have everything needed to start up the phone, as there is no BIOS-like system that the drivers can work through. Maybe this is a problem that could be solved, but it hasn't been yet.
The same happens in PC land with laptops: you seldom get drivers from Microsoft for laptop-specific components, those come from the OEM, and you get what you get.
For example, https://download.lenovo.com/eol/index.html
(Historically, that's one big reason that there's lots of Android phones that get a fork of whatever release was current some months before they shipped, and never get substantial updates.)
So many GPL violations in the Android world currently
1. Control. It's pretty awkward if your main product depends on an open source community who might say "no" (or "fuck off you worthless imbecile") to half the things you want. You'll end up with a fork (they did!) which has serious downsides.
2. Stable driver ABI.
3. Modern security design. A microkernel, and Rust is used extensively.
https://siliconsignals.io/blog/implementing-custom-hardware-...
That's why you used to not touch vendor partition when flashing a custom ROM etc..
Microkernels provide nice secure API boundaries and optimizations to reduce performance impact when crossing them on modern CPUs.
The monolithic design forces you to stay in either user or kernel mode as much as possible to not lose performance. Adding the API and ABI incompatibility makes it near impossible to maintain.
It will require a hard fork of Linux, which won't be Linux anymore. Monolithic design is the artifact of low-register-count CPUs of the past. If you are going to create a hard fork of a kernel, why not use a more modern design anyway?
Hell, a while back one of the kernel devs was actively screaming for hardware manufacturers to reach out to them so folks could work with the manufacturer to get drivers for their products into mainline. There was even a website and nice instructions on what to do and who to contact... but I'll be fucked if I can find it anymore.
There's nothing nefarious going on... it's explicitly stated (and well-known) that the stable interface to Linux is the userspace interface. Inside the kernel, things are subject to change at any time. Don't want to have to work to keep up? Get your driver into mainline!
Maybe one could run a Fuchsia-like thing inside Linux and use Linux to provide the Linux userland ABI, but that might be challenging to maintain.
Says a lot about managing to cling onto another product as a dependency to save the team from cancelation. Gotta thank the Directors for playing politics well. (Dart also played that game.)
Does not say much about necessity. Won't be surprised if it gets DOGE'd away at some point.
Having tea leaves instead of a public strategy and roadmap is what's causing the FUD in the first place. Google probably has good reasons for not making any promises but that hedging comes with a cost.
Feels like a quirk that some of its originators are open source hackers that ended up with Fuchsia being published externally at all. Google definitely doesn't want to attract more killedbygoogle headlines for its experimental projects, and I haven't seen any public Fuchsia evangelization.
If your target platforms are your own smart displays and maybe replacing the Linux kernel in a stack that already doesn't use the Linux userspace, why would you want to spend effort supporting third parties while you're still working on fundamentals?
https://android.googlesource.com/platform/bionic/ (cf. "What's in libc") / https://github.com/GrapheneOS/platform_bionic/tree/15/docs
When I was doing performance work on the platform one of the notable things was how slow some of the message passing was, but how little that mattered because of how many active components there are computing concurrently and across parallel compute units. It'd still show up where latency mattered, but there are a ton of workloads where you also basically hide or simply aren't worried about latency increases on that scale.
A counter case though, as an example, is building the system using a traditional C-style build system that basically spams stat(2) at mhz or these days ghz speeds. That's basically a pathological case for message passing at the filesystem layer, and it's a good example of why few microkernels which aimed at self-hosting made it over the line. It's probably possible to "fix" using modern techniques, but it's much easier to fix by adjusting how the compilation process works - a change that has major efficiency advantages even on monolithic kernels. Alas, the world moves slow on these axes, no matter how much we'd rather see everything move all at once!
Edit, to explain: in ios, everything revolves around mach ports, which are capabilities.
https://docs.darlinghq.org/internals/macos-specifics/mach-po...
At one level, it proves the model. The shame is that Mach otherwise has kind of not taken off. Gnu the OS was going to be Mach at the core, at one point IIRC
[1] https://source.android.com/docs/security/app-sandbox#protect...
[2] https://www.csoonline.com/article/3811322/iphone-users-targe...
[1] https://www.ise.io/wp-content/uploads/2017/07/apple-sandbox....
All app authors must comply with the rigorous App Store guidelines so the technical limitations on what they can do don't really matter anyway.
Add sideloading as an option and we'll get to see how secure iOS really is.
GrapheneOS is definitely secure enough to tolerate sideloading, but I don't know if iOS is or not.
Also, I would be interested to see a comparison to the wasm component model as it also seems to want to do the same things docker containers do.
[0] https://www.theverge.com/2021/8/18/22630245/google-fuchsia-o...
It has been under heavy heavy development for many years now.
The fact that they are now starting to talk about it publicly is probably a sign that they are looking to move beyond just IoT in the future.
For example, I know it’s coming to Android (not necessarily as a replacement but as a VM), and I know there are some plans around consolidating ChromeOS and Android as well. I expect that is another place we might see it before too long.
I know they are also working on a full Linux compatibility layer called Starnix [1], where the goal, AFAIK, is that you can just run all your Linux workloads on Fuchsia without any changes. You can probably extrapolate from there that the end state is roughly: anywhere Linux runs currently is a good potential fit for Fuchsia, and it will come with a lot of additional security guarantees on top that are going to make it particularly attractive.
[1] https://fuchsia.dev/fuchsia-src/concepts/components/v2/starn...
If this were something that they were planning on using mainly for internal stuff, like for some sort of competitive advantage in data centers or something, I could understand the radio silence on future plans, but it's hard for me to imagine that's their main purpose when they're publicly putting it on stuff like the Nest Hub and Chromebooks (they didn't sell any with it afaik, but they published a guide for putting it on them). It really feels like they just don't know exactly what to do with it, and they're trying to figure that out as well.
As for ChromeOS and Android, those already feel like a pretty good example of them not having a super clear initial product strategy for how they overlap (and more important, how they _don't_), so while having some sort of consolidation would make sense, it's not clear to me how Fuchsia would help with that rather than just make things even murkier if they start pushing it more. I'd expect that consolidating them would start with the lower-level components rather than the UI, and my understanding is that Fuchsia (as opposed to Zircon, which is the kernel) has quite a lot of UI-related stuff in it, specifically with Flutter.
I'm not saying you're wrong, since it sounds like you might have more relevant knowledge than me, but I can't help but wonder how much of this has really been planned in the long term rather than just been played by ear by those with decision-making power.
Fuchsia is not itself a consumer product, it's an open source project meant to be used to build a product. There is no application runtime for app developers to care about or UI for an end user to see. It would be strange to talk about things like mesa or the Linux kernel the way you are talking about fuchsia. There are software layers it does need to integrate with, but unless you work on those things, it's not really interesting to you.
Companies don't really discuss products they build using these open source building blocks while contributing to those projects until after the product launches either. It shouldn't really matter where and how it gets used to the end consumer, only that when it is used there are tangible benefits (more stable, less security problems, etc). I don't really understand why folks are so keen to understand what internal plans for using it may or may not be.
The difference is that those aren't entirely funded and developed by a single for-profit entity that presumably expects some sort of net positive result from the effort spent on them in the future. To be clear, I don't consider anything I said in my previous comment to be a reflection of whether Fuchsia is useful or whether it has any technical merits; my commentary is intended to be entirely scoped to Google's _intent_ for Fuchsia, which is what I read the parent comment I responded to (and the part of the top-level comment it addressed) to be discussing.
It's certainly possible that you're correct that Google has had plans for Fuchsia this whole time and didn't discuss them because they didn't think it was relevant, but I guess I just don't find that convincing enough to change my mind about what I perceive to be going on.
> It shouldn't really matter where and how it gets used to the end consumer, only that when it is used there are tangible benefits (more stable, less security problems, etc). I don't really understand why folks are so keen to understand what internal plans for using it may or may not be.
This is probably just a matter of differing personalities. I don't think I have any explanation for why I'm curious about the topic, but I just am. I think you could make any number of similar statements about not understanding why people find certain things interesting (sports, video games, celebrity gossip, etc.), and you wouldn't be wrong or right; in my experience, it's not a personal choice to decide to find something interesting or not.
You're reading too much into a conference presentation.
The team has been allowed to make conference presentations for many years, it's just that most folks haven't wanted to put in the personal effort. A few have in the past, one I know of was Petr: https://www.youtube.com/watch?v=DYaqzEbU0Vk
I would bet, very, very, many dollars it is not coming to Android in any form, Starnix isn't coming soon if ever, and they're not looking to move beyond IoT. Long story short, it shipped on the Nest Hub, didn't get a great rep, and Nest Hubs haven't been touched in years because they're not exactly a profit center.
Meanwhile, observe Pixel Tablet release in smart display factor, Chrome OS being merged with Android, and the software-minded VP who championed the need for the project, moving on, replaced by the hardware VP.
When you mash all that together, what you get is: the future is Android. Or, there is no future. Depending on how you look at it.
(disclaimer: former Googler on Pixel team, all derivable from open source info. I wish it wasn't the case, but it is :/ https://arstechnica.com/gadgets/2023/01/big-layoffs-at-googl... https://9to5google.com/2023/07/25/google-abandons-assistant-... https://9to5google.com/2024/11/18/chrome-os-migrating-androi..., note 7d views on starnix bugs, all 1 or 0, with the exception of a 7 and 4 https://issues.fuchsia.dev/issues?q=status:open%20componenti...)
* had to give it up? TL;DR: A key part of my workflow was being able to remote desktop into a Linux tower for heavy builds. Probably could have made it work anyway, obviously you wouldn't try building Android on a laptop, but a consumer app would be fine. I left to try and pick up some of the work I saw a lot of smart people do towards something better. And monetizing that in the short-term requires supporting iOS/macOS, which only compile on Mac
* My knowledge is a couple years old at this point and I haven't kept up with recent developments so maybe the future is brighter than I think.
But the historical perspective is that Starnix is a relatively recent addition to Fuchsia. Even though Fuchsia is roughly 10 years old now, Starnix has only been useful for about 2 years (RFC 4 years ago)
Before Starnix came along to help run Linux apps, as you said, “There isn't really much software that has been ported to run on fuchsia natively”. Because POSIX Lite wasn’t / isn’t being used much. So I guessed the OP could have been thinking about that. But who knows.
What do you think the answers to those are?
https://github.com/vsrinivas/fuchsia/blob/main/LICENSE
OpenWRT was born because companies were forced to give the source code back to users.
Linux folk are familiar with working with file descriptors--one just writes to stdout and leaves it to the caller to decide where that actually goes--so that was the example used but it seems like this sort of thing is done with other resources too.
It looks like a design that limits the ways programs can be surprising because they're not capable of doing anything that they weren't explicitly asked to do. Like, (I'm extrapolating here) they couldn't phone home all sneaky like because the only way for them to be able to do that is for the caller to hand them a phone.
It's got strong "dependency injection" vibes. I like it.
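A minimal POSIX-flavored sketch of that "hand the program a handle, not ambient authority" idea (plain file descriptors here for familiarity, not Fuchsia's actual handle and capability types):

```c
/* The callee can only write to whatever descriptor it was given; it has no
 * way to open files or sockets of its own choosing. The caller decides what
 * the handle actually refers to. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static void report(int out_fd, const char *msg) {
    (void)write(out_fd, msg, strlen(msg));
}

int main(void) {
    report(STDOUT_FILENO, "caller pointed me at the terminal\n");

    int log_fd = open("report.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (log_fd >= 0) {
        report(log_fd, "caller pointed me at a log file I never named\n");
        close(log_fd);
    }
    return 0;
}
```

Fuchsia generalizes this pattern: access to files, networking, and services arrives as handles granted by the parent component rather than being ambiently available.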
The main benefit is that kernel space is drastically smaller, which means the opportunity for a kernel-level exploit is minimal versus something like the Linux kernel, where a single device driver exploit compromises your entire machine.
You don't need to give a process/component the “unrestricted network access capability” -- you could give it a capability to eg “have https access to this (sub)domain only” where the process wouldn't be able to change stuff like SSL certificates.
EDIT: and to be clear, fuchsia implements capabilities very well. Like, apart from low-level stuff, all capabilities are created by normal processes/components. So all sorts of fine-grained accesses can be created without touching the kernel. Note that in fuchsia a process that creates/provides a capability has no control on where/to who that capability will be available -- that's up to the system configuration to decide.
Also imagine you are trying to run a browser. It’s implicitly going to be able to perform arbitrary network access, and there’s no way you can restrict it from phoning home aside from trying to play whack-a-mole blocking access to specific subdomains you think are its phone-home servers.
That’s why I said “semantic” capabilities aren’t a thing and I’m not aware of anyone who’s managed to propose a workable system.
Of course you can!
With capabilities you can tell a program: "if you want to communicate with the external world here's the only function you can use :
`void postToMySubDomainSlashWhatever(char* payload, size_t size)`
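Sketching that out a bit (hypothetical; the handle type, channel, and function name below are made up for illustration, not a real Fuchsia API): the sandboxed code only ever receives an already-connected channel, so the endpoint, TLS configuration, DNS, and proxies all live on the granting side and can't be changed from inside.

```c
/* Hypothetical "your whole network is this one function" capability. */
#include <stddef.h>
#include <unistd.h>

typedef struct {
    int channel;  /* already-connected IPC channel handed in by the granter */
} post_capability_t;

/* The single operation the capability allows: ship a payload. There is
 * deliberately no way to name a host, port, URL, or certificate here. */
ssize_t post_to_my_subdomain(post_capability_t cap,
                             const char *payload, size_t size) {
    return write(cap.channel, payload, size);
}
```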
But don’t lose the perspective on the benefits of such an architecture. Considering the networking access example:
* If your process gets compromised it won’t be able to access the attacker’s C&C server. It wouldn’t have access to any other stuff that the process didn’t already have for that matter.
* You wouldn’t be able to use http. It would be https only.
* Your process wouldn’t need a lib to talk HTTP. It would just need to talk the IPC protocol (whose wire-format and related details are standardized in Fuchsia which allows for the binding code for (de)serialization to be auto-generated).
* You wouldn’t be able (for better or worse) to mess with SSL certificates, proxies, DNS resolution, etc.
Consider another example -- file access. Say your app wants to access a photo. It doesn’t have access to the filesystem nor to the user’s folders -- it only has access to, say, an “app framework services” capability (e.g. services from a UI-capable OS like Android), one of whose “sub-capabilities” is requesting a photo. When your app makes that request, the ‘system’ opens a file selection GUI for the user to pick a photo. Note that the photo picker GUI is running in a different process, and your app doesn’t know and can’t access anything about it. All that matters is that your app will receive an opened file handle in the end. The opened file handle is a capability as well. The file handle would be read-only and wouldn't need to actually exist in any file system. In this example, before handing the file descriptor to your app, the “system” (or whatever process is implementing the ‘photo-picking’ capability) could process the image to remove metadata, blur faces, offer the user to edit the image (and maybe actually save it to a persistent store), log that access for reviewing later, etc.
(We already have something kinda similar in Android, but the implementation is not from first principles, so it’s very complex and prone to issues (requires an obviously non-POSIX Android userspace using lots of security features from the Linux kernel to sort of implement a microkernel/services like architecture)).
EDIT: adding the detail that Fuchsia's IPC lang ecosystem has autogen features due to its standardization.
All I said was that capabilities don’t solve the spyware problem, and they largely don’t. They help you write software that itself can’t be hijacked into uncontrolled spyware by a compromise, but if I am selling you software with “malware” bundled, you’re going to have a hard time isolating the functional and “malware” bits (malware here being defined as software acting against the user's wishes and intents).
You’ve extolled the benefits of it and they’re great and I think I largely agree with all of that, but it’s completely irrelevant to my initial point that it’s not a silver bullet for the vendor intentionally bundling malware into the code they’re distributing.
For many people it's just extra friction in search of a use case.
Maybe once those harms are all grown up, we'll find that fancier handcuffs for our software is worth a bit more than "just extra friction."
Sure, a web browser that needs to open arbitrary network connections can be built to phone home. But nearly none of the components it’s built out of can. The image decoding and rendering libraries can’t touch the network, the rendering engine can’t touch the network, and nor can the dozens of other subcomponents it needs to work.
Your installed editor extensions can’t phone home even if the editor itself can. Or perhaps even the editor itself wouldn’t be able to, if extensions are installed out of band.
Your graphics driver vendor can’t phone home, your terminal can’t phone home, and on and on and on.
A solution doesn’t have to be perfect for it to be an improvement, so stop acting like it does.
Anyway, you’ve just proven my point with “install extensions out of band” - you’ve ceded that it’s a losing position technically and are arguing for alternative UX solutions. I’m not pretending it has to be perfect. Like I said, capabilities are great for creating a secure OS and writing more secure software more generally. But the threat model it’s protecting against is not software that phones home but against the size of the exploit opened up from a compromise.
Think about it this way, Android apps and iOS apps are largely sandboxed through a primitive capabilities system already, not super fine-grained capabilities but still the same concept. Would you care to claim that privacy and malware isn’t a problem on these systems or that the permissions model has meaningfully curtailed anything but the most egregious of problems?
Secondly, the editor also does it this way for reasons other than support within the OS, because even with components it would need to design a capabilities model for extensions and a sandbox process to maintain the permissions - it’s much easier to just do the extensions in-process and not think about it.
The difference is in that every single one of those other operating systems, applications just have network access. By default. No capability needed. This would not be the case in an OS centered around capabilities.
Multi-platform software develops integration with local OS APIs all the time.
Like I said, your thinking is way too black and white. Your inability to see a different world doesn’t make one impossible to exist. What is even the point of thinking this way? Your entire mindset boils down to “nothing can ever be better”.
I’d ask you refrain from personal attacks. That isn’t a fair characterization of what I said. All I said is that capabilities fundamentally doesn’t solve the phone home problem in many real world cases. I also highlighted that there are very real economic forces that must be accounted for in terms of understanding why software is architected the way it is. And no, there’s very few applications that have a completely different architecture per major platform which is what we’re talking about with capabilities. That’s very different from abstracting some platform-specific APIs here and there.
* Capability-centric design
* Single machine scope
* Tree of sandboxes
* Weaker inter-sandbox fault tolerance
* Standardized IPC system
* Model powers low-level OS features
* More detailed inputs/outputs from sandbox
* Configuration and building in separate files
* Sandboxes can encapsulate other sandboxes
If it’s anywhere close Google might be sat on a huge opportunity to tread the same ground while solving the ergonomic issues that NixOS has. (I’ve never been more happy with a distro, but I’ll admit it took me months to crack)
They are working on some components/layers to run things from Linux, but you would not expect everything to work directly, or as well as things designed from the get-go with Fuchsia in mind.
I meant a little more in the way that software is packaged and run. My understanding is that there's a similar mechanism for storing and linking shared libraries, which means multiple versions can coexist and be independently linked depending on the requirements of the calling package.
Some have suggested Fuchsia was never intended to replace Android. That's either a much later pivot (after I left Google) or it's historical revisionism. It absolutely was intended to replace Android and a bunch of ex-Android people were involved with it from the start. The basic premise was:
1. Linux's driver situation for Android is fundamentally broken and (in the opinion of the Fuchsia team) cannot be fixed. Windows, for example, spent a lot of time on this issue to isolate issues within drivers to avoid kernel panics. Also, Microsoft created a relatively stable ABI for drivers. Linux doesn't do that. The process of upstreaming drivers is tedious and (IIRC) it often doesn't happen; and
2. (Again, in the opinion of the Fuchsia team) Android needed an ecosystem reset. I think this was a little more vague and, from what I could gather, meant different things to different people. But Android has a strange architecture. Certain parts are in the AOSP but an increasing amount was in what was then called Google Play Services. IIRC, an example was an SSL library. AOSP had one. Play had one.
Fuchsia, at least at the time, pretty much moved everything (including drivers) from kernel space into user space. More broadly, Fuchsia can be viewed in a similar way to, say, Plan 9 and micro-kernel architectures as a whole. Some think this can work. Some people who are way more knowledgeable and experienced in OS design seem to be pretty vocal saying it can't because of the context switching. You can find such treatises online.
In my opinion, Fuchsia always struck me as one of those greenfield vanity projects meant to keep very senior engineers around. Put another way: it was a solution in search of a problem. You can argue the flaws in Android's architecture are real, but remember, Google doesn't control the hardware. At that time at least, it was Samsung. It probably still is. Samsung doesn't like being beholden to Google. They've tried (and failed) to create their own OS. Why would they abandon one ecosystem they don't control for another they don't control? If you can't answer that, then you shouldn't be investing billions (quite literally) into the project.
Stepping back a bit, Eric Schmidt when he was CEO seemed to hold the view that ChromeOS and Android could coexist. They could compete with one another. There was no need to "unify" them. So often, such efforts to unify different projects just lead to billions of dollars spent, years of stagnation and a product that is the lowest common denominator of the things it "unified". I personally thought it was smart not to bother but I also suspect at some point someone would because that's always what happens. Microsoft completely missed the mobile revolution by trying to unify everything under Windows OS. Apple were smart to leave iOS and MacOS separate.
The only fruit of this investment and a decade of effort by now is Nest devices. I believe they tried (and failed) to embed themselves with Chromecast
But I imagine a whole bunch of people got promoted and isn't that the real point?
The slide with all of the "1.0s" shipped by the Fuchsia team did not inspire confidence, as someone who was still regularly cleaning up the messes left by a few select members, a decade later.
I worked on the Nest HomeHub devices and the push to completely rewrite an already shipped product from web/HTML/Chromecast to Flutter/Fuchsia was one of the most insane pointless wastes of money and goodwill I've seen in my career. The fuchsia teams were allowed to grow to seemingly infinite headcount and make delivery promises they could not possibly satisfy -- miss them and then continue with new promises to miss --while the existing software stack was left to basically rot, and disrespected. Eventually they just killed the whole product line so what was the point?
It was exactly the model of how not to do large scale software development.
Fuchsia the actual software looks very cool. Too bad it was Google doing it.
like mobile, servers, desktops, tablets?
Main goal would be to replace the core of AOSP considering the main work that's being done, but it seems like Google isn't convinced it's there yet.
In terms of impact or business case, I'm missing what the end goal for the company or execs involved is. It's not re-writing user-space components of AOSP, because that's all Java or Kotlin. Maybe it's a super-longterm super-expensive effort to replace Linux underlying Android with Fuchia? Or for ChromeOS? Again, seems like a weird motivation to justify such a huge investment in both the team building it and a later migration effort to use it. But what else?
Many things that google did when I was there was simply to have a hedge, if the time/opportunity arose, against other technologies. For example they kept trying to pitch non-Intel hardware, at least partly so they could have more negotiation leverage over Intel. It's amazing how much wasted effort they have created following bad ideas.
They seemed to have unlimited headcount to go rewrite the entire world to put on display assistant devices that had already shipped and succeeded with an existing software stack that Google then refused to evolve or maintain.
Fuchsia itself and the people who started it? Pretty nifty and smart. Fuchsia the project inside Google? Fuck.
> the idea was to come up with a kernel that had a stable interface so that vendors could update their hardware more easily
Interesting... if that was a big goal, I wonder why they didn't go with some kind of adapter/meta-driver (and just maintain that) to the kernel that has a stable interface. Maybe it's not viable long-term, I guess...?
Also note that swapping the core of a widely used commercial OS like AOSP would be no easy feat. Imagine trying to convince OEMs, writing drivers practically from scratch for all the devices (based on a different paradigm), the bugs due to incompatibility, etc.
(IIUC, it's brand new?)
If your team is too large, and especially if you don't know what the use case is, it can take a very long time. You asked for general purpose and fully capable, so you're probably in this case, but I think the desired use cases for Fuchsia could be scoped to way less than general purpose and fully capable: a ChromeOS replacement needs only to run Chrome (which isn't easy, but...), and an Android replacement needs only to run Android apps (again, not easy), and the embedded devices only run applications authored by Google with probably a much smaller scope.
But it also depends on what 'from scratch' means. Will you lean on existing development tools, hosted on an existing OS? Will you borrow libraries where the scope and license are appropriate? Are you going to build your own bootloader or use an existing one?
The answer is: not much time. The real question is how long to develop good-quality drivers for a given platform (say, an x64 laptop)? How long to port/develop applications so that the OS is useful? How long to convince OEMs, app developers, and similar folks to start using your brand new OS? It's a bootstrap problem.
That would be surprising. Where do you get that? I don't mean toy OSes or experiments. Linux, MacOS and Windows are still in development and I can't imagine the number of hours invested.
> they use existing libraries and the like
Where can I find out about that? Thanks.
It's not like Fuchsia was supposed to be a "fully capable OS developed from scratch", either? I mean it's "just" the kernel and other low-level components; most of the software stack would remain the same as Android/Linux, at least for the time being.
Ok, I'll bite. If we're talking classic Macintosh OS, perhaps.[0] macOS? No way. The first Mac OS X was released in 2001, and was in development between 1997 and 2001 according to Wikipedia.[1] But the bulk of the OS already existed 1997. Mac OS X was a reskin of NeXTStep. NeXTStep was released in 1989, final release 1995, final preview 1997 (just before Apple sold out to NeXT).[2] NeXTStep was in production for quite some time before the x86 version shipped (around '95 from memory). In case you are wondering, I can assure you that NeXTStep was a very capable OS. NeXTStep was in development for a couple of years before the first hardware shipped in 1989. NeXTStep was built on top of Mach and BSD 4.3 userspace. Mach's initial release was 1985.[3]. Not sure how long the first release of Mach took to develop. You can check BSD history yourself. But I'd say, conservatively, that macOS took at least 14 years to develop.
[0] check https://folklore.org/
[1] https://en.wikipedia.org/wiki/Mac_operating_systems
If you mean the early 1980s OS, that is not comparable. It probably ran in something like 512K of memory off of a 5.25" floppy disk (or a tape?).
> It's not like Fuchsia was supposed to be a "fully capable OS developed from scratch", either? I mean it's "just" the kernel and other low-level components
I don't know the answer, but doesn't the second sentence describe Linux?
In comparison, the Lisa OS required 1MB RAM and a 5MB hard disc, hence the eye watering $10,000 introductory price.
Development on the Mac apparently started in 1979, and release in 1984 although the early Jeff Raskin era machine was quite different to the final Steve Jobs led product.
Fuchsia more likely was for all the stuff that Google kept experimenting with using Android just because it was there rather than because it was a good fit - wearables, IoT, AR/VR, Auto, etc...
Why would Android be a poor fit for those?