So it's basically a SteamOS sibling, just without Steam?
That might be their target audience.
What appeals to me about linux is the hackability and configurability. This takes it all away in some way, but that's not to say that they won't find a market for it.
Linux is wonderfully flexible, which allows people to create distros like that, among other things. Linux is also free as in freedom, which may be very important for trusting the code you run, or the code a government official runs.
I bet that past the alpha stage they will offer a configuration tool to prepare the images to your liking, and ways to lock the system down even more. Would work nicely over PXE boot.
> KDE is a huge producer of software. It's awkward for us to not have our own method of distributing it
The idea of a distribution for this specific purpose is best left in the hands of some organization with experience in this specific purpose, not KDE, whose experience is in developing desktop environments.
How exactly is it “awkward” for them and how exactly does distributing this in any way improve the development process of KDE? They can't even dogfood it obviously.
Seriously. That's the reason that fax is still popular in the medical industry.
1. You get way fewer faxes than emails.
2. Faxes can't steal credentials.
3. You should be auditing expenses anyway.
I am not a fan. It’s a big outage waiting to happen. It’s an enormous data breach waiting to happen. It will inevitably be enshittified.
If the government had already thought about this in advance (even in 2013, when Doctolib was just starting out), then there could be very strong protections for data, which would allay all of these concerns, and we might have had multiple players in this space.
The best use of Doctolib for me is that I can make appointments without having to speak perfect German on the phone. I can make appointments in the evening when I'm back from the office and can relax a little bit. So Doctolib is a godsend for me as an immigrant here, and I'm guessing for a lot of other people too. I can look up doctors who are available without having to bother the receptionist. This is a much more efficient way of doing things.
What's more, this is a sensitive and regulated field, where trust is essential. They can't afford to mess around if they don't want to quickly find themselves subject to more restrictive regulations.
They were heavily criticised in France because they allowed charlatans and people with no medical training to register (particularly for Botox injections). As soon as this became known, they quickly rectified the situation.
That's ridiculous.
What's more, it's Google, so we're not safe from a ‘Lol, we're discontinuing support for Chrome OS. Good luck, Byeeee.’.
Some offices still have bad memories of Google Cloud Print, for example. I'm not saying that being an early adopter of a distribution that's less than a year old is a good solution. Just that Google's business products don't have a very good reputation.
ChromeOS Flex exists, it is free of charge, and it runs on more or less any x86-64 computer, including Intel Macs.
Nordic Choice got hit with ransomware and, rather than paying, just reformatted most of its client PCs with ChromeOS Flex and kept going with cloud services.
https://www.bitdefender.com/en-us/blog/hotforsecurity/nordic...
Sure it's less popular. It came in under 20 years ago, competing against an entrenched superpower that was already nearly 30 years old back then. It's done pretty well.
The Google Apps for Business bundle has by far outsold every single FOSS email/groupware stack in existence, and every other commercial rival as well.
Notes is all but dead. Groupwise is dead. OpenXChange is as good as dead. HP killed OpenMail.
Their office buys their stuff from a supplier which ships them a Windows box with all the batteries included.
It is possible for somebody to make this into a workable bundle targeting specific professions/environments. A doctor would not care whether double-clicking an icon opens an app through Wine or not.
That's why doctors in my country still prefer legacy physical pen and paperwork, versus interactions with the modern digitized equivalents which are universally hated because they're not designed by doctors but by some consultancy who won the government tender.
Adding dealing with an unfamiliar OS and Wine on top of that is not the slam dunk you think it is.
The average user doesn't want (and shouldn't need) to understand technical stuff like file formats (JPEG vs. PNG), the data load of video streaming, what a "driver" is, etc. Forcing them to grapple with these concepts is a fundamental design failure, but I think it’s a difficult pill to swallow for nerds to accept that others just don’t care about these things.
This is why companies like Apple have been so successful: they don't just simplify the interface, they abstract away the complex, technical reality into a language and experience that feels intuitive and friendly for the users.
[EDIT] The core problem, in case the example didn't make it clear, is that these things interrupt a workflow they use often, and are accustomed to having always work the same way, and do so in service, usually, of showing them a bunch of stuff they don't give a fuck about and didn't really need to know. Even the ones that block interaction to highlight new features are really bad—OK, that's nice, but I'm trying to do the thing I always do with this and you're getting in my way, making my program temporarily behave and look weird and confusing, et c.
She has no conceptual understanding of what an app or a webpage is, or why they're treated differently; she just kinda accepted that she uses something called Firefox to do a search, and some icon on the phone that has the exact name of the other app she wants to use. She never understood (or cared) what it means to “close” an app if she already does that when she presses home or back, no matter how much I try to explain.
When you think about it, it’s all very confusing for them, and since people making these things already understand them well, they make stuff assuming the users will understand the whole thing as well as they themselves do.
No other buttons (visible on the face, anyway) to confuse it for. It's right in comfortable reach of the thumb. "Which button do I push again? Oh right, there's only one."
(I also think going to "swipe up to unlock" instead of the brilliant slider they had before was a big mistake, as far as reducing the level of comfort for the median user)
But your POS system where you enter in orders? That's Linux. And guess what - it just works, it chugs along and does its thing.
There's no reason that doctors' offices couldn't use software that utilizes Linux. And to pretend that Windows is low maintenance? Tsk tsk, Windows is a time bomb.
You can overlay changes to the read-only rootfs using the sysext mechanism. You can load and unload these extensions. This makes experiments or juggling debug stuff a lot easier than mucking about in /usr used to be.
A lot of KDE Linux is about making updates and even hackability safe in terms of making things trivial to roll back or remove. A goal is to always be able to unwedge without requiring a reinstall.
If you know you can overlay whatever over your /usr and always easily return to a known-good state, hackability arguably increases by lowering the risk.
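For anyone who hasn't tried it, a minimal sysext is just a squashfs with an extension-release file in it; roughly like this (the tool and names here are made up):

```
# build a throwaway extension and overlay it onto /usr
mkdir -p debugtools/usr/bin debugtools/usr/lib/extension-release.d
cp ~/bin/mytool debugtools/usr/bin/     # hypothetical tool you want to layer in
echo 'ID=_any' > debugtools/usr/lib/extension-release.d/extension-release.debugtools
mksquashfs debugtools debugtools.raw
sudo mv debugtools.raw /var/lib/extensions/
sudo systemd-sysext merge    # the overlay now shows up in /usr
sudo systemd-sysext unmerge  # back to the pristine rootfs
```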
Immutable distros just one-up that by trying to steer the system in a direction where it can work with a readonly rootfs in normal operation, and nudging you to take a snapshot before/after taking the rootfs from readonly to read-write. (openSUSE has you covered there as well, if that's your thing; it's called MicroOS).
Both of those distros use KDE by default, so the value-add of KDE having its own distribution is basically so they can have a "reference implementation" that will always have all the latest and greatest that KDE has to offer, and showcase to the rest of the Linux world how they envision the integration should be done.
If I were to set up a library computer or a computer for my aging parents, I would choose openSUSE Leap Micro with KDE, as that would put the emphasis on stability instead.
If you already commit all your changes, anyway, what keeps you from using Nix and running one more command (`nixos-rebuild switch`)?
This is a major reason I ended up with https://getaurora.dev. I layer a few things, but it comes with bells and whistles (like NVIDIA drivers, if you need that).
I can't see myself going back to a "normal" distro. I don't want to spend time cosplaying a sysadmin, I have things to do on my computer.
It doesn't, though - as evidenced by my Steam Deck - it adds enough friction to make me not bother most of the time.
However, while I love the approach of having an immutable distribution, I don't see the attack vector of ransomware handled in a good way. It does not help if your OS is intact but your data is irrecoverably lost due to a wrong click in the wrong browser on your system.
I think the backup and restore landscape has enough tools to fix this (cloud + restic[2] or automated ZFS snapshots[3]), but it takes a bit of time / a script to set up something like this for your parents in your favorite distro.
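For the cloud + restic route, the whole thing is only a handful of commands to wire into a timer; a rough sketch (repo location and paths are placeholders):

```
export RESTIC_REPOSITORY=sftp:backup@nas.local:/srv/restic
export RESTIC_PASSWORD_FILE=/etc/restic-password
restic init                               # once, to create the repository
restic backup ~/Documents ~/Pictures      # run this from a cron job or systemd timer
restic forget --keep-daily 7 --keep-weekly 4 --prune
```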
Looks like they used to, so they have removed the option.
Building your own is an option https://github.com/ublue-os/image-template
But I guess it is better to have the option than not to have it.
However, the only distro I could find where it actually worked was Chimera. Not the gaming-related ChimeraOS but the from-scratch LLVM-compiled all-static APK and Dinit distro with a hodgepodge userland ported from the BSDs.
It's rolling release though so it'll happily install the latest bugs. But it probably does that faster than any other distro.
This is more about preventing the user from messing up their computer than it is about data safety.
I've been using Bazzite for 2 years now (an immutable distro based on Fedora Silverblue) and I just love the fact that I can "unlock" the immutability to try something that could mess up my systemd or desktop environment, and I can just reboot to erase it all away.
I also have a GitHub Action to build my custom image with the packages I want and the configuration I want.
And this makes adding a backup setup even easier; it can be baked into the distro easily with a custom image! Your grandparents don't have to do anything; it will auto-update and auto-apply (and even roll back to the n-1 build if it fails to boot).
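To give an idea of how little that takes, a hypothetical Containerfile for such an image might look like this (the restic-backup units are ones you'd write yourself, not something the base image ships):

```
FROM ghcr.io/ublue-os/bazzite:stable
# layer a backup tool into the image itself
RUN rpm-ostree install restic
# hypothetical units that run the backup on a schedule
COPY restic-backup.service restic-backup.timer /usr/lib/systemd/system/
RUN systemctl enable restic-backup.timer && ostree container commit
```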
Isn't the main point that you delegate curating and building the system image to the KDE project?
I hear you. The problem is that basically nothing stops you from building anything yourself. The difference is that there is no easy-to-use built-in solution (like Time Machine), and ease of use is what makes the difference. Especially a TIME difference. Of course there is software SIMILAR to Time Machine, but it seems to be hard to write something rock solid and easy to use.
In fact I also have built it myself: https://github.com/sandreas/zarch A script that installs Arch on ZFS with ZFSBootMenu and preconfigurable "profiles" for which packages and AUR packages to use. Support for the CachyOS kernel with integrated ZFS is on my list.
I already thought about putting together a Raspberry Pi image that uses SSH to PULL backups over the network from preconfigured hosts with preconfigured root public keys and is easily configurable via a terminal UI, but I did not find the time yet :-) Maybe syncthing is just enough...
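The core of the pull idea does fit in one line on the Pi, though; something like this (hosts and paths are placeholders):

```
# the Pi fetches from each preconfigured host over SSH
rsync -az --delete backup@laptop.local:/home/ /srv/backups/laptop/
```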
The philosophy of security in "modern" OSes is to protect the OS from the user. The user is evil and, given too many rights, will destroy the (holy) OS. And user data? What user data? /s
THIS!
I was pondering putting Linux on my father's ancient machine (still running Windows 7; or migrating him to something slightly newer, but Win10/Win11 rubs me the wrong way), but I was wary of "something wrong happening" (and I'm away right now).
And having immutable base would be awesome - if something goes wrong just revert back to previous one and voila, everything still works. And he would have less options to break something…
Innovation happens on stable foundations, not thru rug pulls.
Yes, you have the freedom to make your system unbootable. When Debian first tried to introduce systemd, I replaced PID 1 with runit, wrote my own init scripts & service definitions, and it ran like this quite well, until... the next stable release smashed me in the face.
It's absurd how hackable the Linux distros are. It's also absurd to do this to your workhorse setup.
Immutable/Atomic Linux doesn’t take away any ability to hack and configure it. It’s just a different approach to package and update management.
There really isn’t anything you can’t do with it that you can do on other Linux distros.
I’m using Bazzite which is basically in the Fedora Atomic family and all it really changes is that if I want to rpm install something and there’s no flatpak or AppImage then I just need to decide on my preferred alternate method to install it.
I find Bazzite’s documentation on the subject quite helpful: https://docs.bazzite.gg/Installing_and_Managing_Software/
At the very worst case I’m using rpm-ostree and installing the software “traditionally” and layering it in with the base OS image.
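For the curious, that worst case is only a few commands (htop just as an example package):

```
rpm-ostree install htop      # layers the package into a new deployment
systemctl reboot             # boot into the new deployment
rpm-ostree uninstall htop    # drop the layer again later
rpm-ostree rollback          # or just boot the previous deployment
```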
Now you might be thinking, what’s the benefit of going through all this? Well, I get extremely fast and reliable system updates that can be rolled back, and my system’s personalization and application environment is highly contained to my home directory.
I’m not an expert but I have to think that there are security benefits to being forced into application sandboxing as well. Applications can’t just arbitrarily access data from each other. This isn’t explicitly a feature of immutable/atomic Linux but being forced into installation methods that are not rpm is.
But some people just want a computer to work.
It's not like you can't try a simple distro and move on to something more complex later.
But I wouldn't use KDE for the typical clichéd (grand)parents: it's just way too complicated for someone who doesn't have high proficiency in tech.
Seems like a lot of effort and fanfare for such a niche market.
In fact, from what I understand, it is not really Gentoo-based but Portage-based, as in they for the most part write their own ebuilds and software, and from what I know they have their own custom init system and display system that's not in Gentoo; they simply found Portage very convenient for automating their entire process. That said, "Gentoo is just Portage" is not entirely true either: there's still a supported base system configured as offered by Gentoo, but it's far more flexible than that of most systems of course, granting the user choice over all sorts of fundamental system components.
How's Flatpak doing in terms of health of the tech and the project maintenance?
Merely 4 months ago things didn't look too bright... [1]
> work on the Flatpak project itself had stagnated, and that there were too few developers able to review and merge code beyond basic maintenance.
> "you will notice that it's not being actively developed anymore". There are people who maintain the code base and fix security issues, for example, but "bigger changes are not really happening anymore".
Most distros have a fantastic track record of defending the interests of their users. Meanwhile, individual app developers in aggregate have a pretty bad one; frequently screwing over their users for marginal gain/convenience. I don't want to spend a bunch of time and energy investigating the moral character of every developer of every piece of software I want to run, but I trust that my distro will have done an OK job of patching out blatantly user-hostile anti-features.
It works really well; the one downside is that VS Code extensions are pretty intrusive. They expect system-provided SDKs. So you then have to install the SDKs in the Flatpak container so you have them. If VS Code extensions were reasonable and somewhat sandboxed, that wouldn't be a concern.
All that is to say, Flatpak works well for this purpose too.
So yes, if you install 1 KDE app from Flatpak, you will have the KDE runtime. But that is true if you install 1 KDE app while on Busybox as well. It's the subsequent KDE apps that will reuse the dependencies.
How many versions of openssl are on my Silverblue laptop? I honestly couldn't tell you.
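If you do want the answer on your own machine, something like this lists every runtime actually installed and its size:

```
flatpak list --runtime --columns=application,branch,size
```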
That default option reads like this: “All files in one partition (recommended for new users)”
Two improvements that could be made: 1) Easy: put a brief Note in the installer indicating what might fill up the partitions quickly, so people can have a heads-up, do a little research, and make a better decision. 2) Moderate: still keep the Note, but also check the disk size and maybe ask which type of workload (server, development, home user), then propose something a bit more tailored.
```
# first fields are placeholders for your real devices; the bind entries
# re-mount a directory onto itself just to apply the restrictive options
UUID=<root-fs>  /      ext4  defaults                      1 1
UUID=<home-fs>  /home  ext4  defaults,nosuid,noexec,nodev  1 2
/tmp            /tmp   none  bind,nosuid,noexec,nodev      0 0
/var            /var   none  bind,nosuid                   0 0
UUID=<boot-fs>  /boot  ext4  defaults,nosuid,noexec,nodev  1 2
```
I don't think that's the best conclusion: these days, disk is cheaper than it has ever been, and that "foundational" 8 GB will serve all the Flatpaks you want. Installing apps from packages sprays the same dependency shit all over your system; Flatpak was nice enough to contain it, so you immediately noticed it.
Flatpak is a good idea.
My full / for a desktop Debian with a ton of stuff is under 4 GB.
When installing just two apps, even if both are in the same (KDE or GNOME) realm, you can very easily end up with 8 flatpaks (including runtimes) or more. This is due to a variety of runtimes and their versions: one per KDE or GNOME Platform release (about two a year), plus a yearly freedesktop base, and not all apps being updated to the latest runtimes constantly.
You then have to add at least 6 locale flatpaks to these hypothetical 8 flatpaks.
Especially with Debian, locales matter: if you don't do a `sudo dpkg-reconfigure locales` and pick what you need before installing flatpaks on a default install, you will get everything, and thus gigabytes of translations you don't even understand, wasting your disk space.
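You can also cap it on the Flatpak side; if I recall the knob correctly, this keeps downloads to the languages you list regardless of what the system locales say:

```
flatpak config --set languages "en;de"
```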
Yeah leave this thing to die in peace.
In fact, this is what I've been doing in other distros, like Debian stable, nevertheless I have no real control of the few updates to the base system with side effects.
This is not the first immutable distro, but it comes from the people who develop my favourite desktop environment, so I'm tempted to give it a try. Especially as it looks more approachable than something like NixOS.
The atomic distro approach works a lot better for me. Would not go back to a "normal" distro from https://getaurora.dev.
I run Debian stable, and it's not immutable, but it is very unchanging. I don't worry much about system libraries and tooling.
The downside to that is that userland applications are then out of date; enter Flatpak. I run most GUI applications in Flatpak. This has a lot of benefits. They're containerized, so they maintain their own libraries. They can be bleeding edge, but I don't have to worry about it affecting system packages. I also get much simpler control: no fiddling with AppArmor, the built-in Flatpak permission system is powerful enough.
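As a taste of that control, per-app tweaks are one-liners (the app ID here is just an example):

```
# take away the default home access, grant only Downloads
flatpak override --user --nofilesystem=home org.mozilla.firefox
flatpak override --user --filesystem=xdg-download org.mozilla.firefox
```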
The blind spot then is CLI apps and tooling. Usually it doesn't matter too much being bound to system packages, but if it really does, I can always containerize those too. I only do it for my PHP dev environment.
Do you encounter any friction getting different containerised tools to talk together? Can you compose them in the classical unix fashion?
I basically did the same with Tumbleweed for a couple of years. Can't stand the point release distros. Lagging behind a year or two when it comes to the DE is not something I fancy. Never liked Tumbleweed much though. Felt very unpolished when using Plasma.
> The blind spot then is CLI apps and tooling.
I can really recommend homebrew. Works well. apt is for the system, homebrew is for user facing CLI apps. :)
Apps are bundled and installed like they are on macOS, and there's a very strict distinction between literal 'System', 'Users' and 'Programs' directories.
> It means that users will have to build a custom system image or fiddle with FS overlays just to do system management tasks that are straightforward on all other systems.
What system management tasks? /etc and /var are usually writeable, which is all you need to configure the software on your system. Overlays are for installing new software on the base system, which is only really necessary for something like nvidia drivers because all other software is installable through other means (it's also usually a trivial process). Even if you don't want to use containers, you can use a separate package manager like Homebrew/Nix/Guix/Pacman/etc.
It requires a bit of a mental shift to adapt to if you're only used to traditional systems. It's kind of like the move from init scripts to systemd: it's objectively an improvement in all the ways that matter, but cultural/emotional push back is inevitable :)
If anything is not included in the base image, you have a few options:
1. use distrobox to install it in a container, and export the app to the desktop (see the sketch after this list).
2. use rpm-ostree to install it as a layer. This is on the slow side, and will slow down weekly updates.
3. Make your own base image with what you want included. This is probably cumbersome and requires some infrastructure.
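A rough sketch of option 1, to show how little friction it is (image and app are just examples):

```
distrobox create --name tools --image registry.fedoraproject.org/fedora:latest
distrobox enter tools              # a mutable Fedora userland on the immutable host
sudo dnf install -y inkscape
distrobox-export --app inkscape    # launcher appears on the host desktop
```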
I have a few things in distrobox containers, things which aren't available as flatpaks. The biggest hurdle, for me, was getting wireshark running since the flatpak version can't capture traffic. I had to make a root distrobox container and export the app to my desktop. It works, but there were definitely some hoops to jump through.

I like that updates come through once a week and they aren't applied until I reboot. If I have problems, it is easy to roll back to what I was running before.
I would be comfortable giving my parents an Aurora setup, knowing that they can't easily break it.
You could also build it from source, although that's definitely more work.
Really, I don't see a lot of difference between immutable desktop OSes and Android or iOS. That model is not necessarily a bad one when you're rolling out systems that you don't expect the user to need to fiddle with the system management tasks you refer to. If I have 1,000 laptops to manage for a corporate environment, say, or for non-technical users who are not going to fiddle with drivers but might want to install Inkscape (or not).
I guess immutable distros such as this one target people who don't need much customisation and mostly just need what's already there anyway.
But immutable OSes are helping some sandbox tools make progress and allowing new workflows to manage the OS (virtualized or not).
End users should not have to do system management at that kind of low level. They should be able to focus on accomplishing what they actually want to do and not have to maintain the system themselves.
>you could address more effectively on traditional systems by saving a temporary FS snapshot
That's an implementation detail. Every modern OS essentially uses snapshots for A/B updates to avoid wasting storage space.
Desktop apps are all Flatpaks, including Steam.
Edit: This comment has been downvoted into the negatives? Did something change about HN culture?
Can recommend Bazzite, Bluefin and Aurora which are derived from Atomic Fedora but come with niceties like distrobox and NVIDIA drivers (if you need them).
In other words, with your requirements what are you still doing on Linux?
The other thing that worries me is that I've had a lot of trouble building software that mainly supports BSD from source on linux machines. I'm worried if I switch to BSD, a lot of the software I want won't be available in the package manager, which would be fine, but I'm worried that building from source will also be a pain and binary releases for linux will not be compatible. Sounds like a lot of pain to me.
I'd be happy to be corrected if these are non-issues though.
Docker is Linux-only for now but there is movement in that area. BSD had jails first and there is work on making generic containers work across multiple operating systems, including BSD. But I think the point of using BSD is to not bring Linux with you. Try their way and see how you like it.
Googling "bad operating system" returns useless results.
They could have been the “Build Always Distribution” (BAD)
It seems to me that a project like KDE might be in a very good position to make a very competitive distro, simply because they are starting from the point of the user experience, the UI if you will. Think M$ Windows: it IS the GUI, and fully focused on how the user would use it (I'm thinking of the days of XP and Win 7).
A KDE distro might be less encumbered with "X11 vs Wayland" or "flatpak vs <insert package manager name here>" discussions and can fully focus on the user experience that KDE/Plasma desktop brings!
I'm looking forward to taking this for a spin!
"Well, we’re kind of cheating a bit here. A couple KDE apps are shipped as Flatpaks, and the rest you download using Discover will be Flatpack’d as well, but we do ship Dolphin, Konsole, Ark, Spectacle, Discover, Info Center, System Settings, and some other System-level apps on the base image, rather than as Flatpaks.
The truth is, Flatpak is currently a pretty poor technology for system-level apps that want deep integration with the base system. We tried Dolphin and Konsole as Flatpaks for a while, but the user experience was just terrible."
https://pointieststick.com/2025/09/06/announcing-the-alpha-r...
Strange design.
> Flatpak is currently a pretty poor technology for system-level apps that want deep integration with the base system.
Therefore they ship those apps on the base image, rather than as Flatpaks. I don’t see what’s wrong with this approach.
IMHO KDE delegates too much core functionality to apps. On macOS, I can press "space" while having a file selected and I get an instant preview. This sort of thing must not be delegated.
Which goes completely against the kind of immutable and sandboxed system that KDE Linux intends to be.
They are betting that Flatpak is the future, even if the present experience is subpar.
I'd also expect installing flatpaks offline would be a hassle.
Does this mean they're testing that all the Wayland bugs are fixed? I haven't updated to the new Debian stable quite yet, but all the previous times I've switched to Wayland under promises of "it's working now" I've been burned; hopefully dogfooding helps.
Wayland, KDE, and several other pieces of software evolve rapidly. What may be broken in one release will very likely be fixed a few releases after the last debian stable release.
I'll run Debian on a server if I need predictability and stability with known issues. I won't run Debian on a desktop or workstation for the same reason.
The only issue I have with software conservatism, like Debian, is that some new thing requires something newer. If you live in a world where you can do without the new thing, then it's really quite nice. Security patches are another matter, but are usually dealt with.
I like to be on the bleeding edge, but Debian was created for a reason. Only time can determine which configurations don't suck.
For some obscure reason, bugs are easier to produce than fixes. But the next release will be better. I promise.
I used to "hate" Wayland, but that was because I was stuck on an ancient kwin_wayland implementation that didn't get updated for years on Ubuntu.
When it comes to big changes like Wayland and Pipewire, you really want the latest versions you can get. Like the OP, I only use rolling releases on my machines for that reason.
I'm open to moving to Debian testing/unstable if Wayland can actually deliver. What do you run?
I've also been giving Bazzite to some non-tech people who have not once asked for help. That one is immutable and Wayland only, so it's a further testament to how far Wayland has come if you're on an up-to-date-enough system.
Sadly, I'm stuck on older Ubuntu for my work laptop because the mandated security software won't run on anything better.
> Even as of Ubuntu 24.04
I get that this is the current LTS release, but clearly this isn't what the parent poster had in mind. Notably 24.04 never shipped Plasma 6, which carried a lot of critical Wayland fixes.
I'm on an unholy amalgamation of Arch/Cachy/Endeavour now, but I have been using screen sharing nearly everyday on calls via Firefox on Arch for about a year and it's worked without a problem.
I considered Debian testing, and it does work well on servers, but a true rolling release is more convenient. The software release and update loop is shorter, it's really nice to be able to pull fixes to packages in a reasonable amount of time after they're released.
Ubuntu 24.04 is older than Debian stable currently.
You need to get portals working correctly for screensharing to work
can't share my screen from Firefox
Did you install Firefox manually or do you use the Firefox Snap that's provided by Ubuntu?

I’m sitting with SW acceleration in the browser today because some update broke it. I have had it working in the past but I’ve had like 2-3 updates in the past 2 years break it.
And for what it’s worth there was a really bad tearing bug because of the argument over implicit and explicit synchronization that neither side wanted to fix. I think it’s since been addressed but that was only like in the past 6 months or something. So it’s definitely not been “years” since it’s been working seamlessly. Things break at a greater rate than X because X basically is frozen and isn’t getting updates.
With Arch, you have to read up ahead of time before updating software because it's a rolling release.
I remember one breaking change when I was switching from the previous Nvidia drivers to the new 'open' ones, but some breakage was expected with that change.
So it might make sense to avoid Wayland in that case.
Make sure you have switched over instead of using the old proprietary one.
I've been running Debian stable (with backports) as my desktop for a couple of years now. I find that KDE is updated enough, and Wayland is stable enough (on my hardware, of course: a 13-year-old MacBook and an 8-year-old NUC). Honestly, as a simple user, I haven't appreciated any difference between X and Wayland sessions, so I just log in to Wayland.
... um, okay, that's true, although in the last 10+ years it did not "rapidly" reach stability
We think that the Wayland session currently is the better choice for the majority of our users. It's more stable and polished, performs better on average, and has more features and enables more hardware.
That said there are still gaps in the experience, and non-latin input is one of them. In principle it's roughly on par with X11, but it's still pretty crap, e.g. at the moment we give you a choice between having a virtual keyboard and having something like ibus active, when many users want both at the same time, and a lot of the non-latin setup stuff is still not part of the core product, which is very user-unfriendly.
The KDE Linux alpha in particular will definitely and unequivocally not serve you well, as it currently doesn't ship any ibus/fcitx.
The good news is that this is something we are very actively working on in the background right now. We have an annual community-wide goal election process that made Input improvements one of our goals, and the topic has been all over our developer conference this year.
I'm currently stuck on Windows for some old school .NET work, but otherwise have been running Wayland on either arch or fedora for 8 or so years, no real problems specific to Wayland. With that said, I've also always had X to fall back to for the odd program that absolutely only worked in an X session. At this point, though, I don't even recall what they were (probably something that didn't like running under Swaywm because wlroots), so even that might not be an issue.
If it wasn't a default, it'd go back to barely being used.
Also the taskbar is just broken in general. It'll pull tons of apps behind the '...' button even though there's plenty of room on the taskbar and it'll also put fake apps that aren't actually open on the taskbar.
Also no vertical task bar. Come on Microsoft.
Yes, this was a while ago now. But just as now, people said then "all the bugs are fixed and missing features added"; all that really means is "we're in the long tail". I might've put up with it if not for the fact that there were 2ish major bugs that directly affected my main workflow (e.g. temporarily swapping to non-Latin text input).
Fractional scaling is fixed in Plasma 6 though. So, if you need that, it has been good for 1 year now.
Wayland is, by far, the best windowing system I've ever used. No dropped frames, ever. It's actually kind of uncanny; it feels like you're using an iPhone.
Even Windows and macOS struggle with this; just look at how messy fractional scaling is: https://devblogs.microsoft.com/oldnewthing/20221025-00/?p=10...
And yet, Linux/KDE has been pushing GUI innovation for decades. Apple and Microsoft have copied so many KDE features it’s hard to keep track.
Bugs in the window manager or shell (both shipped by KDE) are somewhat more common, but even if they are crashes, X11 is better designed for isolated faults, so they are easily recovered from without loss of session.
But I'm pretty sure at least half of them actually do work under X11, it's just that some UI libraries refuse to use it on the grounds of "X11 is outdated, I won't support features even though it does".
(also, having played around with DPI stuff on Wayland, it's pretty broken there in practice)
No HDR or high DPI is an annoyance. Not supporting accessibility is a real deal breaker, especially in commercial settings where things like Americans with Disabilities Act compliance matter. And even more for me, with my retinas slowly tearing apart and losing my eyesight: the entire Wayland ecosystem is extremely inconsistent and buggy.
I guarantee you spend more time "configuring" linux than actually being "productive" with it.
All of those are productivity things
I suspect HDR support could be added if someone were to retrofit it like how VR support was added, but no one really wants to work on that.
Perhaps people ought to listen to the Xorg devs when they say X11 is broken and obsolete, and you should be using Wayland instead. Every single one of them says this.
This is incorrect. Alan Coopersmith does not share this view and he is a Xorg developer. Anyone repeating this propaganda is arguing from ignorance.
That said, I have used both X11 and Wayland. X11 does its job well in many applications and honestly, we would have been better off had Wayland just been a X11 extension. As for things being broken, I have encountered far more brokenness when using Wayland than when using X11 exclusively. Wayland has gotten better of late, especially in desktop applications, but I do not consider it a replacement in general.
Just recently, I built a display based on a CM4 at work that uses X11, and I can remotely view and interact with the screen using x11vnc, which is fantastic for remote development and debugging. That is a convenience I simply do not have with Wayland. I have tried to do Steam remote play by streaming from a desktop to my Steam Deck. If the desktop is running a Wayland session, a pop-up appears on the physical display asking if I want to permit remote access, and I currently have no way of clicking it without being physically present. This ruins the remote play experience, which I want to use precisely when I am not physically present. An X11 session does not have this problem. Even if a tool like x11vnc existed for Wayland, it would be useless if it triggered the same prompt Steam Remote Play does. :/
what are you talking about?
R10G10B10 matches most HDR displays out there, AFAIK even Windows uses that outside of DirectX (where FP16 is used).
But beyond that...
> Fixing this would require rewriting the entire protocol.
...most of the protocol does not have to do with color values. X11 is extensible, and an extension can be used that allows alternative functions that use more advanced color values where that'd make sense. For example, assuming you want to use "full" range color values for drawing functions like XFillPolygon, etc., you'd want to put the extended-range state in graphics contexts and introduce extended commands for changing it (with the existing commands simulating an SDR color for backwards compatibility). That is assuming R10G10B10 is not enough, of course (though because for decades many applications assumed 8-bit RGB, it is a good idea to do sRGB/SDR simulation for existing APIs and clients regardless of the real underlying mode of the monitor, unless a client either opts in to using extended color or uses the new APIs).
Another thing to keep in mind is that these are only really needed if you want to use the draw primitives with extended color / HDR. However most HDR output, at least currently, is either done using some other API (e.g. Vulkan) or via raw pixel data. In that case you need to configure the window output (a window region, to allow for apps with mixed color spaces in a single window - e.g. think Firefox showing an SDR page with an HDR image) to use a specific color space/format and then rely on other APIs for the actual pixel data.
This is something i wanted to look into for a while now, unfortunately other stuff always ends up having more priority - and well, my "HDR" monitor is only HDR in name, it barely looks any different when i try to enable HDR mode in KDE Plasma under Wayland for example :-P. I do plan on getting an HDR OLED monitor at some point though and since i do not plan on changing my X11-based environment, i might take a look at it in the future.
Once again, every... last... one of the Xorg devs is of the opinion that you should be using Wayland instead. Even if you had changes to propose to Xorg, they will not make it into a release. If you insist on soldiering on with X, your best bet is probably to contribute to Wayback, likely to be the only supported X11 display server in the near future, and see if you can add a protocol to the compositor to allow "overlay" of an HDR image displayed using Wayland "on top of" an X window that wants to do HDR.
But really, consider switching to Wayland.
https://www.x.org/wiki/AlanCoopersmith/
You will not find one.
I use X11 features such as highlight-to-copy, then the middle mouse button and/or Shift-Insert to paste the selection (just to mention one), and I use xclip extensively to copy the contents of files (and stdin) to the clipboard. I use scrot, I use many other applications specifically made for Xorg, and so forth. I have a custom Xorg config as well, which may or may not work with Wayland.
Thus, I do not think I could realistically switch to Wayland.
I won't say anything against your other points (and in fact I am typing this comment on Xorg because I have my own list of reasons), but https://github.com/bugaevc/wl-clipboard is almost drop-in for xclip/xsel.
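The mapping from the xclip habits is pretty direct; roughly:

```
wl-copy < notes.txt      # like: xclip -selection clipboard < notes.txt
some-command | wl-copy   # pipe stdin to the clipboard
wl-paste > out.txt       # like: xclip -selection clipboard -o > out.txt
```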
My comment isn't about how much work something would need, but about how it can be done.
> Once again, every... last... one of the Xorg devs is of the opinion that you should be using Wayland instead.
Good for them, but i have my own opinions.
> Even if you had changes to propose to Xorg, they will not make it into a release.
Maybe or maybe not. AFAICT the official stance has been that nobody wanted to work on these things, not that they are against it, they just do not want to do it themselves.
But if they do not make it into a release, there is also the XLibre fork or there might be other forks in the future, it isn't like Xorg is some sort of proprietary product. I'd rather stick with Xorg as it seems more professional but ultimately whatever works.
> see if you can add a protocol to the compositor to allow "overlay" of an HDR image displayed using Wayland "on top of" an X window that wants to do HDR.
TBH this sounds like an incredibly ugly and fragile hack. There are two main uses for HDR support: embedded HDR (e.g. in a firefox window) and fullscreen HDR (e.g. for videos/games). For the latter there is no point in an overlay, just give the server the full screen. For the former such an overlay will require awful workarounds when you want more than just a self-contained rectangle, e.g. you want it clipped (partially visible image) or need it to be mixed with the underlying contents (think of a non-square HDR shape that blends into SDR text beneath or wrapped around it).
From a pragmatic perspective the best approach would be to see how toolkits, etc, use HDR support in Wayland and implement something similar under X11/Xorg to make supporting both of them easy.
> But really, consider switching to Wayland.
I've been using Window Maker for decades and have no interest in something else. Honestly i think that adding Wayland support to Window Maker or making a Window Maker-like Wayland compositor are both more of an effort and harder than adding HDR support to Xorg. Also i am sometimes trying KDE Plasma Wayland for various things and i have several programs having small but annoying issues under Wayland.
That said, from a practical perspective, one can use both. The only use for HDR i can think of right now is games and videos and i can keep using my Xorg-based setup for everything while switching to another virtual terminal running KDE Plasma Wayland for games/videos that i want to see in HDR. Pressing Ctrl+Alt+Fn to switch virtual terminal isn't any different than pressing Win+n to switch virtual desktop.
X maintainers said it is a feature they do not want to implement. Because "we work on Wayland now, Wayland better".
https://www.x.org/wiki/AlanCoopersmith/
That said, he has not volunteered to implement HDR.
If you're about to tell me that XLibre is a viable alternative, no you're not because it isn't.
https://www.x.org/wiki/AlanCoopersmith/
Xorg will continue to exist even if Redhat pulls out, but Redhat needs it for XWayland indefinitely.
Reading this thread makes me want to try KDE/Wayland again, so probably on my next install I'll give it another shot. If it's still crap I think it's time to switch off of KDE.
That said, I am probably preaching to the choir, as I think we are both moderates as far as the X11 vs Wayland debate is concerned.
X11 has a workaround for that because I can use gamma correction to simulate brightness control and make it work with night light. There was no way to do it in Wayland: they stomp on each other and undo whatever the other software did. So I'm back to X11 and frankly I don't notice any difference.
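For reference, the workaround I mean is just the gamma ramp (the output name varies per machine; check xrandr -q):

```
# software "brightness" plus a warmer tint, both via the gamma ramp
xrandr --output eDP-1 --brightness 0.7 --gamma 1.0:0.9:0.8
```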
If you have more luck with your graphics card you'll probably be OK with Wayland. Anyway, the X11 session is there; log out from Wayland and log in using X11.
Alternatively, it is theoretically possible to forward port their driver yourself since their kernel compatibility shims are open source and you can see what changes they made in newer versions to support newer kernels. This is likely a masochistic exercise however.
Tell me more, please.
Does it only have an nVidia or is it dual GPU and switching?
Because I have the latter and the lack of GPU drivers is keeping me on Ubuntu 22.04.
Is it possible you're just using the Intel GPU and your nVidia is inactive?
With Debian 11, kernel 5.10.0-35-amd64
I was sure that I was using the NVIDIA driver 390, but I ran dpkg -l before replying to you and found out that actually I'm running the 470.256.02 driver. I definitely run the NVIDIA card because NVIDIA X Server Settings is telling me that X Screen 0 is on "Quadro K1100M (GPU 0)". I see it also in /var/log/messages and
$ lspci -k | grep -A 3 VGA
01:00.0 VGA compatible controller: NVIDIA Corporation GK107GLM [Quadro K1100M] (rev a1)
DeviceName: 0
Subsystem: Hewlett-Packard Company ZBook 15
Kernel driver in use: nvidia
cpuinfo reports that my CPU is an i7-4700MQ CPU @ 2.40GHz which according to the Intel web site has an internal Intel® HD Graphics 4600. I think that I never used it. NVIDIA X Server Settings does not report it but it's a NVIDIA program so I would not be surprised if it does not see it. Anyway, the kernel module for Intel should be i915 and it's not there. Maybe I have to load it but I'm phasing out this version of the OS. I'm pretty sure I never installed anything to switch between the two GPUs. There used to be something called Bumblebee. Is that what you are using now?

Apparently I can install the 470 driver in Debian 13 https://forums.debian.net/viewtopic.php?t=163756 but it's from the unstable distribution and if Nouveau works I'm fine with that. I'm afraid that the NVIDIA driver and Wayland won't mix well even on 13 so I'll be on X11 anyway.
I use older Thinkpads with Optimus switching, so using the Intel GPU is not optional: it is always on, but the OS offloads GPU-intensive stuff to the nVidia GPU.
In my testing with Debian 12, I could not get my nVidia chips recognised at all. In some distros, this has the side-effect of disabling the Displayport output, which is a deal-breaker as I use an external monitor and the machines do not have HDMI.
Jank and glitches. Jank and glitches.
And no, GNOME's Wayland compositor did not achieve it either. They threw away all accessibility support and then invented two new GNOME-only protocols for it that no software except GNOME's own compositor supports.
Cross that hurdle and I can go back to trusting the Linux Desktop for business things.
If you want a good, actually professional rolling release, use SUSE Tumbleweed. They test packages more thoroughly, and they actually hold back breaking or buggy changes instead of the "lol read log and get fucked" policy.
Arch is a DO IT YOURSELF distro. They write that thing everywhere they can. The stability of the installation is literally ON YOU! Your responsibility as a DO IT YOURSELF distro user. They didn't trick you into it or something.
Expecting Arch Linux to spoon-feed you is like expecting IKEA to give you assembled furniture.
You should use openSUSE or other "managed" rolling release distros. Arch IS NOT A "managed" rolling release distro.
https://www.unsungnovelty.org/posts/01/2024/a-linux-distro-r...
It's software. It will work the way it is written. As simple as that.
I am new to Arch and would like the notifications that you are talking about.
I am currently on Arch specifically because Tumbleweed shipped a broken Firefox build and refused to ship the fixed version for a month.
As a workaround I uninstalled the bundled Firefox and replaced it with the flatpak. And on the next system update the bundled Firefox was back, because for some strange reason packages on SUSE are bundled.
We've had different experiences. I've been using Arch for about 8 years and have had to scour the forums no more than thrice to find the magic incantations to fix a broken package manager. In all cases, the system was saved without a reinstall. However, it is certainly painful when pacman breaks.
$ cat /etc/issue
Antergos Linux \r (\l)
;-)

Sometimes once a month, sometimes once a week, sometimes more if there's a critical CVE.
It is a million times more sane to have a package manager throw a warning or an error when a breaking change is about to be applied, rather than just YOLO the breaking change and pray people read the release log.
It is one of the most stupid policies ever, and the main reason why I will steer everyone away from Arch forever. Once bitten, twice shy.
I do subscribe to the arch-announce mailing list which warns of breaking changes, but that receives around 10 messages per year, and the vast majority aren't actually all that important.
I've also gone multiple months between updates and didn't have any problems there either.
The idea that Arch Linux breaks all the time is just complete nonsense.
Here's one of the oldest versions of the "Arch-based distributions" page on the wiki. It has a notice at the top that says that forks are not supported by the community or developers: https://wiki.archlinux.org/index.php?title=Arch-based_distri...
The only (real but small) difference is between desktop environments and their choice of default apps (eg. file manager).
EDIT: wow, all the comments are like that. I guess something has to come first.
GNOME doesn’t maintain Ubuntu or Fedora, but it still dominates the Linux desktop experience.
It actually looks a lot like what KDE is shipping here, except GNOME provides it as a reference system for their developers at the moment, but it's totally usable as a user if you want to.
No, it does not, in any way whatsoever.
GNOME OS does not have dual root partitions, Btrfs with snapshots and rollback, atomic OS updates, or any of the other resilience features which are the unique selling points of KDE Linux.
In case you are unfamiliar with the design of KDE Linux, I have described it in some depth:
https://www.theregister.com/2025/08/04/kde_linux_prealpha/
And I compared it and GNOME OS here:
https://www.theregister.com/2024/11/29/kde_and_gnome_distros...
Hang on. I have to say [[citation needed]] on this.
I write about systemd regularly, and read Lennart's blog and Mastodon feed. As evidence, I did an in-depth on systemd 258 just a month or so ago:
https://www.theregister.com/2025/07/25/systemd_258_first_rc_...
I do not personally use GNOME or GNOME Boxes and I've never managed to get GNOME OS to so much as boot successfully in a hypervisor or on bare metal, and I've tried many times.
But I don't think it adopts all these fancy features yet.
ParticleOS does:
https://github.com/systemd/particleos
But that's a separate distro. It's not GNOME OS. It's the testbed for the "fitting everything together" concepts.
Adrian Vovk's CarbonOS did much of this:
... but it's dormant now. He wants to turn GNOME OS into something like that, as he has said:
https://blogs.gnome.org/adrianvovk/2024/10/25/a-desktop-for-...
And I have written about:
https://www.theregister.com/2024/11/29/kde_and_gnome_distros...
I am not aware it has happened yet, though.
[1]https://www.osnews.com/story/139696/gnome-os-is-switching-fr...
I will indeed have a look, ASAP -- but I hope this version is a little more tolerant of non-GNOME/non-RH hypervisors, or I won't get far...
You mean apart from the fact that they are both immutable OSes allowing the use of Flatpak for software distribution?
Because from where I stand they have a lot more in common than different.
> GNOME doesn’t maintain Ubuntu or Fedora
What differentiates GNOME from KDE in that regard (other than it'd be Kubuntu and the Fedora KDE spin from the other perspective)?
Adding on from this new comment: Given whatever differences you see for GNOME in the above, why do you think GNOME has maintained its own testing OS for the last 5 years despite this?
You put the things in quotation marks but I do not see these phrases in the thing to which you're commenting.
KDE is roughly a year older than GNOME.
Snag: KDE was built in C++ using the semi-proprietary (dual-licensed) Qt. Red Hat refused to bundle Qt. Instead, it was a primary sponsor of GNOME, written in plain old C not C++ and using the GIMP's Gtk instead of Qt.
This fostered the development of Mandrake: Red Hat Linux with built-in KDE.
In the late 1990s and the noughties, KDE was the default desktop of most leading Linux distros: SUSE Linux Pro, Mandrake, Corel LinuxOS, Caldera OpenLinux, etc. Most of them cost money.
In 2003, Novell bought SUSE and GNOME developer Ximian and merged them, and SUSE started to become a GNOME distro.
Then in 2004 along came Ubuntu: an easy desktop distro that was entirely free of charge. It came with GNOME 2.
Around the same time, Red Hat discontinued its free Red Hat Linux and replaced it with the paid-for Red Hat Enterprise Linux and the free, unsupported Fedora Core. Fedora also used GNOME 2.
GNOME became the default desktop of most Linuxes. Ubuntu, SUSE, Fedora, RHEL, CentOS, Debian, even OpenSolaris, you got GNOME, possibly unless you asked for something else.
KDE became an alternative choice. It still is. A bunch of smaller community distros default to KDE, including PC LinuxOS, OpenMandriva, Mageia... but the bigger players all default to GNOME.
Many of the developers of GNOME still work for Red Hat today, over 25 years on. They are on the same teams as the developers of RHEL and Fedora. This is a good reason for GNOME OS to use a Fedora basis.
This is a common misconception. RHEL and RHL co-existed for a bit. The first two releases of RHEL (2.1 and 3) were based on RHL releases (7.2 and 9). What was going to be RHL 10 was rebranded and released as Fedora Core 1. Subsequent RHEL releases were then based on Fedora Core, and later Fedora.
https://docs.fedoraproject.org/en-US/quick-docs/fedora-and-r...
Sure, there was overlap. Lots of overlap. You highlight one. Novell bought SUSE, but that was after Cambridge Technology Partners (IIRC) bought Novell, and after that, then Attachmate bought the result...
But you skip over that.
I think as a compressed timeline summary, mine was fair enough.
It is really important historical context that KDE is the reason that both Mandrake and GNOME exist, and it's rarely mentioned now. Mandrake became Mandriva, then died, but the distros live on, and PC LinuxOS in particular shows how things could have gone if there was less Not-Invented-Here Syndrome.
I don't think "well, actually, this happened before that" is as important, TBH.
No?
Quotes are overloaded in that they are used for more than direct citation. In this case: to separate the "phrase" from "the sentence talking about it" (aka mention distinction - as used here as well). "s are also seen in aliases, scare quotes, highlighting of jargon, separating internal monologue, and more. If it doesn't seem to be a citation it probably wasn't meant to be one. On HN, ">" seems to be the most common way to signal a literal citation of something said.
This is a fair enough, even more detailed, summary of the history, but I'm still at a loss for stitching this history to what KDE should be doing today. Similarly, for why this relationship results in good reasons for GNOME OS to exist but KDE Linux? E.g. are you saying KDE Linux should have been based on something like openSUSE (Plasma is the default there) instead of Arch, that they should have stuck to several more decades of not having a testing distro, or that they should do something completely different instead?
I don't use GNOME or KDE as my DE, so I genuinely don't know what GNOME might be doing that KDE should be doing instead (and vice versa) all that deeply. The history is good, but it's hard for me to weed out what should be applying from it today.
Or maybe I completely read too far into it and it was only a statement that GNOME has historically been more successful than KDE. It's known to happen to me :D.
Let me emphasise the executive summary:
1. KDE was first.
2. KDE used to enjoy significant corporate backing.
3. Because of some companies' actions, mergers and acquisitions, etc., other products gained ascendancy.
4. KDE is still widely used but no longer enjoys strong corporate backing.
5. Therefore KDE is going it alone and trying something technologically innovative with its showcase distro, because the existing distro vendors are not.
The KDE Linux section of this recent article of mine spells out my position more clearly:
https://www.theregister.com/2025/09/10/kde_linux_and_freebsd...
If yes, what are some good options for someone looking for a replacement to ChromeOS Flex on an old but decent laptop?
To add something useful, OSes are the one area where reinventing the wheel leads to a lot of innovation.
It's a complete strip down and an opportunity to change or do things that previously had a lot of friction due to the amount of change that would occur.
To me, it seems like the opposite is true. Operating systems feel like a solved problem. What are some of the big innovations of recent times?
Even the desktop environment is not solved. I'm typing this from a relatively new method of displaying windows: a scrolling window manager (e.g. Karousel [1] for KDE). It just piles new windows to the right and scrolls horizontally forever. This seems like a minor feature, but it changes how you use the desktop entirely, and it required a lot of new features at the operating system level to enable. I wouldn't go back to a desktop without this.
The immutable systems like NixOS [2] have been an absolute game changer as well. Some parts are harder, but having the ability to always roll back, plus the safety of immutability, really makes your professional environment so much easier to maintain and understand. No more secrets, no more "I set something for one project at the system level and now, years later, I forgot and something doesn't work".
I've been on the Linux desktop exclusively for almost 15 years now, and it has never been as much fun as it is today!
I've long wanted a scrollable/zoomable desktop, with a minimap that shows the overall layout. Think the UI of an RTS game, where instead of units you move around and resize windows. This seems like something in that direction, at least.
How does Karousel work with full screen applications, e.g., games?
I would love to see a complete overhaul of those.
In my opinion, if I type "xeyes" and it works (the app shows on my screen), then I should be able to start any other X11 application. However, gnome-terminal behaves differently. I don't know precisely why, but using dbus-launch sometimes works. It is a very annoying issue. A modern Linux desktop feels like microservices connected by duct tape: sometimes it works, and sometimes it doesn't.
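The duct tape is visible in the workaround itself (a sketch; dbus-launch and dbus-run-session are the usual suspects):

    # A plain X11 app only needs DISPLAY (plus an Xauthority cookie)
    DISPLAY=:0 xeyes

    # gnome-terminal also wants a D-Bus session bus, so it can fail
    # where xeyes succeeds; wrapping it in a bus sometimes fixes it:
    dbus-launch gnome-terminal
    # or, on more recent setups:
    dbus-run-session -- gnome-terminal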
As far as the actual OS, the new sheaves and barns thing in Linux is neat. We need innovation in RAM compression and swapping to handle bursty desktop memory needs better.
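Some of that exists today in the form of zram, compressed swap kept in RAM (a sketch using the kernel's sysfs knobs; the size and algorithm are illustrative):

    # Create a compressed swap device backed by RAM
    sudo modprobe zram
    echo zstd | sudo tee /sys/block/zram0/comp_algorithm
    echo 4G   | sudo tee /sys/block/zram0/disksize
    sudo mkswap /dev/zram0 && sudo swapon -p 100 /dev/zram0

The open question is making this adapt to bursty desktop workloads automatically rather than via static tuning.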
The main problem, and the one I'm trying to solve, is that as a software engineer, you have little incentive to make something that millions of people will use on the Linux desktop unless you have some other downstream monetization plan. You will have tons of users who have questions, not code contributions. To enable users to better organize into their own support structures and to make non-code contributions, I'm building PrizeForge.
Agreed, but...
> rewrite the kernel
Why would you do that? The kernel already has all the tools you need for isolating apps from each other. It's up to userspace to use these tools.
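For example, namespaces alone already go a long way (a rough sketch using util-linux's unshare; real sandboxes layer seccomp filters and cgroups on top):

    # Start a shell in fresh PID, mount, and network namespaces:
    sudo unshare --pid --fork --mount-proc --net bash

    # Inside: only this shell's own process tree is visible...
    ps aux
    # ...and there is no network beyond an isolated loopback device
    ip link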
You can build a skyscraper on top of the foundations of a shed, and the kernel devs have done an amazing job at that, but at some point you gotta conclude that maybe it is better to start from scratch with a new design. And security is a good enough reason.
What I'm afraid of is starting to experiment and finding, more and more, that my workflow is hindered, either by some software not working because the architecture of the OS is incompatible, or by KDE's UX design choices in the user interface.
That's not to say that it wouldn't be interesting, and it would say nothing about the quality of the software if I'd hit such walls, only that I'm not its target audience.
So I can really separate the system-level changes (in the image, version-controlled) from my user changes.
It's a nixos-like experience without using nix at all.
There have been a couple of things to keep in mind with my Bazzite installation. For creating users or adding groups, for example, I was pointed to systemd-sysusers, but it was simple.
But for Bazzite (and other Universal Blue distros) you're better off using BlueBuild.
In the end it's an OCI container image, so you could technically just have a Dockerfile with "FROM bazzite:whatever" at the top, but BlueBuild automates the small stuff you need to do on top of it and lets you split your config across files.
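A minimal sketch of that raw-Dockerfile route, for the curious (the image tag and the package are illustrative, not a recommendation):

    # Containerfile: layer your changes onto the upstream OS image
    FROM ghcr.io/ublue-os/bazzite:stable

    # Bake an extra package into the image itself
    RUN rpm-ostree install htop && \
        ostree container commit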
You can have a look at my repository to see how easy it is!
The nice thing about Fedora Silverblue's model is that it is literally a container image, so to "build" your image you can run arbitrary commands, which makes it way simpler than Nix.
No, you can't.
If you want Arch but with snapshots and rollback, Garuda Linux does that by default. It is not immutable, though.
Atomic updates are not the same thing as backups.
Backups: my file is gone, overwritten, corrupted, I accidentally deleted contents I want... but my computer is working, so I will retrieve a copy from my backup.
Atomic updates: aargh, my computer was halfway through installing 150 MB of updates across 42 packages, but one file was bad, the update failed, I rebooted, and now it won't boot! No problem: reboot to the boot menu, choose the previous snapshot (a known-good config), and you can boot up and get back to work until the update is available again.
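With snapper on btrfs (which is what Garuda automates), that flow looks roughly like this (a sketch; picking the snapshot happens in the boot menu):

    # Take a read-only snapshot of the root subvolume before updating
    sudo snapper -c root create --description "pre-update"

    # After a broken update: boot the previous snapshot from the boot
    # menu, then make it permanent and reboot into it
    sudo snapper rollback
    sudo reboot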
Funny; sounds more like a BSD (a prebuilt single-artifact Arch "base system" + KDE Builder-based "ports collection") than a Linux.
> KDE Linux is an immutable distribution that uses Arch Linux packages as its base, but Graham notes that it is "definitely not an 'Arch-based distro!'" Pacman is not included, and Arch is used only for the base operating system. Everything else, he said, is either compiled from source using KDE Builder or installed using Flatpak.
This is where I've been for the last 7 years. Very happy with it. I'm looking forward to an Arc Pro machine with SR-IOV GPU capability for VMs. That is pretty much my dream desktop, as much as I care to have one.
I think the concept has promise (see: ChromeOS) but the execution today is still way too rough.
That said, I don't think having yet another immutable distro is a great idea if they are only going to punt and use Flatpaks. You can run Flatpaks on any distro out there, so I'm not really understanding the idea behind this. Nothing really stands out from the article: they still need to make KDE work great on most other modern distros, so it isn't as if a Flatpak-based KDE is going to give them an edge in having the best KDE on their own distro.
What am I missing?
This distro doesn't seem to be born out of some real need from non-KDE-developers? Maybe it should just be a playground for KDE devs to test-drive new tech?
It's born out of a few things:
a) KDE as a community has increasingly focused on good, direct relations with end users of late, which e.g. has resulted in most of its funding now coming from individual donors. Wanting to make more of their computing experience better isn't a strange impulse to have.
b) The community has hardware partners (e.g. laptop makers) that want to collaborate on pre-installing something with a KDE-focused out-of-the-box user experience. That has so far been Neon, which has a number of engineering and stability issues that have been difficult to overcome. KDE Linux is an attempt to improve on that.
c) It's also generally driven by lessons learned from the SteamOS and Neon projects, and is attempting a lot of new solutions for risk-free updates, hackability, the out-of-the-box experience, and, down the road, likely also backups. The team does think there is a value proposition to the distro as such, beyond the KDE GUI.
d) The developer audience isn't unimportant either. More KDE developers on an immutable+sandboxed apps distro will mean more eyeballs on e.g. Flatpak problems, improving that for everyone else. Many recent new distros that ship Plasma by default (e.g. SteamOS, Bazzite, CachyOS, etc.) benefit.
a) I get that a lot of users use KDE, and they love the desktop environment. But is there demand for an OS? Would those users switch? I hope so, but for such a big decision, to build, support, and maintain a whole OS, I'd expect some kind of poll, maybe? Some input saying "30% of KDE users would switch to KDE OS"? Is there some kind of proof? I've been using GNOME for years but never felt I would want to switch to some GNOME OS. The desktop environment is one of many tools in my distro (for me, at least).
b) Supporting lots of hardware (especially laptops!) seems like a huge time sink for people not primarily involved in kernel/driver stuff, no?
c) ok..
d) Same as a): will all KDE devs use KDE OS? And is it good to have the KDE devs on KDE OS when the majority of users use Arch/Debian/Ubuntu/Fedora? I'd rather have a good chunk of those devs use my distro...
But then, since / is rw and only /usr is read-only, it should be possible to install additional kernel modules, just not ones that live in /usr - unless /lib is symlinked to /usr/lib, as happens in a lot of distros these days.
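It's easy to check which layout a given system uses (a quick sketch):

    # On a merged-/usr distro this resolves to /usr/lib; on a split
    # layout, /lib is a real directory you could still write into
    readlink -f /lib

    # Where modprobe actually looks for the running kernel's modules
    ls /lib/modules/$(uname -r)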
Well, as long as they're either updating frequently or you're not using nvidia drivers (which are notoriously unpleasant with Wayland) I guess it's fine for a lot of people.
Asking this as a user who would really love to move away from X11, but every time I try anything Wayland-related it's just alpha or pre-alpha: endless graphics glitches, windows going black or flickering (double the glitches after turning the display off/on), multiple rendering issues with Firefox, CLion, etc.
I think I'm mentally preparing to use X11 until retirement....
The thing is the first 90% of software is the easy part. Once you've done that you still need to do the other 90%. And the latter 90% is what separates little hobbyist weekend projects from products. It's a relentless boring grind of testing, fixing bugs and sharp edges and adding workarounds.
Using the NVIDIA proprietary driver... glitching like a MOFO. Looks slick, but just way too buggy to be used.
Some things to try:
* Try turning your display on/off
* Try using several virtual displays and spread graphics apps on each one (I use 4 normally)
* Try opening 20 firefox windows with ~50 tabs each
* Try opening an 8k PNG in a Firefox tab (or in some other image viewer)
So yeah... pre-alpha. P.S. I also tried XFCE and Enlightenment, and those are not any better (not that they claim to be anything but pre-alpha).
Honestly... on Windows 11 the experience is just so damn smooth and slick. Nothing glitches or hangs. The Linux graphics stack just lags behind decade after decade... never catches up...
Ah, I haven't tried it on NVIDIA drivers in a while.
I'm doing a reinstall on my gaming PC soon, so I'll give it a shot then. I've been using it on Intel and AMD systems and haven't had issues. But you know, those vendors actually have drivers designed for the modern Linux graphics stack.
> P.S. I also tried XFCE and Enlightenment, and those are not any better (not that they claim to be anything but pre-alpha).
So... maybe the NVIDIA drivers then? And not KDE Plasma?
> The Linux graphics stack just lags behind decade after decade... never catches up...
Come on, you can't really blame NVIDIA's dogshit drivers that refuse to integrate into the rest of the stack on the KDE devs.
Yeah, well, the reality is that NVIDIA drivers are the drivers one wants to use on NVIDIA hardware (which many of us have).
And somehow they work fine on X11.
It's always nice to blame the driver vendor, but what have the Linux community, the kernel team, and the graphics team done to promote Linux and make it simple to write correct, performant drivers for the platform? How many graphics memory allocators are there? How many buffer-sharing APIs? Are the kernel driver interfaces stable?
The main differences are related to packages. The package format (.deb, .rpm, etc), the package manager (dpkg/apt, pacman, dnf, etc), how frequently the packages are updated, if they focus on stability or new features, etc.
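Concretely, the same operation looks like this across the major families (the package name is illustrative):

    # Debian/Ubuntu (.deb)
    sudo apt install vlc

    # Fedora (.rpm)
    sudo dnf install vlc

    # Arch/Manjaro
    sudo pacman -S vlc

    # openSUSE (.rpm)
    sudo zypper install vlc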
New Linux users coming from Windows or Mac sometimes dislike one distro and like another, when what they actually disliked was the desktop environment. For example, Kubuntu uses KDE Plasma as its desktop environment and its user experience is almost the same as Fedora KDE, Manjaro KDE, openSUSE, and so on, while it's very different from default Ubuntu (which uses GNOME). But under the hood, Ubuntu and Kubuntu are the same (you can even uninstall KDE and install GNOME).
Actually, other Unix-based systems can run the same desktop environments we have on Linux, so if you have FreeBSD with KDE you won't even notice the difference from Kubuntu at first, even though it's a completely different operating system.
tl;dr: there's a real difference, but from a user perspective it's mostly under the hood, not exactly in usability.
Fedora Atomic KDE is close to perfect. Where is the need to reinvent the wheel?
According to kde.org/linux it comes with Flatpak and Snap, plus Distrobox and Toolbox. They don't seem to pick one lane and stay consistent; it's all kind of random.
KDE and GNOME are footing the bill for Flathub together, and a lot of the community effort goes into Flatpak packaging.
There's no package manager and you can't install, remove, or upgrade packages.
You get whole-OS image updates from the distributor, just like iOS or Android.
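Fedora's atomic variants give a feel for that model today (a sketch of the analogous rpm-ostree workflow, not of KDE Linux's own update mechanism):

    # Download and stage the next OS image; applied atomically on reboot
    rpm-ostree upgrade

    # Show the booted and the staged deployments
    rpm-ostree status

    # Changed your mind? Boot back into the previous image
    rpm-ostree rollback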
The original idea of shared libraries was that a computer system can save time and memory because they only need to be loaded once.
Is that idea dead?
Then we can throw out all these fancy packaging tools like Snap and Flatpak, all the fancy half-done COW filesystems like Btrfs, all the fancy hidden-delta-syncing stuff like OSTree, and just ship one honking great binary in a single file that works on anything, no matter the libc, so it even works on musl or whatever.
Ha ha, only serious.
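Taken literally, that honking great binary is just static linking (a sketch, assuming the musl-gcc wrapper from your distro's musl-tools package):

    # A trivial program, statically linked against musl
    printf 'int main(void){return 0;}' > hello.c
    musl-gcc -static -O2 -o hello hello.c

    ldd ./hello    # -> "not a dynamic executable"
    ./hello        # runs regardless of the host's libc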
Most distros could be NixOS overlays. Don't like satan's javascript? Try Guix. Bottom line, the farther I get away from binaries discovering their dependencies at runtime, the happier I am.
Maintaining distros that are not some kind of overlay able to track the underlying base automatically is just asking for more maintenance than people will want to do, while also Balkanizing options for users: overlays can be composed, but distro hopping very much does not compose.
Nothing else compares. Why reinvent the wheel?
"KDE Linux is an “immutable base OS” Linux distro created using Arch Linux packages, but it should not be considered an “Arch-based distro”; Arch is simply a means to an end, and KDE Linux doesn’t even ship with the pacman package manager."
> KDE Linux is an immutable distribution that uses Arch Linux packages as its base, but Graham notes that it is "definitely not an 'Arch-based distro!'"
Definitely not, indeed.
The only pain point I really found developing for KDE on Debian was the switch from Qt 5 to 6, but that is always a risk and you can just compile Qt from source.
Another pain point is that their dev package manager doesn't have a convenient way to target library/package branches. So you can spend a fair amount of time waiting for builds to fail and then passing the library or package version into the config file. Very tedious, and it no doubt cost me a lot of time when trying to build on top of Akonadi, for example.
Latest as in "lagging for weeks while people in Ubuntu eat the bugs".
I also never said “latest” packages. That is some heavy lifting done by you.
Doesn't sound too bad for work.
> Neon has "served admirably for a decade", he said, but it "has somewhat reached its limit in terms of what we can do with it" because of its Ubuntu base. According to the wiki page, neon's Ubuntu LTS base is built on old technology and requires "a lot of packaging busywork". It also becomes less stable as time goes on, "because it needs to be tinkered with to get Plasma to build on it, breaking the LTS promise".
Mind you, even though I've been running Linux for decades, I have lost my enthusiasm for the low-level details and am happiest when I can use apt for everything and have the OS manage dependencies and updates. I see a lot of negative comments about Flatpak, and my own experiences haven't been great, so I don't know whether it is comprehensively good and will solve issues like low-level drivers (GPUs).
But hey, more power to them.
And of course the distros end up sharing the bulk of the application packages - originally a differentiator between the classic distros - via e.g. Flatpak/Flathub.
One reason we're doing KDE Linux is that if you look at the growth opportunities KDE has had in recent years, a lot of that has come from our hardware partners, e.g. Slimbook, Tuxedo, Framework and others. They've generally shipped KDE Neon, which is Ubuntu-based but has a few real engineering and stability challenges that have been difficult to overcome. KDE Linux is partly a lessons-learned project about how to do an OEM offering correctly (with some of the lessons coming out of the SteamOS effort, which also ships Plasma), and is also pushing along the development of various out-of-the-box experience components, e.g. the post-first-boot setup experience and things like that.
And Kalpa is that, just with Plasma as the DE.
/s
Distributions are literally the worst thing about Linux - and by worst I really mean it, with as much disgust and hate as possible, like one feels toward a literal or social parasite.
Linux distros provide little to no value (after all, these people just package software); they are just vehicles for petty losers to build their own fiefdoms, where they can be rulers. They (and the people who run them) are acid on the soul; they poison the spirit of openness and sharing by controlling who gets to use what.
Their existence was always political, and the power they wielded over who gets to use and see your software was stomach-churningly disproportionate to the value they provided.
Much like petty internet forums with pathetic power-tripping mods, a given Linux distro's maintainers get to decide that you, the dear programmer, the actual creator of value, get to have your work judged, and your right to deliver your software to users decided, by a distro maintainer: a petty tyrant who might not have the time, or might have some weird mental hangup about shipping your software. And even if they do ship it, they might fuck up your package, and the distro-crafted bugs will reflect badly on you.
I can shit on Microsoft and Apple all I want and it'll never impede my ability to deliver software to my users.
This is why open source failed on the desktop, and why we have three orders of magnitude more open-source zealots and ignorant believers than actual programmers who work on useful stuff.
It's why no one with actual self-respect builds software for the Linux desktop of their own free will, and why garbage dumps and bugs and missing features persist for decades.
Imagine the humiliating process it takes for a dev to ship a package on Linux: first you have to parlay with maintainers to actually include your stuff. Then they add a version that's at best half a year out of date, to fit their release cadence. You're forced to use their vendored and patched libraries, which are made bespoke for their use cases, get patched for the five apps they care about, and can break your stuff at the drop of a hat.
And no, you can't ship your own versions, because they'll insta reject your package.
This is literal Windows 98 DLL hell, but Microsoft was at least a for-profit company you could complain to, and they actually had a financial stake in making sure users' software worked. Not so with Linux distros; they just wanna be in charge and tell everyone what they get to use.
Then you have
First, Ubuntu and Snap should burn in hell. Much like their other efforts, they made a universal system that's hated by everyone and used by no one except them, and they keep pushing it with their trademark dishonest tactics copied from other dishonest vendors; even if you get rid of the excrement that is Snap, they keep reinstalling it via updates.
Flatpak was meant to work the way a reasonable package manager would: you assume a stable OS base, and you demand and provide that, full stop. This is how Windows and macOS have worked forever, and it doesn't even occur to devs that people using those OSes will have trouble running their software.
So essentially people are abandoning the memory/speed efficiency of the .so ecosystem and seeking exe/msi-style convenience... You know... a dump of legacy DLL/static-.so-snapshot versions with endless CVEs no one will ever be able to completely fix or verify.
Should be fun, and the popcorn is waiting =3
They also gain a substantial amount of security by being sandboxed by default, unlike the majority of native packages.
An outdated package library relies on people understanding/tracking the complete OS scope of dependencies, and that is infeasible for a small team.
If someone wants in... they will get in eventually... but faster on a NERF'd Arch install. =3
That is exactly the strong point of Flatpaks. It's a lot easier to use a toggle in a GUI for permissions than to write whole new profiles. Not to mention that many people even disable SELinux because it is difficult.
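The same toggles are scriptable too (a sketch; the app ID is illustrative, and the GUI tools like Flatseal drive the same mechanism):

    # Revoke home-directory access for one app, per-user
    flatpak override --user --nofilesystem=home org.mozilla.firefox

    # Inspect what an app is currently allowed to touch
    flatpak info --show-permissions org.mozilla.firefox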
>An outdated package library relies on people understanding/tracking the complete OS
It takes zero understanding to copy-paste an outdated-package warning and report it to the repo listed on Flathub. It explicitly tells you as much.
But thanks for trying to post actual relevant data on the topic. =3
"Popcorn Music Video" (The Muppets)
1. Current release applications on deprecated OS (Mostly good)
2. Deprecated applications on current OS (Mostly bad)
The Windows-style packaging architecture introduces more problems than it solves. It's fine for running something like Steam games, single-shot application instances using 95% of system resources each power cycle, but folks could also just stick with Windows 11 if convenience and security theater are their preference.
Some people probably won't notice the issues, but it depends on what they do. Arch Linux itself is a pretty awesome distro for lean systems. =3
Source? There is no measurable energy or efficiency difference, at least for Flatpak, on any semi-recent hardware. I know that Snaps do take a couple of seconds longer at first start.
I prefer Flatpaks for proprietary and internet-facing applications because of their easy sandboxing capabilities. There is also the advantage, on Arch Linux, of not needing to do a full system update for a single application.
https://tldp.org/HOWTO/Program-Library-HOWTO/shared-librarie...
Getting into why the community argued for years while Debian brought up version-controlled .deb packaging is a long, dramatic conversation. Some people liked their tarball mystery binaries, and the .so library trend started more as a contest to see how much people could squeeze out of a resource-constrained machine.
In a single unique application running context, the power of a cached .so reference count is less relevant, as a program built with shared objects may reuse many resources that other programs (or prior instances of itself) have likely already loaded.
> ldd --verbose /usr/bin/bash
> ldd --verbose /usr/bin/cat
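To see that reuse on a live system rather than via ldd (a sketch; the libc path is distro-specific and illustrative here):

    # Shared objects actually mapped by the current shell
    awk '/\.so/ { print $NF }' /proc/$$/maps | sort -u

    # Every process currently mapping glibc
    fuser -v /usr/lib/x86_64-linux-gnu/libc.so.6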
Containerization or sandboxing is practically meaningless when punching holes for GPU, network, media, and HMI devices. Best of luck =3
Many applications don't need those permissions, and even the ones that do will be much more secure than having full userspace access by default.
Someone could exploit the system to gain more access, versus someone not needing to do anything because they have full access by default. It's like arguing you don't need a root password because sudo is insecure anyway.
Qubes, Gentoo, and FreeBSD are all a better place to start if you are interested in this sort of research. Best of luck =3
Some programs take a huge memory and performance hit on non-Linux machines. =3
You're implying without stating it (or providing any evidence) that programs perform worse when statically linked than when assembled out of ELF DSOs, even when each of those DSOs has a single user.
That makes no technical sense. Perhaps you meant to make a different point?
A 34 MB statically built binary will cost that amount of I/O for every instance on a system that hasn't previously cached that specific program. It will also take up that full amount of RAM while loaded, every single time it runs.
Inefficient design, but it works fine on other, less performant OSes =3
I've forgotten how to count that low.
Also, static programs are demand paged like anything else. Files aren't loaded as monoliths in either case. Plus, static linking enables better dead code elimination and devirtualization than is possible with an AOT-compiled and dynamically linked setup, which usually more than makes up for the text segments of shared dependencies having been pre-loaded.
I'm not sure you have enough technical depth to make confident assertions about linking and loading performance.
> =3
The "blowing smoke" emoticon isn't helping your argument.
If .so reuse is low, or the code is terrible... it won't matter much. Best of luck =3
Absolutely insane suggestion.
Meanwhile there are issues that haven't been solved for months; the latest Plasma version has barely any decent themes (the online community theme submissions seem to be rife with spam), Discover is not really useful and needs curation, and settings and configuration options are scattered everywhere, which is great for the average power user but makes it hard to know what you can tweak without being overwhelmed. Flatpak is great, but it really needs improving, more TLC, and work towards cleaning up. It's looking more and more like the Android app store every day.
KDE needs to stop trying to be everything to everyone and start getting a little more opinionated. I'd rather have a few well maintained components of a DE than many components that are no better than barely polished turds.
In any case, it's my favorite DE, and each and every KDE developer is an absolute legend in my mind.
A lot of the manpower working on this previously worked on KDE Neon, so it's perhaps better to think of it as a lessons-learned project that doesn't, in fact, do what you worry about (and it has already attracted new contributors who also improve things elsewhere).
KDE also currently serves users (and hardware partners) with Neon, and they deserve improvements.
There's also the fact that, increasingly, new users experience KDE software as Flatpaks on new distros that ship Plasma by default, e.g. Bazzite and CachyOS, and it makes sense to get more developer eyeballs on this to make sure it's a good experience.
That said, Android is pretty stable, because a given Android distro typically only targets a small hardware subset. But I don't think that's the kind of Linux distro that most people contributing to FOSS want to work on.
That being said, I still think Microsoft should have developed a seamless virtualization layer by now: programs from before year X run in a microVM/WINE-like environment, with some escape hatch to kill off cruft.