I wonder how strictly they interpret behavior here given the architectural divergence?
As an example, focus-stealing prevention. In xfwm4 (and x11 generally), this requires complex heuristics and timestamp checks because x11 clients are powerful and can aggressively grab focus. In wayland, the compositor is the sole arbiter of focus, hence clients can't steal it; they can only request it via xdg-activation. Porting the legacy x11 logic involves the challenge of actually designing a new policy that feels like the old heuristic but operates on wayland's strict authority model.
This leads to my main curiosity regarding the raw responsiveness of xfce. On potato hardware, xfwm4 often feels snappy because it can run as a distinct stacking window manager with the compositor disabled. Wayland, by definition, forces compositing. While I am not concerned about Rust vs C latency (since smithay compiles to machine code without a GC), I am curious about the mandatory compositing overhead. Can the compositor replicate the input-to-pixel latency of uncomposited x11 on low-end devices, or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
> I wonder how strictly they interpret behavior here given the architectural divergence?
It's right there in the rest of the sentence (that you didn't quote all of): "... or as much as possible considering the differences between X11 and Wayland."
I'll do my best. It won't be exactly the same, of course, but it will be as close as I can get it.
> As an example, focus-stealing prevention.
Focus stealing prevention is a place where I think xfwl4 could be at an advantage over xfwm4. Xfwm4 does a great job at focus-stealing prevention, but it has to work on a bunch of heuristics, and sometimes it just does the wrong thing, and there's not much we can do about it. Wayland's model plus xdg-activation should at least make the focus-or-don't-focus decision much more consistent.
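A toy sketch of the kind of compositor-side policy this enables. Everything here is illustrative (the class, the TTL, and the method names are mine, not smithay's or any real compositor's API); the point is that with xdg-activation, "focus or don't" reduces to validating a single-use token minted from real user input, instead of guessing from timestamps.

```python
# Hypothetical compositor-side xdg-activation policy (illustrative names).
# Clients can only *request* activation with a token; the compositor
# decides, and tokens are single-use and tied to genuine user input.
import time

class ActivationPolicy:
    TOKEN_TTL = 5.0  # seconds; an assumed policy knob, not a spec value

    def __init__(self):
        # token -> (issue time, whether it came from real user input)
        self.tokens = {}

    def issue_token(self, token, from_user_input):
        self.tokens[token] = (time.monotonic(), from_user_input)

    def request_activation(self, token):
        issued = self.tokens.pop(token, None)  # single-use: consume it
        if issued is None:
            return False                       # unknown or replayed token
        issue_time, from_input = issued
        if not from_input:
            return False                       # not tied to a user action
        return time.monotonic() - issue_time < self.TOKEN_TTL
```

The real protocol also carries an input-event serial and the requesting app id along with the token, but the shape of the decision is the same: the compositor holds all the authority.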
> I am curious about the mandatory compositing overhead. Can the compositor replicate the input-to-pixel latency of uncomposited x11 on low-end devices or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
I'm not sure yet, but I suspect your fears are well-founded here. On modern (and even not-so-modern) hardware, even low-end GPUs should be fine with all this (on my four-year-old laptop with Intel graphics, I can't tell the difference performance-wise with xfwm4's compositor on or off). But I know people run Xfce/X11 on very-not-modern hardware, and those people may unfortunately be left behind. But we'll see.
The compositing tax is just waiting for vsync; unless your machine is, like, a Pentium Classic, compositing itself isn't a problem.
Naturally, this kind of language island creates some attrition regarding build tooling, integration with the existing ecosystem, and who is able to contribute to what.
So let's see how it evolves. Even with my C bashing, I was a much happier XFCE user than I was with GNOME and GJS all over the place.
It is not the performance bottleneck people seem to believe.
Implementation matters, including proper use of JIT/AOT toolchains.
That's the easiest way you can win any argument on gnome. You're going straight for the nuclear option.
I think this is ultimately correct. The compositor will have to render a frame at some point after the VBlank signal, and it will need to render with it the buffers on-screen as of that point, which will be from whatever was last rendered to them.
This can be somewhat alleviated, though. Both KDE and GNOME have been getting progressively more aggressive about "unredirecting" surfaces into hardware accelerated DRM planes in more circumstances. In this situation, the unredirected planes will not suffer compositing latency, as their buffers will be scanned out by the GPU at scanout time with the rest of the composited result. In modern Wayland, this is accomplished via both underlays and overlays.
There is also a slight penalty to the latency of mouse cursor movement that is imparted by using atomic DRM commits. Since using atomic DRM is very common in modern Wayland, it is normal for the cursor to have at least a fraction of a frame of added latency (depending on many factors.)
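A sketch of the per-surface decision involved in unredirecting to a plane. The checks and names here are illustrative, not taken from KDE or GNOME; real compositors test buffer formats and modifiers against what the plane advertises, and fall back to compositing whenever anything doesn't fit.

```python
# Illustrative direct-scanout ("unredirection") check a compositor might
# run per surface, per frame. Real logic is far more involved.
def try_direct_scanout(planes_free, format_supported, needs_effects):
    """True if the surface's buffer can bypass composition via a DRM plane."""
    if needs_effects:           # blur, scaling, transforms force compositing
        return False
    if not format_supported:    # plane must accept this format/modifier
        return False
    return planes_free > 0      # a spare overlay/underlay plane must exist
```

A surface promoted this way skips the composited frame entirely, so its buffer doesn't pay the compositing latency described above.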
I'm of two minds about this. One, obviously it's sad. The old hardware worked perfectly and never had latency issues like this. Could it be possible to implement Wayland without full compositing? Maybe, actually. But I don't expect anyone to try, because let's face it, people have simply accepted that we now live with slightly more latency on the desktop. But then again, "old" hardware is now hardware that can more often than not, handle high refresh rates pretty well on desktop. An on-average increase of half a frame of latency is pretty bad with 60 Hz: it's, what, 8.3ms? But half a frame at 144 Hz is much less at somewhere around 3.5ms of added latency, which I think is more acceptable. Combined with aggressive underlay/overlay usage and dynamic triple buffering, I think this makes the compositing experience an acceptable tradeoff.
What about computers that really can't handle something like 144 Hz or higher output? Well, tough call. I mean, I have some fairly old computers that can definitely handle at least 100 Hz very well on desktop. I'm talking Pentium 4 machines with old GeForce cards. Linux is certainly happy to go older (though the baseline has been inching up there; I think you need at least Pentium now?) but I do think there is a point where you cross a line where asking for things to work well is just too much. At that point, it's not a matter of asking developers to not waste resources for no reason, but asking them to optimize not just for reasonably recent machines but also to optimize for machines from 30 years ago. At a certain point it does feel like we have to let it go, not because the computers are necessarily completely obsolete, but because the range of machines to support is too wide.
Obviously, though, simply going for higher refresh rates can't fix everything. Plenty of laptops have screens that can't go above 60 Hz, and they are forever stuck with a few extra milliseconds of latency when using a compositor. It's not ideal, but what are you going to do? Compositors offer many advantages; it seems straightforward to design for a future where they are always on.
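For reference, the half-frame arithmetic used above, written out as a snippet:

```python
# Average added latency for a compositor that latches input once per
# refresh: input arrives uniformly at random within the refresh period,
# so the mean wait is half a period.
def avg_added_latency_ms(refresh_hz):
    return 0.5 * 1000 / refresh_hz

# 60 Hz -> ~8.3 ms, 144 Hz -> ~3.5 ms, 240 Hz -> ~2.1 ms
```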
I’m always a little bewildered by frame rate discussions. Yes, I understand that more is better, but for non-gaming apps (e.g. “productivity” apps), do we really need much more than 60 Hz? Yes, you can get smoother fast scrolling with higher frame rate at 120 Hz or more, but how many people were complaining about that over the last decade?
60Hz is actually a downgrade from what people were used to. Sure, games and such struggled to get that kind of performance, but CRT screens did 75Hz/85Hz/100Hz quite well (perhaps at lower resolutions, because full-res 1200p sometimes made text difficult to read on a 21 inch CRT, with little benefit from the added smoothness as CRTs have a natural fuzzy edge around their straight lines anyway).
There's nothing about programming or word processing that requires more than maybe 5 or 6 fps (very few people type more than 300 characters per minute anyway) but I feel much better working on a 60 fps screen than I do a 30 fps one.
Everyone has different preferences, though. You can extend your laptop's battery life by quite a bit by reducing the refresh rate to 30Hz. If you're someone who doesn't really mind the frame rate of their computer, it may be worth trying!
When rendering a full frame at once and then displaying it, a modern screen is not only able to be more consistent in timing, it might be able to display the full frame faster than a CRT. Let's say 60Hz, and the frame is rendered just in time to start displaying. A CRT will take 16 milliseconds to do scanout. But if you get a screen that supports Quick Frame Transport, it might send over the frame data in only 3 milliseconds, and have the entire thing displayed by millisecond 4.
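The timing in that comparison, written out (the 3 ms burst is the comment's illustrative figure, not a measured value):

```python
# When the last line of a frame reaches the screen, at 60 Hz.
VSYNC_PERIOD = 1000 / 60        # ms per refresh

crt_last_line = VSYNC_PERIOD    # a CRT rasters over the whole period: ~16.7 ms
qft_transfer = 3.0              # ms, illustrative Quick Frame Transport burst
qft_last_line = 1.0 + qft_transfer  # whole frame displayed by ~ms 4
```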
Even phones have moved in this direction, and it's immediately noticeable when using it for the first time.
I'm now on 240hz and the effect is very diminished, especially outside of gaming. But even then I notice it, although stepping down to 144 isn't the worst. 60, though, feels like ice on your teeth.
Quite a few. These articles tend to make the rounds when it comes up: https://danluu.com/input-lag/ https://lwn.net/Articles/751763/ Perception varies from person to person, but going from my 144hz monitor to my old 60hz work laptop is so noticeable to me that I switched it from a composited wayland DE to an X11 WM.
I dunno. It does seem a bit odd, because who was thinking about the framerates of, like, desktops running productivity software, for the last couple decades? I guess I assumed this would never be a problem.
There are two things that typically impact mouse cursor latency, especially with regards to Wayland:
- Software-rendering, which is sometimes used if hardware cursors are unavailable or buggy for driver/GPU reasons. In this case the cursor will be rendered onto the composited desktop frame and thus suffer compositor latency, which is tied to refresh rate.
- Atomic DRM commits. Using atomic DRM commits, even hardware-rendered cursors can suffer additional latency. In this case, the added latency is not necessarily tied to frame times or refresh rates. Instead, it's tied to when during the refresh cycle the atomic commit is sent; specifically, how close to the deadline. I think in most cases we're talking a couple milliseconds of latency. It has been measured before, but I cannot find the source.
Wayland compositors tend to use atomic DRM commits, hence a slightly more laggy mouse cursor. I honestly couldn't tell you if there is a specific reason why they must use atomic DRM, because I don't have knowledge that runs that deep, only that they seem to.
However, I do think that high refresh rates feel very nice to use even if they are not strictly necessary. I consider it a nice luxury.
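A toy model of the atomic-commit cursor latency listed above. The deadline value and the function are my simplification, not a measurement: the point is just that motion arriving after the per-frame commit deadline has to wait an extra refresh.

```python
# How long until a cursor move can appear on screen, if its position is
# latched into a per-frame atomic commit (all values in ms, illustrative).
def cursor_added_latency(move_time, vsync_period, commit_deadline):
    """move_time: ms into the refresh cycle when the motion arrived.
    commit_deadline: how long before vblank the commit must be submitted."""
    cutoff = vsync_period - commit_deadline
    if move_time <= cutoff:
        return vsync_period - move_time       # makes this frame's commit
    return 2 * vsync_period - move_time       # missed it; waits a frame
```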
A new display is usually going to be cheaper than a new computer. Displays which can actually deliver 240 Hz refresh rates can be had for under $200 on the lower end, whereas you can find 180 Hz displays for under $100, brand new. It's cheap enough that I don't think it's even terribly common to buy/sell the lower end ones second-hand.
For laptops, well, there is no great solution there; older laptops with 60 Hz panels are stuck with worse latency when using a compositor.
They aren't as common now, but when making a list of screens to replace my current one, I am limiting myself to IPS panels and quite a few of the modern options are still 60hz.
Of course, this isn't a huge deal to me. The additional latency is not an unusable nightmare. I'm just saying that if you are particularly latency sensitive, it's something that you can affordably mitigate even when using a compositor. I think most people have been totally fine eating the compositor latency at 60 Hz.
I think I know what "frame perfect" means, and I'm pretty sure that you've been able to get that for ages on X11... at least with AMD/ATi hardware. Enable (or have your distro enable) the TearFree option, and there you go.
I read somewhere that TearFree is triple buffering, so (if true) it's my (perhaps mistaken) understanding that this adds a frame of latency.
True triple buffering doesn't add one frame of latency, but since it enforces only whole frames be sent to the display instead of tearing, it can cause partial frames of latency. (It's hard to come up with a well-defined measure of frame latency when tearing is allowed.)
But there have been many systems that abused the term "triple buffering" to refer to a three-frame queue, which always does add unnecessary latency, making it almost always the wrong choice for interactive systems.
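A small sketch of that distinction, assuming a renderer that outpaces a 60 Hz display: mailbox-style ("true") triple buffering shows only partial-frame staleness, while a 3-deep FIFO queue always shows a frame that has waited behind two others.

```python
VSYNC = 1000 / 60   # ms per refresh
RENDER = 5.0        # ms per rendered frame; renderer outpaces the display

def mailbox_avg_staleness(n_vblanks=1000):
    """Average age of the newest finished frame at each vblank (mailbox)."""
    total = 0.0
    for k in range(1, n_vblanks + 1):
        t = k * VSYNC
        newest = (t // RENDER) * RENDER   # latest frame done by this vblank
        total += t - newest
    return total / n_vblanks

# In a 3-deep FIFO, a frame sits behind two others before scanout.
fifo3_staleness = 3 * VSYNC               # ~50 ms at 60 Hz
```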
I ask because just a few minutes ago, I ran VRRTest [0] on my dual-monitor machine and saw no screen tearing on either monitor. Because VRR is disabled in multi-monitor setups, I saw juddering on both monitors when I commanded VRRTest render rates that weren't a multiple of the monitor's refresh rate, but no tearing at all.
My setup:
* Both monitors hooked up via DisplayPort
* Radeon 9070 (non-XT)
* Gentoo Linux, running almost all ~amd64 packages.
* x11-base/xorg-server-21.1.20
* x11-drivers/xf86-video-amdgpu-25.0.0-r1
* x11-drivers/xf86-video-ati-22.0.0
* sys-kernel/gentoo-sources-6.18.5
* KDE and Plasma packages are either version 6.22.0 or 6.5.5. I CBA to get a complete list, as there are so many relevant packages.
Yeah. I'm actually quite interested in hearing what "workarounds" and/or misbehavior you're talking about. 'amdgpu(4)' says this about the TearFree property:
Option "TearFree" "boolean"
       Set the default value of the per-output ’TearFree’ property,
       which controls tearing prevention using the hardware page
       flipping mechanism. TearFree is on for any CRTC associated
       with one or more outputs with TearFree on. Two separate
       scanout buffers need to be allocated for each CRTC with
       TearFree on. If this option is set, the default value of
       the property is ’on’ or ’off’ accordingly. If this option
       isn’t set, the default value of the property is ’auto’,
       which means that TearFree is on for rotated outputs, outputs
       with RandR transforms applied, for RandR 1.4 secondary
       outputs, and if ’VariableRefresh’ is enabled; otherwise
       it’s off.
The explicit mention that ’auto’ enables TearFree only for secondary outputs and rotated and/or transformed outputs if ’VariableRefresh’ is disabled seems to directly contradict what I think you're saying. And if ’auto’ enables TearFree on secondary displays, my recommendation of ’on’ certainly also does. But, yeah. I await clarification.

Well, the answer is just no: Wayland has been consistently slower than X11, and nothing running on top of it can really get around that.
It's specifically about cursor lag, but I think that's because it's more difficult to experimentally measure app rendering latency.
Wayland is a specification; it can't be "faster" than other options. That's like saying JSON is 5% slower than Word.
And as for the implementations being slower than X, that also doesn't reflect reality.
https://gitlab.xfce.org/xfce/xfwm4/-/blob/master/settings-di...
Personally, I'm a big proponent of Wayland and not a big Rust detractor, so I don't see any problem with this. I do, however, wonder how many long-time XFCE fans and the folks who donated the money funding this will feel about it. To me the reasoning is solid: Wayland appears to be the future, and Rust is a good way to help avoid many compositor crashes, which are a more severe issue in Wayland (though they don't necessarily need to be fatal, FWIW.) Still, I perceive a lot of XFCE's userbase to be more "traditional" and conservative about technologies, and likely to be skeptical of both Wayland and Rust, seeing them as complex, bloated, and unnecessary.
Of course, if they made the right choice, it should be apparent in relatively short order, so I wish them luck.
Very long time (since 2007) XFCE user here. I don't think this is accurate. We want things to "just work" and not change for no good reason. Literally no user cares what language a project is implemented in, unless they are bored and enjoy arguing about random junk on some web forum. Wayland has the momentum behind it, and while there will be some justified grumbling because change is always annoying, the transition will happen and will be fairly painless as native support for it continues to grow. The X11 diehards will go the way of the SysV-init diehards; some weird minority that likes to scream about the good old days on web forums but really no one cares about.
There are good reasons to switch to Wayland, and I trust the XFCE team to handle the transition well. Great news from the XFCE team here, I'm excited for them to pull this off.
> The X11 diehards will go the way of the SysV-init diehard
I hope you are not conflating anti-systemd people with SysV-init diehards? As far as I can see, very few people want to keep SysV init, but there are lots who think systemd is the wrong replacement, and those primarily because it's a lot more than an init system.
In many ways the objections are opposite: people hate systemd for doing more than init, and people hate Wayland for doing less than X.
Edit: corrected "Wayland" to "XFCE" in first sentence!
Systemd is creating the same kind of monolith monoculture that Xorg represented. Wayland is far more modular.
Regardless of your engineering preferences, rejecting change is the main reason to object to both.
Not sure I agree here, assuming you mean "... than X11". With Wayland, you put your display code, input-handling code, compositor code, session-handling code, and window-management code all in the same process. (Though there is a Wayland protocol being worked on to allow moving the WM bits out-of-process.)
With X11, display and input-handling are in the X server, and all those other functions can be in other processes, communicating over standard interfaces.
That's an implementation detail. You can absolutely separate one out from the other and do IPC - it just doesn't make much sense to do so for most of these.
The only one where I see it making sense is the window manager, which can simply be an extension/plugin either in a scripting language or in wasm or whatever.
I do dislike systemd for two reasons. One is exactly because it's a monolith and, in effect, an extension of the OS. The other is the attitude of the developers, which becomes very evident if you browse the issues.
And in fact has been: https://github.com/wayland-transpositor/wprs
Our OpenBSD packager has already said in our Matrix channel that he'll be testing here and there in order to keep me honest ;)
It would have been much easier and cost-effective to use wlroots, which has a solid base and has ironed out a lot of problems. On the other hand, Cosmic devs are actively working on it, and I can see it getting better gradually, so you get some indirect manpower for free.
I applaud the choice to not make another core Wayland implementation. We now have Gnome, Plasma, wlroots, weston, and smithay as completely separate entities. Dealing with low-level graphics is an extremely difficult topic, and every implementor encounters the same problems and has to come up with independent solutions. There's so much duplicated effort. I don't think people getting into it realize how deceptively complex and how many edge-cases low-level graphics entails.
> using smithay, the brand new Rust/Wayland library
Fun fact: smithay is older than wlroots, if you go by commit history (January 2017 vs. April 2017).
> It would have been much easier and cost-effective to use wlroots
As a 25+ year C developer, and a ~7-year Rust developer, I am very confident that any boost I'd get from using wlroots over smithay would be more than negated by debugging memory management and ownership issues. And while wlroots is more batteries-included than smithay, already I'm finding that not to be much of a problem, given that I decided to base xfwl4 on smithay's example compositor, and not write one completely from scratch.
I have done it and it left a bad taste in my mouth. Once you're doing interop with C you're just writing C with Rust syntax topped off with a big "unsafe" dunce cap to shame you for being a naughty, lazy programmer. It's unergonomic and you lose the differentiating features of Rust. Writing safe bindings is painful, and using community written ones tends to pull in dozens of dependencies. If you're interfacing a C library and want some extra features there are many languages that care far more about the developer experience than Rust.
That's bizarrely emotional. It's a language feature that allows you to do things the compiler would normally forbid you from doing. It's there because it's sometimes necessary or expedient to do those things.
You just have to get over that. `unsafe` means "compiler cannot prove this to be safe." FFI is unsafe because the compiler can't see past it.
> Once you're doing interop with C you're just writing C with Rust syntax
Just like C++, or go, or anything else. You can choose to wrap it, but that's just indirection for no value imo. I honestly hate seeing C APIs wrapped with "high level" bindings in C++ for the same reason I hate seeing them in Rust. The docs/errors/usage are all in terms of the C API and in my code I want to see something that matches the docs, so it should be "C in syntax of $language".
I upvoted your general response but this line was uncalled for. No need to muddy the waters about X11 -> Wayland with the relentlessly debated, interminable, infernal init system comparison.
GNOME, on the other hand, practically wants everything running on the exact same software stack, so it requiring a package means nothing.
This is only true most of the time - some languages have properties which "leak" through to the user.
Like if it's a Java process, then sooner or later the user will have to mess with launchers and the -Xmx option.
Or if it's a process which has lots of code and must not crash, language matters. C or C++ would segfault on any sneeze. Python or Ruby or even Java would stay alive (unless they run out of memory, or hang due to a logic bug).
I think this is true but also maybe not true at the same time.
For one thing, programming languages definitely come with their own ecosystems and practices that are common.
Sometimes, programming languages can be applied in ways that basically break all of the "norms" and expectations of that programming language. You can absolutely build a bloated and slow C application, for example, so just using C doesn't make something minimal or fast. You can also write extremely reliable C code; sqlite is famously C after all, so it's clearly possible, it just requires a fairly large amount of discipline and technical effort.
Usually though, programs fall in line with the norms. Projects written in C are relatively minimal, have relatively fewer transitive dependencies, and are likely to contain some latent memory bugs. (You can dislike this conclusion, but if it really weren't true, there would've been a lot less avenues for rooting and jailbreaking phones and other devices.)
Humans are clearly really good at stereotyping, and pick up on stereotypes easily without instruction. Rust programs have a certain "feel" to them; this is not delusion IMO, it's likely a result of many things, like the behaviors of clap and anyhow/Rust error handling leaking through to the interface. Same with Go. Even with languages that don't have as much of a monoculture, like say Python or C, I think you can still find that there are clusters of stereotypes of sorts that can predict program behavior/error handling/interfaces surprisingly well, and that likely line up with specific libraries/frameworks. It's totally possible to, for example, make a web page where there are zero directly visible artifacts of what frameworks or libraries were used to make it. Yet despite that, when people just naturally use those frameworks, there are little "tells" that you can pick up on a lot of the time. You ever get the feeling that you can "tell" some application uses Angular, or React? I know I have, and what stuns me is that I am usually right (not always; stereotypes are still only stereotypes, after all.)
So I think that's one major component of why people care about the programming language that something is implemented in, but there's also a few others:
- Resources required to compile it. Rust is famously very heavy in this regard; compile times are relatively slow. Some of this will be overcome with optimization, but it still stands to reason that the act of compiling Rust code itself is very computationally expensive compared to something as simple as C.
- Operational familiarity. This doesn't come into play too often, but it does come into play. You have to set the RUST_BACKTRACE environment variable to get Rust to output full backtraces, for example. And while it's not part of Rust itself, the RUST_LOG environment variable is used by multiple libraries in the ecosystem.
- Ease of patching. Patching software written in Go or Python, I'd argue, is relatively easy. Rust, definitely can be a bit harder. Changes that might be possible to shoehorn in in other languages might be harder to do in Rust without more significant refactoring.
- Size of the resulting programs. Rust and Go both statically link almost all dependencies, and don't offer a stable ABI for dynamic linking, so each individual Rust binary will contain copies of all of their dependencies, even if those dependencies are common across a lot of Rust binaries on your system. Ignoring all else, this alone makes Rust binaries a lot larger than they could be. But outside of that, I think Rust winds up generating a lot of code, too; trying to trim down a Rust wasm binary tells you that the size cost of code that might panic is surprisingly high.
So I think it's not 100% true to say that people don't care about this at all, or that only people who are bored and like to argue on forums ever care. (Although admittedly, I just spent a fairly long time typing this to argue about it on a forum, so maybe it really is true.)
Kids these days... trolling used to require what's now called effortposts.
At best, it seems like a huge diversion of time and resources, given that we already had a working GUI. (Maybe that was the intention.) The arguments for it have boiled down to "yuck code older than me" from supposed professionals employed by commercial Linux vendors to support the system, and it doesn't have Android-like separation — a feature no one really wants.
The mantra of "it's a protocol" isn't very comforting when it lacks so many features that necessitate workarounds, leading to fragmentation and general incompatibility. There are plenty of complicated, bad protocols. The ones that survive are inherently "simple" (e.g., SMTP) or "trivial" (e.g., TFTP). Maybe there will be a successor to Wayland that will be the SMTP to its X.400, but to me, Wayland seems like a past compromise (almost 16 years of development) rather than a future.
Furthermore, all of these options can be enabled individually on multiple screens on the same system and still offer a good mixed-use environment. As someone who has been using HiDPI displays on Linux for the past 7 years, Wayland was such a game changer for how my system works.
Libreoffice includes support for gtk3, gtk4, Qt6, and other backends: https://github.com/LibreOffice/core/blob/master/vcl/README.m...
Maybe you need to try wayland with an alternative backend?
Development of X11 has largely ended and the major desktop environments and several mainstream Linux distributions are likewise ending support for it. There is one effort I know of to revive and modernize X11 but it’s both controversial and also highly niche.
You don’t have to like the future for it to be the future.
It's nowhere near the modeline hell of XFree86.
Making manual changes in 2015+ for a protocol released in 1987: that's a long time to still have rough edges.
Until recently I just switched back to X when I had problems with Wayland. The last time, the issue fixed itself on the next update.
And sadly, Wayland decided to just not learn any lessons from X11, and it shows.
Not having enough maintainers, and some design issues that can't be solved are both reasons why X was left largely unmaintained.
There were a lot of MRs with valuable changes however Red Hat wanted certain features to be exclusive to Wayland to make the alternative more appealing to people so they actively blocked these MRs from progressing.
> someone could have forked the project and be very happy with all the changes, right?
That's precisely what happened: one of the biggest contributors and maintainers got bullied out of the project by Red Hat for trying to make X11 work, and decided to create X11Libre (https://github.com/X11Libre/xserver), which is now getting all these fancy features that previously were not possible to get into X11 due to Red Hat actively sabotaging the project in their attempt to turn Linux into their own corporate equivalent of Windows/macOS.
This "blessed successor" without any detrimental effects as a main goal: that's pretty close to my understanding of the project. IIRC some X people were involved from the beginning, right?
I guess we’ll see if that development is ever applied to the main branch, or if it supplants the main X branch. At the moment, though… if that’s the future of X, then it is fair to be a little bit unsure if it is going to stick, right?
There is a reason the lead developer of X11Libre left the Xorg project; they did not like broken code: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1760 (and many more if you search)
The OpenBSD people are still working on Xenocara, and it introduces actual security via pledge system calls.
Funnily enough, my first foray into these sorts of operating systems was BSD, but it was right when I was getting started. So I don't really know which of my troubles were caused by BSD being tricky (few, probably), and which were caused by my incompetence at the time (most, probably). One of these days I'll try it again…
Also, by "commercial linux vendors", you do realize Wayland is directly supported (AFAIK, correct me if wrong) by the largest commercial Linux contributors, Red Hat and Canonical. They're not simply 'vendors'.
I don't know if others have experienced this, but the biggest bug I see in Wayland right now is that sometimes, on an external monitor after waking the computer, a full-screen Electron window will crash the display (i.e. the display disconnects).
I can usually fix this by switching to another desktop and then logging out and logging back in.
Such a strange bug because it only affects my external monitor and only affects electron apps (I notice it with VSCode the most but that's just cause I have it running virtually 24/7)
If anyone has encountered this issue and figured out a solution, I'm all ears.
That's like saying "the website doesn't work", without saying what browser you are using.
It's certainly a feature I want. Pretty sure I'm not alone in wanting isolation between applications--even GUI ones. There's no reason that various applications from various vendors shouldn't be isolated into their own sandboxes (at least in the common case).
You mean like the code that the Manchester Baby, ENIAC, the Manchester Mark 1, EDSAC and EDVAC ran? Or maybe Plankalkül...
- Having a single X server that almost everyone used led to ossification. Having Wayland explicitly be only a protocol is helping to avoid that, though it comes with its own growing pains.
- Wayland-the-Protocol (sounds like a Sonic the Hedgehog character when you say it like that) is not free of cruft, but it has been forward-thinking. It's compositor-centric, unlike X11 which predates desktop compositing; that alone allows a lot of clean-up. It approaches features like DPI scaling, refresh rates, multi-head, and HDR from first principles. Native Wayland enables a much better laptop docking experience.
- Linux desktop security and privacy absolutely sucks, and X.org is part of that. I don't think there is a meaningful future in running all applications in their own nested X servers, but I also believe that trying to refactor X.org to shoehorn in namespaces is not worth the effort. Wayland goes pretty radical in the direction of isolating clients, but I think it is a good start.
I think a ton of the growing pains with Wayland come from just how radical the design really is. For example, there is deliberately no global coordinate space. Windows don't even know where they are on screen. When you drag a window, it doesn't know where it's going, how much it's moving, anything. There isn't even a coordinate space to express global positions, from a protocol PoV. This is crazy. Pretty much no other desktop windowing system works this way.
I'm not even bothered that people are skeptical that this could even work; it would be weird to not be. But what's really crazy, is that it does work. I'm using it right now. It doesn't only work, but it works very well, for all of the applications I use. If anything, KDE has never felt less buggy than it does now, nor has it ever felt more integrated than it does now. I basically have no problems at all with the current status quo, and it has greatly improved my experience as someone who likes to dock my laptop.
But you do raise a point:
> It feels like a regression to me and a lot of other people who have run into serious usability problems.
The real major downside of Wayland development is that it takes forever. It's design-by-committee. The results are actually pretty good (My go-to example is the color management protocol, which is probably one of the most solid color management APIs so far) but it really does take forever (My go-to example is the color management protocol, which took about 5 years from MR opening to merging.)
The developers of software like KiCad don't want to deal with this, they would greatly prefer if software just continued to work how it always did. And to be fair, for the most part XWayland should give this to you. (In KDE, XWayland can do almost everything it always could, including screen capture and controlling the mouse if you allow it to.) XWayland is not deprecated and not planned to be.
However, the Wayland developers have taken the stance of not just implementing raw tools that can be used to build various UI features, but instead implementing protocols for those specific UI features.
An example is how dragging a window works in Wayland: when a user clicks or interacts with a draggable client area, all the client does is signal that they have, and the compositor takes over from there and initiates a drag.
Another example would be how detachable tabs in Chrome work in Wayland: it uses a slightly augmented invocation of the drag'n'drop protocol that lets you attach a window drag to it as well. I think it's a pretty elegant solution.
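To make the division of authority concrete, here is a toy model in plain Python (NOT the real Wayland API; all names here are invented for illustration) of the drag handoff described above: the client merely signals that a drag started, and the compositor, which alone knows window positions, carries it out in its own private coordinate space.

```python
# Toy model of compositor-authoritative window moves. The client never
# sees or sends coordinates; only the compositor tracks positions.

class Compositor:
    def __init__(self):
        self.positions = {}           # window id -> (x, y); known only here
        self.pointer_delta = (0, 0)   # stand-in for the compositor's own input state

    def map_window(self, win_id, x, y):
        self.positions[win_id] = (x, y)

    def start_move(self, win_id):
        # Spiritually like xdg_toplevel.move(): no coordinates come from
        # the client; the compositor uses its own pointer tracking.
        x, y = self.positions[win_id]
        dx, dy = self.pointer_delta
        self.positions[win_id] = (x + dx, y + dy)

class Client:
    """A client holds a compositor handle and a window id -- nothing spatial."""
    def __init__(self, compositor, win_id):
        self.compositor = compositor
        self.win_id = win_id

    def on_titlebar_pressed(self):
        # Hand off the drag; no x/y is involved on the client side.
        self.compositor.start_move(self.win_id)

comp = Compositor()
comp.map_window("w1", 100, 100)
comp.pointer_delta = (30, -10)        # the user dragged the mouse
Client(comp, "w1").on_titlebar_pressed()
print(comp.positions["w1"])           # -> (130, 90)
```

The point of the sketch: there is simply no message in which the client could express "move me to (x, y)"; it can only trigger a compositor-driven drag.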
But that's definitely where things are stuck at. Some applications have UI features that they can't implement in Wayland. xdg-session-management, for being able to save and restore window positions, is still not merged, so there is no standard way to implement this in Wayland. ext-zones, for positioning multi-window application windows relative to each other, is still not merged, so there is no standard way to implement this in Wayland. Older techniques like directly embedding windows from other applications have some potential approaches: embedding a small Wayland compositor into an application seems to be one of the main approaches in large UI toolkits (sounds crazy, but Wayland compositors can be pretty small, so it's not as bad as it seems), whereas there is xdg-foreign, which is supported by many compositors (GNOME, KDE, Sway, but missing in Mir, Hyprland and Weston. Fragmentation!) but doesn't support every possible thing you could do in X11 (like passing an xid to mpv to embed it in your application, for example.)
I don't think it's unreasonable that people are frustrated, especially about how long the progress can take sometimes, but when I read these MRs and see the resulting protocols, I can't exactly blame the developers of the protocols. It's a long and hard process for a reason, and screwing up a protocol is not a cheap mistake for the entire ecosystem.
But I don't think all of this time is wasted; I think Wayland will be easier to adapt and evolve into the future. Even if we wound up with a one-true-compositor situation, there'd be really no reason to entirely get rid of Wayland as a protocol for applications to speak. Wayland doesn't really need much to operate; as far as I know, pretty much just UNIX domain sockets and the driver infrastructure to implement a WSI for Vulkan/GL.
I understand the frustration, but I see a lot of "it's completely useless" and "it's a regression", though to me it really sounds like Wayland is an improvement in terms of security. So there's that.
For me, this is a real reason not to want to be forced to use Wayland. I'm sure the implementation of Wayland in xfce is a long time off, and the dropping of Xwindows even further off, so hopefully this problem will have been solved by then.
Do you know if global shortcuts are solved in a satisfactory way, and if there is an easy mechanism for one application to query Wayland about other applications?
One hack I made a while ago was to bind a win+t shortcut to a script that queried the active window in the current workspace and, based on that, opened a terminal at the right filesystem location with a preferred terminal profile.
All I get from LLMs is that D-Bus might be involved in GNOME for global shortcuts, and that when registering global shortcuts in something like Hyprland, app IDs must be passed along instead of simple script paths.
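For what it's worth, the decision step of a hack like the one described above can be sketched in plain Python. Everything here is a made-up example (the title rules, the profiles, the `my-terminal` CLI); on X11 the active window title would come from a tool like `xdotool getactivewindow getwindowname`, which has no portable Wayland equivalent.

```python
# Sketch of the decision step: map the active window's title to a
# terminal profile and working directory. All rules and names below
# are invented for illustration.

RULES = [
    ("nvim", ("dev",    "~/src")),   # editing? open a terminal in ~/src
    ("ssh",  ("remote", "~")),       # remote session? plain home dir
]

def pick_profile(title):
    for needle, choice in RULES:
        if needle in title.lower():
            return choice
    return ("default", "~")

def terminal_command(title):
    profile, cwd = pick_profile(title)
    # `my-terminal` and its flags are hypothetical, for illustration only.
    return ["my-terminal", f"--profile={profile}", f"--working-directory={cwd}"]

print(pick_profile("nvim ~/src/main.rs"))   # -> ('dev', '~/src')
```

Under Wayland the hard part isn't this logic but the first step: there is no compositor-agnostic way to ask "what is the active window?", which is exactly the fragmentation the parent is running into.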
https://flatpak.github.io/xdg-desktop-portal/docs/doc-org.fr...
This should work with Hyprland provided that you are using xdg-desktop-portal-hyprland, as it does indeed have an implementation of GlobalShortcuts.
I'm not sure if this API is sufficient for your needs, or if it is too much of a pain to use. Like many Wayland things, it prescribes certain use cases and doesn't handle others. The "configure" call seems to rely on xdg-foreign-unstable-v2 support, but AFAIK Hyprland doesn't support this protocol, so I have no idea what you're supposed to do on Hyprland for this case.
I am sorry to see developers have to deal with things in a relatively unfinished state, but such is the nature of the open source desktop.
Odd. Xorg still works fine [0], and we'll see how XLibre pans out.
[0] I'm using it right now, and it's still getting updates.
They intentionally don't want you to keep using X11, and they'll keep turning up the heat on the pot until we're all boiling.
Gnome just removed the middle-click paste option. Is that because they fixed the clipboard situation on Linux, and there's a universal, unambiguous way of cut and paste that works across every application? No. It's because middle-click to paste is an "X-ism." This is just demagoguery and unserious.
They disabled it by default. You can enable it if you want.
Once again, Gentoo Linux proves (somewhat regrettably) to be one of the best Linux distros out there. OpenRC and Xorg as defaults, with SystemD and Wayland as supported options is quite a lovely way to do things.
> Gnome just removed the middle-click paste option.
Gnome removes useful things all the time. "The Gnome folks do something user-hostile just because they feel like it" isn't news; that's been going on for decades. This habit of theirs is a big reason why I've been using KDE for a very long time.
Citation needed. None of the other desktops have slowed with Wayland, and gaming is as fast, if not marginally faster, on KDE/GNOME with Wayland vs LXDE on X.
But as far as it performing worse overall, I don't think that would be expected. Compositing does lean more on hardware acceleration to provide a good experience, though, so if you compare it on a machine with no hardware-accelerated graphics against X with compositing disabled, then it really would be worse, yeah.
1st: the "enable display compositing" option - this one increases latency, as every window draw needs to go through the compositor application (in a nutshell it exchanges OpenGL textures - only synchronization messages go over the "wire").
2nd: the X server's rendering-pipeline compositor - this one comes with the modesetting (intel, amdgpu) drivers' TearFree option - almost everything inside the X11 server is in OpenGL textures, and the compositor performs direct blending to the screen (including direct scanout).
What I want to say: on modern X (there are merge requests for the Xorg server's modesetting driver; amdgpu already has this code) with TearFree enabled, you get optimal hardware acceleration by default - and with it lower latency.
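For reference, the driver-level TearFree path described above is enabled with an xorg.conf snippet along these lines (a sketch: the Identifier is arbitrary, and the option requires the amdgpu driver or a modesetting driver new enough to support it):

```
Section "Device"
    Identifier "GPU0"
    Driver     "modesetting"
    Option     "TearFree" "true"
EndSection
```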
Wayland has lots of potential, but it's far from ready to replace X11, especially in multitasking environments. XFCE is taking their time because their community is very concerned with stability.
I predict that XFCE will default to X11 until Wayland has reached broad feature parity, then default to Wayland but keep X11 support until the last vestiges of incompatibility are dealt with.
There's no reason that this wouldn't be accepted by their community, and it should be lighter weight, in the end.
With that knowledge, I'm certain that XFCE will remain lightweight. It can be done, so I feel confident that the XFCE folks will get it done.
(Instead of seeing this as "xfce jumps on bandwagon", I'm seeing it more as "bandwagon finally stable enough for xfce".)
On X, we had Xorg and that is it. But at least Xorg did a lot of the work for you.
On Wayland, you in theory have to do a lot more of the work yourself when you build a compositor. But what we are seeing is libraries emerging that do this for you (wlroots, Smithay, Louvre, aquamarine, SWC, etc). So we have this one-man project expecting to deliver a dev release in just a few months (mid-2026 is 4 months from now).
But it is not just that we have addressed the Wayland objection. This project was able to evaluate alternatives and decide that Smithay is the best fit, both for features and for language choice. As time goes on, we will see more implementations that compete with each other on quality and features. This will drive the entire ecosystem forward. That is how Open Source is supposed to work.
Such an effort to rethink the Linux desktop could have been a major project on its own, but since having something was necessitated by Wayland, all of it has become hurried and lacking in direction. Anything reminiscent of a bigger and more comprehensive project is in its initial stages at best. If Wayland has been coming for about ten years now, I'll give it another ten years until we have some kind of established, consistent desktop API for Linux again.
X11 did offer some very basic features for a desktop environment, so that programs using different toolkits could work together, and enough hooks that you could implement stuff in window managers etc. Yet there was nothing like the more complete interfaces of the desktops of other operating systems that tied everything together in a central, consistent way. So the Linux desktop interface was certainly in need of a rewrite, but the way it's happening is just disheartening.
When Apple dropped the old audio APIs of classic macOS and introduced CoreAudio, they pissed off a lot of developers, but those developers had no choice. In the GUI realm, they merely deprecated Carbon for over a decade before finally removing it, but they made it very clear that Core* was the API you should be using, and that was that.
In Linux-land, nobody has that authority. Nobody can come up with an equivalent to Core* for Linux and enforce its use. Consequently, you're going to continue to see the Qt/GTK/* splits, where the only commonality is at the lowest level of the window system (though, to Qt's credit, optionally also the event loop).
Only yesterday I was wondering how it is that my brightness keys work in my desktop environment, when /sys/class/backlight/intel_backlight/brightness is only writeable by root. The (somewhat horrifying) answer is that applications can send a request to logind over DBUS, which checks the request against opaque and arbitrarily byzantine Polkit rules, and then writes to the sysfs file on the application's behalf, which it can do because it runs as root. It's unclear quite what this achieves that simply making the file writeable by the "video" group does not, but hey at least systemd gets to be involved.
Incidentally, the correct command to change the brightness as a normal user from the command line is as follows:
busctl --timeout=1 call org.freedesktop.login1 /org/freedesktop/login1/session/self org.freedesktop.login1.Session SetBrightness ssu "backlight" "intel_backlight" <brightness>
So simple, so easy to remember, so superior to "echo <brightness> > /sys/class/backlight/intel_backlight/brightness". Google it for a fun thread on why the --timeout=1 is necessary (it won't work without it!) - although I suppose I should be thankful for that little foible, as without it the thread wouldn't exist and I would never have figured out the command in the first place.

Sure, systemd is involved in running the system on which those applications run, but the discussion was about some sort of equivalent to the unified GUI stack offered by macOS (the Core* frameworks) which are used by essentially every GUI application on that platform. Linux doesn't have that, and there's nobody in a position to force that on developers. systemd has nothing to do with this.
Systemd is certainly relevant to the Linux desktop as a whole, especially regarding logind. But there's no specific relation to GUI desktop applications that I'm aware of at least.
There are user services, but that's a separate concept.
Unfortunately there aren't enough developers to maintain all those duplicate implementations to the level users expect so a lot of features will be missing and a lot of maintainers will burn out. Not having a libcompositor remains Wayland's biggest mistake.
This is vaguely a double-edged sword. Yes, more code duplication across disparate projects - but that also allows people who _really care_ (such as the xfce team) to roll up their sleeves and do more. Any WM will only ever be as good as the X11 baseline, Wayland servers have the opportunity to compete on this front.
Although I'm probably permanently stuck with the Niri workflow, I am looking forward to seeing what the xfce developers come up with.
Yes, the stack gets you most of the way there. No, you won't be happy if you need to actually make changes to any part of that other than the top layer.
X didn't have any of that to build from. It basically was a second kernel: it was the OS that dealt with the video card, atop the actual OS. It talked to the PCI device & did everything.
Part of the glory of Wayland is that we have fantastic really good OS abstractions & drivers. When we go to make a display server, we start at such a different level now. Trying to layer X's abstractions atop is messy & ugly & painful, because it mostly inhibits devs from being able to use the hardware in neat efficient clean direct modern ways. You have to write an X extension that coexists with a patchwork of other extensions that slots into the X way, that can figure out how to leverage the hardware. With Wayland, most compositors just use the kernel objects. There's much less intermediary, much less cruft, much less wild indirection & accretion to cut a path for.
And as you beautifully state, competing libraries can decide what abstractions & techniques work for them. There's an ecosystem of ideas, a flux to optimize hone & improve, on a variety of different dimensions. The Bazaar free to find its way vs the one giant ancient Cathedral. It's just so so so good we're finally not all trapped inside.
Tl;dr: Wayland has a much higher level that it can start from. Trying to use GPUs & hardware well in X was a nightmare because X has a sea of abstractions - extensions that you had to target & develop around - making development in X a worst of both worlds: low level, but with so many high-level constructs you had to navigate through.
I only fear that this is a manifestation of a wider phenomenon where new software developers are unable to maintain software created by old software developers. If that is so, they will try to simplify the software to what they can actually maintain, and rewrite it into a form in which they can maintain it.
If I assume this is true, then all of this is annoying but actually makes sense: Wayland is simpler than X11, so people will tend to maintain Wayland-related software rather than X11-related. Rust won't let unskilled coders make certain mistakes, so from their point of view it is simpler to rewrite things in Rust.
Although, goodbye network-transparency, goodbye performance, goodbye stability. Oh well, but it's that time of the year.
Great to know there's work on the wayland support front.
Also, writing it in Rust should help bring more contributors to the project.
If you use Xfce I urge you to donate to their Open Collective:
In case you weren't there: the "even" kernels (e.g. 2.0, 2.2, 2.4, and 2.6) were the stable series while the "odd" kernels (e.g. 2.1, 2.3, 2.5) were the development series; the development model was absolutely mental, and development moved at a glacial pace compared to today's breakneck speed.
The pre-git days were less than ideal. The BitKeeper years were... interesting, politically and philosophically speaking.
Also, KDE4 was a dark, dark period.
I left Gnome 3 for other WMs (eventually settled on cinnamon), but every once in a while I decided to give Gnome 3 a try, just to be disappointed again. I felt like those people in abusive romantic relationships that keep coming back and divorcing over and over again. "Oh, Gnome has really changed now, he won't beat me again this time!".
Then we'll make Wayland 2.
It was partially made for car infotainment systems, which are known to be weak hardware.
Nvidia's drivers do something weird on Wayland when my laptop is connected to HDMI, probably something funky with the iGPU<->dGPU communication. Everything works, but at Nvidia's whim an update reduces the maximum FPS I can achieve over HDMI to about 30-45fps. Jittery and painful, even on a monitor that supposedly supports VRR.
That's not really Wayland's fault of course, but in the same way Linux is broken because Photoshop doesn't work on it, Wayland is broken for many users because their desktop is weird on it.
Depending on your DE, you have a choice not to use Wayland. Like, yes, if you use GNOME then you don't get choices but that's their whole ethos, and unfortunately I've heard about KDE dropping X, but there are other options and as I type this comment in i3 I can assure you Xorg still works.
This is nothing like wayland where the APIs to do what you want may not even exist, or may not exist in some random compositor a user is using.
I will try to dive into how the Wayland API actually works, because I'd really like to know what not to do, given that wrappers used 'wrong' can crash.
I have an old Thinkpad. Firefox on X is slow and scrolls poorly. On wayland, the scrolling is remarkably smooth for 10 y/o hardware, and the addition of touchpad gestures is very nice. Yes, there's more configuration overhead for each compositor, but I'm now accepting this trade.
Could you expand on why you describe Hyprland and XFCE4 as "a cursed combination"? Might provide some insight as to why the official XFCE project decided to create their own compositor.
If an application is written for Wayland, is there a way to send its windows to (e.g.) my Mac, like I can with X11 to XQuartz?
Rather than going fully protocol-based (like Waypipe), they used Weston to render to RDP. Using RDP's "remote apps" functionality, practically any platform can render the windows. I think it's a pretty clever solution, one perhaps even better than plain X11 forwarding (which breaks all kinds of things like GPU acceleration).
I don't know if anyone has messed with this enough to get it to work like plain old RemoteApps for macOS/BSD/Windows/Linux, but the technology itself is clearly ready for it.
Currently I can:
$ ssh -X somehost xeyes
and get a window on macOS.

X's network transparency was made at a time when we drew two lines as a UI, and for that it works very well. But today even your Todo app has a bunch of icons that are just bitmaps to X, and we can transfer those via much better means (which probably should not be baked into a display protocol).
I think Wayland made the right decision here. Just be a display protocol that knows about buffers, and that's it.
User space can then transport buffers in any way it sees fit.
Also, another interesting note: the original X network transparency's modern analogue might very well be the web, if you squint. And quite a few programs just expose a localhost port to avoid the "native GUI" issue wholesale.
I used to run diskless SparcStation 5s with remote X on a 10BASE2 network, with the binaries running on Sun E3500s: it worked well enough for non-video web sites in Netscape 3.x. Also Matlab, Octave, Emacs, Vi(m), etc.
I've used it to run backup application GUIs when I was still on DSL (<25Mbps) displaying at home many years ago, and it worked well then. I now have >100Mbps fibre at home, so doubt that bandwidth (or even latency) is worse.
You surely agree that not having a good compression here is less than ideal.
And it raises the question of whether this is really the job of the display server, or whether it's packing in unrelated functionality that could be better handled by other software.
And we haven't even gotten to sound - should a display server now suddenly also handle sound?
The icons: you allocate memory on the server for them and do not transfer the icon every time. I think X11 works like that (server-side Pixmaps), but I'm not sure.
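The idea can be illustrated with a toy model in plain Python (NOT the X11 API; all names are invented): the client uploads an icon's pixels once, gets back a small handle, and every later draw sends only the handle over the wire.

```python
# Toy model of server-side pixmap caching: upload once, reference cheaply.

class DisplayServer:
    def __init__(self):
        self._pixmaps = {}
        self._next_id = 1
        self.bytes_received = 0   # crude stand-in for wire traffic

    def create_pixmap(self, pixels):
        # The full bitmap crosses the wire exactly once.
        self.bytes_received += len(pixels)
        pid = self._next_id
        self._next_id += 1
        self._pixmaps[pid] = pixels
        return pid

    def draw_pixmap(self, pid):
        # Afterwards only a tiny handle crosses the wire per draw.
        self.bytes_received += 4          # e.g. a 4-byte id
        assert pid in self._pixmaps       # server already holds the pixels

server = DisplayServer()
icon = bytes(32 * 32 * 4)                # a 4 KiB RGBA icon
pid = server.create_pixmap(icon)         # one upload: 4096 bytes
for _ in range(100):
    server.draw_pixmap(pid)              # 100 redraws: only 400 more bytes
print(server.bytes_received)             # -> 4496
```

Whether caching like this beats simply compressing a modern buffer stream is exactly the trade-off being debated in this thread.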
I know of a GUI lib that you can still compile with freetype disabled. Not everyone needs the GUIs you're talking about. Everyone is using cars, so let's ban bikes... it does not need to be like that.
I find X11 RPC useful; a simple UI is fine. You can write programs that will run on any computer, slow or not, remotely. The web is not that simple; it is a different way of programming, and it is not transparent. The web is useful for commerce, but not for controlling machines in factories or pilot cabins. IMO.
And sure, simple UIs have their place - but they will also work just as well with a proper transport protocol, hell, they would compress even better. So just waypipe that simple UI as you see it fit.
So you say compression of said icons, etc, is better than caching them on the server? No.. You've mentioned web, but no one does that on the web.
This is extremely misleading. Web browsers (and games) are the worst case for X11's network transparency. The overwhelming majority of applications belong in the same category as xeyes.
> the original X network transparency's modern analogue might very well be the web
It's Arcan, which solved this problem without sacrificing network transparency at the altar.
Well, I'm not sure you are using that many xmotif apps. Most of the GUI programs are gtk/qt (and let's be honest, electron) - and they are mostly bitmaps to X's eyes (pun not intended). They don't use draw commands with such a small granularity that network transparency would benefit.
And Arcan is so many things at once I'm not convinced it is a good alternative to Wayland. It has good ideas, but they sort of require the whole package. Meanwhile Wayland is just a minimal API over the Linux kernel API for managing display buffers, that can be extended with additional protocols.
That's a bit of a double edged sword, it's the exact reason why I don't think Wayland is a good alternative to X. Wayland's minimalist attitude towards responsibility is good for one thing, and that's implementing new compositors from scratch. The bare bones compositor will be a long way from usable, but it will be technically complete. The question is, does it matter to me that there are 30 different compositors? Each in various states with their own eclectic featureset with no guarantees given, a la USB-C? Not really. In effect, it did present me with the conundrum of choosing between a half-baked compositor (dwl) or a desktop experience I have literally zero interest in (Gnome/KDE) which left me with a sour taste in the mouth.
Moving beyond that, a real problem with Wayland's architectural minimalism is that a display server does more than simply abstract a single API. It provides a lot of rather complex features, from accessibility to input handling. Not every compositor is capable of handling that kind of complexity, especially if it's to work well. What we will find going forward are two possible futures:
1. The future resembles the present status quo of fragmentation, made worse by time. The compositor archipelago is here to stay, and deploying software in this environment has become excessively annoying. There are 10 different competing libraries for any given category of basic infrastructure, and each library in each category has their own ideas and idiosyncrasies that have to be worked around. Most of them are buggy and incomplete.
2. Smaller compositors effectively die off, and we are left with a single monolithic compositor; goodbye to the modular window manager / desktop environment split. This monster implements the de facto Extended Wayland protocol, where all of the different parts don't quite match each other very well, because that's the cost of not having vertical integration of complicated components. There's no cohesive rhyme or reason to the design of anything. Thus, the exercise in minimalism has wrought an uglier and more complicated beast than Xorg itself.
I think it's clear Wayland is going to continue to sweep the Linux desktop given its massive corporate backing. But I'm not really compelled to run a bare Wayland compositor under any circumstances, because my Arcan server already works perfectly fine as a Wayland compositor. It works as any number of Wayland compositors running whatever extensions they implement. In effect, no matter which future we end up in, by using Arcan I'm in a much better position than someone running a normal compositor. This fact alone makes me favor Arcan, even before we get into its unique merits.
`$ waypipe ssh somehost foot`
You need waypipe installed on both machines. For the Mac, I guess you'll need something like cocoa-way (https://github.com/J-x-Z/cocoa-way). Some local Wayland compositor, anyway.
I'm not sure how much farther along they are than that post though.
Absolutely seriously. To me, a big part of what makes Xfce is xfwm4's behavior. Even though most of the other Xfce components will run decently well on wlroots-based compositors, I don't really have an interest in using them, as that's not "Xfce" to me.
But it's not going to be perfect, though, as some things that we take for granted on x11 still just do not have Wayland protocols to enable them. This will take a long time. Alex's blog post says a developer preview around the middle of this year, and I expect I can deliver on that, and maybe (maybe!) even a stable release by next year (maybe!), but full feature parity will take years.
• Smithay has great documentation.
Not only are they considering it, but they're expressly calling it out. I'm convinced that the publication of the Agile Manifesto was an exercise in Cunningham's Law, and to that end the XFCE team has produced something great by doing the opposite.

GNOME was cool during the Sawfish days.
Now the last 3 times I tried Wayland everything ended up a blurry mess and some windows just ended up the wrong size, so.
I suppose I'll just keep holding out hope.
- speed
- memory consumption
- simplicity to use
- customisability
- if it's X11 or Wayland
If everything above the last remains the same in the Wayland version, I stay, else there is LXDM.
I hope XFCE preserves this, it is a killer feature in today's world.
I wonder how long it'll take them to write a compositor from scratch.
I spent a month or so in 2024 attempting to refactor xfwm4 so it could serve dual purpose as both an X11 window manager and Wayland compositor, and ended up abandoning the idea. It was just getting ugly and hard to read and understand, and I wasn't confident that I could continue to make changes without introducing bugs (crashers, even). We want X11 users to be unaffected by this, and introducing bugs in xfwm4 wouldn't achieve that goal.
Note that we don't have to rewrite all of Xfce: xfce4-session, xfce4-panel, xfdesktop, etc. will all run on Wayland just fine (there are some rough edges that need to be ironed out for full Wayland support, but they're already fairly usable on wlroots-based compositors). This is just (heh, "just") building a compositor and porting xfwm4's WM behavior and UI elements over to it. Not a small task, to be sure, but much smaller than "rewriting all of Xfce".
I've been using popos for a while, but xfce will always have a place in my heart.
If it had tiling support I'd probably use it still. Being so lightweight is a massive boon.
What would you have them replace it with?
They both have kinda similar roots in that XFCE originally used XForms which was an open source replacement of the SGI Forms library while FLTK also started as a somewhat compatible/inspired opensource replacement of SGI Forms in C++.
If they ever move away from GTK (due to the GNOME shenanigans of GNOME-izing GTK), I wish Enlightenment and Xfce would become a single thing. But that's if I could ask the Tux genie for three wishes.
But frankly I think forking and maintaining GTK3 is preferable to moving to EFL or Qt. GIMP is still on GTK3. MATE is still on GTK3. Inkscape is still on GTK3 (but GTK4 work is in progress). Evolution is still on GTK3.
I think GTK3 will be around for a long time.
I'm also not a big fan of Wayland, to be honest. But that's the way the winds are blowing. X11 has its problems, but even if they are fixable, no one seems to want to work on Xorg anymore. I'm certainly not prepared to maintain it and push it forward. Are you?
Depending on Xorg today is more or less ok, but I do expect distros will stop shipping it eventually.
Are you also willing to maintain it?
Do note that I've never tried to crowdfund a programmer, but that's something that I have to believe is possible to do.
[0] <https://github.com/X11Libre/xserver?tab=readme-ov-file#i-wan...>
[0] https://www.phoronix.com/news/X.Org-Server-Lots-Of-Reverts
Maybe XLibre will be a damn trainwreck, or maybe it'll be to xorg what xorg was to XFree86. I intend to find out through the testimony of users of XLibre.
[0] ...or maybe just a very vocal subset of the folks at FDO...
People like to frame things like the waylands are some sort of default and nothing is being lost and no one is being excluded.
The cognitive dissonance I perceive goes like "No one is being paid to work on X11, therefore I should volunteer to work on Wayland."
https://www.theregister.com/2025/06/12/ubuntu_2510_to_drop_x...
https://itsfoss.com/news/fedora-43-wayland-only/
Kde Plasma 6.8 dropping X11:
https://itsfoss.com/news/kde-plasma-to-drop-x11-support/
Suse dropping X11:
https://documentation.suse.com/releasenotes/sles/html/releas...
The beginning of the end, or are there plain and simple alternative Microsoft Rust compilers? Is Microsoft Rust syntax at least as simple as C's?
Or the right way will be to use an alternative wayland compositor with the rest of xfce?
That's a fair criticism sometimes, but, frankly, if you want things the way you want them, learn to code and dig in. Otherwise it's not really fair of you to complain about stuff that people have built for you for free, in their spare time.
In this particular case, it's not fully a "new and shiny, must play!" situation. I personally am not even a big fan of Wayland, and I'm generally highly critical of it. But Xorg is more or less unmaintained, and frankly, if we don't have a Wayland compositor, we'll become obsolete eventually. That's just the way the wind is blowing.
I trust you understand that some readers may not find (to paraphrase) "I don't like it either but it is what it is." a compelling reason to fix something that is not broken.
I also do not agree with the "Wayland is inevitable" sentiment. There are non-systemd distros; there will also be non-Wayland distros. The idea that only those things survive which are pushed into the ecosystem by the corporate bullies is wrong; otherwise Linux would not exist.
The Linux desktop was essentially fine already two decades ago, and instead of the needed refinements, bug fixing, and polish, we get one random change in technology after another, so nothing ever really improves. Instead we incrementally lose applications that do not keep up, break workflows, sometimes even regress in technology (network transparency), and discourage people from investing in applications because the base is not stable. My hope was that Xfce4 was different, but apparently this was unfounded.
Re-read your original post. You are absolutely complaining about what we do in our spare time.
> If the blog post said "someone does this because he likes to spend his own time on it", I would not complain.
I mean, that's part of it. I wouldn't do it if I wasn't interested in doing it. I have my own long list of Wayland criticisms, but I think it's interesting.
> I also do not agree with the Wayland is inevitable sentiment.
I think that's where we'll be at an impasse.
There are non-systemd distros because there are viable alternatives. Xorg (the server implementation, I mean, not X11 the protocol/system) is dead. I don't like saying that. I've invested a lot of time into X11 and understanding how it works, and how Xorg works. But no one wants to maintain it. There is the XLibre fork, and I wish them well, and do want them to succeed, but sustaining a fork is hard, and only time will tell if that works out.
But I don't think X11 has a future, unfortunately. And that really does make me sad. You're free to disagree with that, but... well, so what.
> The Linux desktop was essentially fine already two decades ago and instead of the needed refinements
That's a view through rose-tinted glasses if I ever saw one.
> we get random changes in technology after the other
Jamie Zawinski called this the "Cascade of Attention-Deficit Teenagers", and he's right. I do think some of these changes are an earnest and honest attempt to make things better, but yes, people just want to work on what interests them, and what makes them feel good and accomplished.
When we work for a corporation we don't really get to do that, but when it's unpaid, spare-time volunteer work, we have the freedom to do whatever makes us happy, even if it makes other people mad or disappointed or annoyed, or isn't the most "productive" use of our time (whatever that means).
::shrug::
Even though you're trying to get Debian installed on the thing, I'd also refer to the Arch wiki for information on how to get things working right:
https://wiki.archlinux.org/title/Chrome_OS_devices
Arch being what it is, it has attracted a host of knowledgeable users who have collected their information about how to get things working on different systems in an organised and usually comprehensive way on that wiki. Much, if not most, of what is written there also applies to getting non-Arch distributions running on those systems.