It's strange that Wayland didn't do it this way from the start given its philosophy of delegating most things to the clients. All you really need for arbitrary scaling is to tell apps "you're rendering to an MxN pixel buffer, and as a hint, the scaling factor of the output you'll be composited to is X.Y". After that the client can handle events in real coordinates and scale in the best way possible for its particular context. For a browser, PDF viewer or image processing app that can render at arbitrary resolutions, not being able to do that is very frustrating if you want good quality and performance. Hopefully we'll finally be getting that in Wayland now.
Originally OS X defaulted to drawing at 2x scale without any scaling down because the hardware was designed to have the right number of pixels for 2x scale. The earliest retina MacBook Pro in 2012 for example was 2x in both width and height of the earlier non-retina MacBook Pro.
Eventually I guess the cost of the hardware made this too hard. I mean for example how many different SKUs are there for 27-inch 5K LCD panels versus 27-inch 4K ones?
But before Apple committed to integer scaling factors and then scaling down, it experimented with more traditional approaches. You can see this in earlier OS X releases such as Tiger or Leopard. The thing is, it probably took too much effort for even Apple itself to implement in its first-party apps so Apple knew there would be low adoption among third party apps. Take a look at this HiDPI rendering example in Leopard: https://cdn.arstechnica.net/wp-content/uploads/archive/revie... It was Apple's own TextEdit app and it was buggy. They did have a nice UI to change the scaling factor to be non-integral: https://superuser.com/a/13675
That's an interesting related discussion. The idea that there is a physically correct 2x scale and that fractional scaling is a tradeoff is not necessarily correct. First because different users will want to place the same monitor at different distances from their eyes, or have different eyesight, or a myriad other differences. So the ideal scaling factor for the same physical device depends on the user and the setup. But more importantly because having integer scaling be sharp and snapped to pixels while fractional scaling is a tradeoff is mostly a software limitation. GUI toolkits can still place all their UI at pixel boundaries even if you give them a target scaling of 1.785. They do need extra logic to do that and most can't. But in a weird twist of destiny the most used app these days is the browser, and its rendering engines are designed to output at arbitrary factors natively yet in most cases can't, because the windowing system forces these extra transforms on them. 3D engines are another example: they can output whatever arbitrary resolution is needed but aren't allowed to. Most games can probably get around that in some kind of fullscreen mode that bypasses the scaling.
I think we've mostly ignored these issues because computers are so fast and monitors have gotten so high resolution that the significant performance penalty (2x easily) and introduced blurriness mostly go unnoticed.
> Take a look at this HiDPI rendering example in Leopard
That's a really cool example, thanks. At one point Ubuntu's Unity had a fake fractional scaling slider that just used integer scaling plus font size changes for the intermediate levels. That mostly works very well from the point of view of the user. Because of the current limitations in Wayland I mostly do that still manually. It works great for single monitor and can work for multiple monitors if the scaling factors work out because the font scaling is universal and not per output.
The standardized protocols are more recent (and of course we heavily argued for them).
Regarding the way the protocol works and something having to be retrofitted, I think you are maybe a bit confused about the way the scale factor and buffer scale work on wl_output and wl_surface?
But in any case, yes, I think the happy camper days are coming for you! I also find the macOS approach atrocious, so I appreciate the sentiment.
When you say you supported this for quite some years was there a custom protocol in KWin to allow clients to render directly to the fractionally scaled resolution? ~4 years ago I was frustrated by this when I benchmarked a 2x slowdown from RAW file to the same number of pixels on screen when using fractional scaling and at least in sway there wasn't a way to fix it or much appetite to implement it. It's great to see it is mostly in place now and just needs to be enabled by all the stack.
> When you say you supported this for quite some years was there a custom protocol in KWin to allow clients to render directly to the fractionally scaled resolution?
Qt had a bunch of different mechanisms for how you could tell it to use a fractional scale factor, from setting an env var to doing it inside a "platform plugin" each Qt process loads at runtime (Plasma provides one), etc. We also had a custom-protocol-based mechanism (zwp_scaler_dev iirc) that basically had a set_scale with a 'fixed' instead of an 'int'. Ultimately this was all pretty Qt-specific though in practice. To get adoption outside of just our stack a standard was of course needed, I guess what we can claim though is that we were always pretty firm we wanted proper fractional and to put in the work.
All major compositors support the fractional scaling extension these days, which allows pixel-perfect rendering afaik, and I believe Qt6 and GTK4 also support it.
https://wayland.app/protocols/fractional-scale-v1#compositor...
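For the curious, a minimal sketch of what a client sees from that extension, assuming the header that wayland-scanner generates from fractional-scale-v1.xml (registry and surface setup omitted):

```c
#include <stdio.h>
#include <stdint.h>
#include "fractional-scale-v1-client-protocol.h" /* generated by wayland-scanner */

/* The compositor sends the preferred scale as a fraction with denominator 120. */
static void handle_preferred_scale(void *data,
                                   struct wp_fractional_scale_v1 *fractional_scale,
                                   uint32_t scale_120)
{
    double scale = scale_120 / 120.0;
    printf("preferred scale: %.5f\n", scale);
    /* The client then sizes its buffer to round(logical_size * scale) and calls
     * wp_viewport_set_destination(viewport, logical_w, logical_h) so the
     * compositor presents it 1:1 without a second resample. */
}

static const struct wp_fractional_scale_v1_listener fractional_scale_listener = {
    .preferred_scale = handle_preferred_scale,
};
```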
As it happens, VirtualBox does have its own scaling setting, but it's pretty bad, in my opinion. But I'm kind of forced to use it because Qt's own scaling just doesn't work in this case.
I'm generally a strong wayland proponent and believe it's a big step forward over X in many ways, but some decisions just make me scratch my head.
Basically scale factor neatly encapsulates things like viewing distance, user eyesight, dexterity, and preference, different input device accuracy, and many others. It is easier to have human say how big/small they want things to be than have gazillion flags for individual attributes and then some complicated heuristics to deduce the scale.
> It is easier to have human say how big/small they want things to be than have gazillion flags for individual attributes and then some complicated heuristics to deduce the scale.
I don't understand why I need a gazillion flags, I just set the desired DPI (instead of scale). An absolute metric is almost always better than a relative one, especially if the reference point is device dependent.
Not all displays accurately report their DPI (or even can, such as projectors). Not all users, myself included, know their monitor's DPI. Finally, the scaling algorithm will ultimately use a scale factor, so at a protocol level that might as well be what is passed.
There is of course nothing stopping a display management widget/settings page/application from asking for DPI and then converting it to a scale factor, I just don't know of any that exist.
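As a rough sketch of what such a settings page could do under the hood, assuming the common convention that 96 DPI corresponds to scale 1.0 (the numbers are only illustrative):

```c
#include <stdio.h>

/* Convert a requested DPI into the scale factor the toolkit/compositor consumes. */
double scale_from_dpi(double desired_dpi) {
    return desired_dpi / 96.0;               /* e.g. 144 DPI -> 1.5 */
}

/* The display's real density, from the pixel and millimetre sizes RandR/EDID report. */
double physical_dpi(int px_width, double mm_width) {
    return px_width * 25.4 / mm_width;       /* 3840 px over ~600 mm ~ 163 DPI */
}

int main(void) {
    printf("scale for 144 DPI: %.2f\n", scale_from_dpi(144));
    printf("27\" 4K panel density: %.0f DPI\n", physical_dpi(3840, 600.0));
    return 0;
}
```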
I can guarantee that it is surprising to non-technical users (and a source of frustration for technical users) that the scale factor and UI element size can be completely different on two of the same laptops (just a different display resolution, which is quite common). And it's also unpredictable which one will have the larger UI elements. Generally I believe UI should behave as predictably as possible.
This may take some getting used to if you're familiar with DPI and already know the value you like, but for non-technical users it's more approachable. Not everyone knows DPI or how many dots they want to their inches.
That the 145% is 1.45 under the hood is really an implementation detail.
I challenge you, tell a non-technical user to set two monitors (e.g. laptop and external) to display text/windows at the same size. I guarantee it will take them a significant amount of time moving those relative sliders around. If we had an absolute metric it would be trivial. Similarly, people who regularly plug into different monitors could simply set a desired DPI and everywhere they plug in things would look the same, instead of having to open the scale menu every time.
I will also say though that in the most common cases where people request mixed scale factor support from us (laptop vs. docked screen, screen vs. TV) there are also other form factor differences, such as viewing distance, that mean folks don't want to match DPI, and "I want things bigger/smaller there" is difficult to respond to with "calculate what that means to you in terms of DPI".
For the case "I have two 27" monitors side-by-side and only one of them is 4K and I want things to be the same size on them" I feel like the UI offering a "Match scale" action/suggestion and then still offering a single scale slider when it sees that scenario might be a nice approach.
I actually agree (even though I did not express that in my original post) that DPI is probably not a good "user visible" metric. However, I find a scaling factor relative to some arbitrary value inferior in every way. Maybe it comes from the fact that we did not have proper fractional scaling support earlier, but we are now in the nonsensical situation that for the same laptop with the same display size (but different resolutions, e.g. one HiDPI, one normal), you have very different UI element sizes, simply because the default is now to scale 100% for normal displays and 200% for HiDPI. Therefore the scale doesn't really mean anything and people just end up adjusting again and again; surely that's even more confusing for non-technical users.
> I will also say though that in the most common cases where people request mixed scale factor support from us (laptop vs. docked screen, screen vs. TV) there are also other form factor differences such as viewing distance that doesn't make folks want to match DPI, and "I want things bigger/smaller there" is difficult to respond to with "calculate what that means to you in terms of DPI".
From my anecdotal evidence, most (maybe even all) people using a laptop for work have the laptop next to the monitor and actually adjust scaling so that elements are a similar size. Or, at the other extreme, they simply take the defaults and complain that one monitor makes all their text super small.
But even the people who want things bigger or smaller depending on circumstances, I would argue are better served if the scaling factor is relative to some absolute reference, not the size of the pixels on the particular monitor.
> For the case "I have two 27" monitors side-by-side and only one of them is 4K and I want things to be the same size on them" I feel like the UI offering a "Match scale" action/suggestion and then still offering a single scale slider when it sees that scenario might be a nice approach.
Considering that we now have proper fractional scaling, we should just make the scale relative to something like 96 DPI, and then have a slider to adjust. This would serve all use cases. We should not really let our designs be governed by choices we made because we could not do proper scaling previously.
Tell me, do you not ever use Macs?
This is not even a solved problem on macOS: there is no solution because the problem doesn't happen in the first place. The OS knows the size and the capabilities of the devices and you tell it with a slider what size of text you find comfortable. The end.
It works out the resolutions and the scaling factors. If the user needs to set that individually per device, if they can even see it, then the UI has failed: it's exposing unnecessary implementation details to users who do not need to know and should not have to care.
_Every_ user of macOS can solve this challenge because the problem is never visible. It's a question of stupidly simple arithmetic that I could do with a pocket calculator in less than a minute, so it should just happen and never show up to the user.
I speak very bad Norwegian. I use metric for everything. But once I ordered a pizza late at night in Bergen after a few beers, and they asked me how big I wanted in centimetres and it broke my decision-making process badly. I can handle Norwegian numbers and I can handle cm but not pizzas in cm.
I ended up with a vast pizza that was a ridiculous size for one, but what the hell, I was very hungry. I just left the crusts.
Because certain ratios work a lot better than others, and calculating the exact DPI to get those benefits is a lot harder than estimating the scaling factor you want.
Also the scaling factor calculation is more reliable.
Ahhhhhhhh… so nice.
This is horrifying! It implies that, for some scaling factors, the lines of text of your terminal will be of different height.
Not that the alternative (pretend that characters can be placed at arbitrary sub-pixel positions) is any less horrifying. This would make all the lines in your terminal of the same height, alright, but then the same character at different lines would look different.
The bitter truth is that fractional scaling is impossible. You cannot simply scale images without blurring them. Think about an alternating pattern of white and black rows of pixels. If you try to scale it to a non-integer factor the result will be either blurry or aliased.
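A toy demo of that claim, assuming plain linear resampling (the exact grey values depend on the filter, but no filter can keep every output row pure black or white at a non-integer factor):

```c
#include <stdio.h>

int main(void) {
    /* 8 source rows alternating black (0) and white (255), scaled to 12 rows (1.5x). */
    int src[8];
    for (int i = 0; i < 8; i++) src[i] = (i % 2) ? 255 : 0;

    for (int y = 0; y < 12; y++) {
        double pos = y / 1.5;                          /* source coordinate */
        int i0 = (int)pos;
        int i1 = (i0 + 1 < 8) ? i0 + 1 : i0;
        double f = pos - i0;
        double v = src[i0] * (1.0 - f) + src[i1] * f;  /* linear interpolation */
        printf("output row %2d = %5.1f\n", y, v);      /* grey rows (~170) appear */
    }
    return 0;
}
```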
The good news is that fractional scaling is unnecessary. You can just use fonts of any size you want. Moreover, nowadays pixels are so small that you can simply use large bitmap fonts and they'll look sharp, clean and beautiful.
That's overly prescriptive in terms of what users want. In my experience users who are used to macOS don't mind slightly blurred text. And users who are traditionalists and perhaps Windows users prefer crisper text at the expense of some height mismatches. It's all very subjective.
It always makes me laugh when Apple users say "oh it's because of the great text rendering!"
The last time text rendering was any good on MacOS was on MacOS 9, since then it's been a blurry mess.
That said, googling for "MacOS blurry text" yields pages and pages of people complaining, so I am not sure it is that subjective; it's simply that some people don't even know how good text can look, even on a large 1080p monitor.
"Great text rendering" is also highly subjective mind you. To me greatness means strong adherence to the type face's original shape. It doesn't mean crispness.
(And when it's an integer multiple, you don't need scaling at all. You just need a font of that exact size.)
The way the terminal handles the (literal) edge case you mention is no different from any other time its window size is not a multiple of the line height: It shows empty rows of pixels at the top or bottom.
Fonts are only an "exact size" if they're bitmap-based (and when you scale bitmap fonts you are indeed in for sampling difficulties). More typical is to have a font storing vectors and rasterizing glyphs to the needed size at runtime.
Actually, you can’t have exactly 1.785: the scale is a fraction with denominator 120 <https://wayland.app/protocols/fractional-scale-v1#wp_fractio...>. So you’ll have to settle for 1.783̅ or 1.7916̅.
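A quick check of that granularity (the protocol carries scale * 120 as an integer, so only multiples of 1/120 are representable):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double wanted = 1.785;
    double below = floor(wanted * 120.0) / 120.0;  /* 214/120 = 1.78333... */
    double above = ceil(wanted * 120.0)  / 120.0;  /* 215/120 = 1.79166... */
    printf("nearest representable scales: %.6f and %.6f\n", below, above);
    return 0;
}
```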
But it's HN, so I appreciate someone linking the actual business!
> Because what is probably 90% of wayland install base only supports communicating integer scales to clients.
As someone shipping a couple of million cars per year running Wayland, the install base is a lot bigger than you think it is :)
I recall the issue is that GTK bakes in, deep down, the assumption that pixel scaling is done in integers, while in Qt scale factors are floats.
The reason Apple started with 2x scaling is because this turned out to not be true. Free-scaling UIs were tried for years before that and never once got to acceptable quality. Not if you want to have image assets or animations involved, or if you can't fix other people's coordinate rounding bugs.
Other platforms have much lower standards for good-looking UIs, as you can tell from eg their much worse text rendering and having all of it designed by random European programmers instead of designers.
The web is a free-scaling UI, which scales "responsively" in a seamless way from feature phones with tiny pixelated displays to huge TV-sized ultra high-resolution screens. It's fine.
They did make another attempt at it for apps with Dynamic Type though.
Thinking that two finger zooming style scaling is the goal is probably the result of misguided design-centric thinking instead of user-centric thinking.
More like “let the device driver figure it out” - Apple is after all a hardware company first.
A deeply technical one, yes, but that's not what drives their decision making.
Similarly browser developers care deeply if they break a website with the default settings, but they care less if cmd-+ breaks it because that's optional. If it became a mandatory accessibility feature somehow, now they have a problem.
With 2x scaling there only needs to be points and pixels which are both integers. Developers' existing code dealing with pixels can usually be reinterpreted to mean points, with only small changes needed to convert to and from pixels.
With the 2x-and-scale-down approach the scaling is mostly done by the OS and using integer scaling makes this maximally transparent. The devs usually only need to supply higher resolution artwork for icons etc. This means developers only need to support 1x and 2x, not a continuum between 1.0 and 3.0.
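In code terms it is roughly this (names made up for illustration; the point is that the conversion is a trivial integer multiply when the backing scale is fixed at 2):

```c
/* Logical "points" vs device "pixels" under a fixed integer backing scale. */
typedef struct { int x, y; } Point;  /* what existing application code keeps using */
typedef struct { int x, y; } Pixel;  /* what actually lands on the display         */

static const int backing_scale = 2;  /* classic Retina: always an integer */

static Pixel point_to_pixel(Point p) {
    return (Pixel){ p.x * backing_scale, p.y * backing_scale };
}

static Point pixel_to_point(Pixel p) {
    return (Point){ p.x / backing_scale, p.y / backing_scale };
}
```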
Then when you're on Wayland using fractional scaling, XWayland apps look very blurry all the time while Wayland-native apps look great.
If I want to use multiple monitors with different dpis, then I update it on every switch via echoing the above to `xrdb -merge -`, so newly launched apps inherit the dpi of the monitor they were started on.
Dirty solution, but results are pretty nice and without any blurriness.
Linux does not do that.
> It's strange that Wayland didn't do it this way from the start
It did (initially for integer scale factors, later also for fractional ones, though some Wayland-based environments did it earlier downstream).
It did (or at least Wayland compositors did).
> It did
It didn't.
I complained about this a few years ago on HN [0], and produced some screenshots [1] demonstrating the scaling artifacts resulting from fractional scaling (1.25).
This was before fractional scaling existed in the Wayland protocol, so I assume that if I try it again today with updated software I won't observe the issue (though I haven't tried yet).
In some of my posts from [0] I explain why it might not matter that much to most people, but essentially, modern font rendering already blurs text [2], so further blurring isn't that noticable.
[0] https://news.ycombinator.com/item?id=32021261
The real answer is just that it's hard to bolt this on later; the UI toolkit needs to support it from the start.
I know people mention 1 pixel lines (perfectly horizontal or vertical). Then they multiply by 1.25 or whatever and go: oh look, 0.25 pixel is a lie, therefore fractional scaling is fake (the sway documentation mentions this to this day). This doesn't seem to hold in practice outside of this very niche mental exercise.

At sufficiently high resolution, which is the case for the displays we are talking about, do you even want 1 pixel lines? They would be barely visible. I have this problem now on Linux. Further, if the line is draggable, the click zone becomes too small as well. You probably want something of some physical dimension, which will take multiple pixels anyway. At that point you probably want some antialiasing that you won't be able to see anyway.

Further, single pixel lines don't have to be exactly the color the program prescribed anyway. Most of the perfectly horizontal and vertical lines on my screen are all grey-ish. Having some AA artifacts will change their color slightly, but I don't think it will have a material impact. If this is the case, then super resolution should work pretty well.
Then really what you want is something as follows:
1. Super-resolution scaling for most "desktop" applications (see the sketch after this list).
2. Give the native resolution to some full screen applications (games, video playback), and possibly give the native resolution of a rectangle on screen to applications like video playback. This avoids rendering at a higher resolution then downsampling which can introduce information loss for these applications.
3. Now do this on a per-application basis, instead of per-session basis. No Linux DE implements this. KDE implements per-session which is not flexible enough. You have to do it for each application on launch.
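A minimal sketch of the buffer sizing behind point 1, with hypothetical numbers; the idea is just: render at the next integer scale, then let the compositor downsample to the fractional target:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double target_scale = 1.25;                 /* what the user asked for          */
    int render_scale = (int)ceil(target_scale); /* render at 2x "super resolution"  */

    int logical_w = 640, logical_h = 480;       /* window size in logical units     */
    printf("render buffer:  %dx%d\n", logical_w * render_scale, logical_h * render_scale);
    printf("on-screen size: %dx%d\n",
           (int)round(logical_w * target_scale), (int)round(logical_h * target_scale));
    return 0;
}
```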
It removes jaggies by using lots of little blurs (averaging)
I switched to high dpi displays under Linux back in the late 1990’s. It worked great, even with old toolkits like xaw and motif, and certainly with gtk/gnome/kde.
This makes perfect sense, since old unix workstations tended to have giant (for the time) frame buffers, and CRTs that were custom-built to match the video card capabilities.
Fractional scaling is strictly worse than the way X11 used to work. It was a dirty hack when Apple shipped it (they had to, because their third party software ecosystem didn’t understand dpi), but cloning the approach is just dumb.
There were multiple problems making it actually look good though, ranging from making things line up properly at fractional sizes (e.g. a "1 point line" becomes blurry at 1.25 scale) to the fact that most applications use bitmap images and not vector graphics for their icons (and this includes the graphic primitives Apple used for the "lickable" buttons throughout the OS).
edit: I actually have an iMac G4 here so I took some screenshots since I couldn't find any online. Here is MacOS X 10.4 natively rendering windows at fractional sizes: https://kalleboo.com/linked/os_x_fractional_scaling/
IIRC later versions of OS X than this actually had vector graphics for buttons/window controls
Nobody wants to deal with vectors for everything. They're not performant enough (harder to GPU accelerate) and you couldn't do the skeuomorphic UIs of the time with them. They have gotten more popular since, thanks to flat UIs and other platforms with free scaling.
QuickDraw in Carbon was included to allow for porting MacOS 9 apps, was always discouraged, and is long gone today (it was never supported in 64-bit).
Which pretty much means that it is using the same code paths and drivers that get used in Wayland.
At the moment only Windows handles that use case perfectly, not even macOS. Wayland comes second if the optional fractional scaling is implemented by the toolkit and the compositor. I am skeptical of the Linux desktop ecosystem to do correct thing there though. Both server-side decorations and fractional scaling being optional (i.e. requires runtime opt-in from compositor and the toolkit) are missteps for a desktop protocol. Both missing features are directly attributable to GNOME and their chokehold of GTK and other core libraries.
I have a mixed DPI setup and Windows falls flat (on latest Win 11): the jank when you move an application from one monitor to another as it tells the application to redraw is horrible, and even then it sometimes fails and I end up with a cut-off, oversized application, or the app crashes.
Whereas on GNOME Wayland I can resize an application to cover all my monitors and it 'just works' in making it the same physical size on all of them, even when one monitor is 4K and the others 1440p. There's no jank, no redraw. Yes, there's sometimes artifacting from it downscaling as the app targets the highest DPI and gets downsized by the compositor, but that's okay to me.
Every GUI application on Windows runs an infinite event loop. In that loop you handle messages like [WM_INPUT](https://learn.microsoft.com/en-us/windows/win32/inputdev/wm-...). With Windows 8, Microsoft added a new message type: [WM_DPICHANGED](https://learn.microsoft.com/en-us/windows/win32/hidpi/wm-dpi...). To not break the existing applications with an unknown message, Windows requires the applications to opt-in. The application needs to report its DPI awareness using the function [SetProcessDpiAwareness](https://learn.microsoft.com/en-us/windows/win32/api/shellsca...). The setting of the DPI awareness state can also be done by attaching an XML manifest file to the .exe file.
With the message, Windows provides not only the exact DPI at which to render the window contents for the display but also a suggested window rectangle, for perfect pixel alignment and to prevent weird behavior while switching displays. After receiving the DPI, it is up to the application to draw things at that DPI however it desires. The OS has no direct way to dictate how it is drawn, but it does provide lots of helper libraries and functions for font rendering and for classic Windows UI elements.
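A minimal sketch of the relevant part of the window procedure (standard Win32 C; error handling and the rest of the application omitted):

```c
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_DPICHANGED: {
        UINT new_dpi = LOWORD(wParam);      /* X and Y DPI are identical here       */
        RECT *suggested = (RECT *)lParam;   /* window rect pre-computed by Windows  */

        /* Move/resize to the suggested rectangle so the window keeps its
         * physical size and position on the new monitor. */
        SetWindowPos(hwnd, NULL,
                     suggested->left, suggested->top,
                     suggested->right - suggested->left,
                     suggested->bottom - suggested->top,
                     SWP_NOZORDER | SWP_NOACTIVATE);

        /* Then re-create DPI-dependent resources (fonts, bitmaps) scaled by
         * new_dpi / 96.0 and repaint. */
        (void)new_dpi;
        return 0;
    }
    default:
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }
}
```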
If the application is using a Microsoft-implemented .NET UX library (WinForms, WPF or UWP), Microsoft has already implemented the redrawing functions. You only need to include manifest file into the .exe resources.
After all of this implementation, why does one still get blurry apps? Because those applications don't opt in to handle WM_DPICHANGED. So the only option left for Windows is to let the application draw itself at the default DPI and then stretch its image. Windows will map the input messages to the default-DPI pixel positions.
Microsoft does provide a halfway point between a fully DPI-aware app and an unaware app, if the app uses the old Windows resource files to store the UI in the .exe resources. Since those apps are guaranteed to use Windows standard UI elements, Windows can intercept the drawing functions and at least draw the standard controls at the correct DPI. That's called "system aware". Since it is intercepting the application's way of drawing, it may result in weird UI bugs though.
Windows still breaks in several situations like different size and density monitors, but it's generally good enough.
Recent Gnome on Wayland does about as well as Windows.
And, of course, doing it "wrongly" as per what OS X and Gnome does works a lot better in practice.
I hadn't heard of WSLg, vcxsrv was the best I could do for free.
There is no mechanism for the user to specify a per-screen text DPI in X11.
(Or maybe there secretly is, and i should wait for the author to show us?)
However, there is no common way of handling different custom DPIs / scaling in the core Wayland protocol. Fractional scaling is implemented optionally by the client and the server and both need to opt-in.
And doing so actually using X not OpenGL.
This is one of the major motivations for why the X11 guys decided Wayland was a good idea.
Because having your display server draw your application's output instead of your application drawing the output is a bad idea.
All circular UI elements are haram.
https://archive.org/details/xlibprogrammingm01adri/page/144/...
Xlib Programming Manual and Xlib Reference Manual, Section 6.1.4, pp 144:
>To be more precise, the filling and drawing versions of the rectangle routines don't draw even the same outline if given the same arguments.
>The routine that fills a rectangle draws an outline one pixel shorter in width and height than the routine that just draws the outline, as shown in Figure 6-2. It is easy to adjust the arguments for the rectangle calls so that one draws the outline and another fills a completely different set of interior pixels. Simply add 1 to x and y and subtract 1 from width and height. In the case of arcs, however, this is a much more difficult proposition (probably impossible in a portable fashion).
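In plain Xlib terms, the adjustment the manual describes looks like this (just a sketch of the two calls, nothing exotic):

```c
#include <X11/Xlib.h>

/* Draw an outline and fill its interior without overlap or gaps. */
void outline_and_fill(Display *dpy, Drawable d, GC gc,
                      int x, int y, unsigned int w, unsigned int h)
{
    /* XDrawRectangle touches (w+1) x (h+1) pixels: the outline hangs one
     * pixel below and to the right of the w x h area. */
    XDrawRectangle(dpy, d, gc, x, y, w, h);

    /* XFillRectangle fills exactly w x h pixels, so nudge it to sit inside
     * the outline, as the manual suggests. */
    XFillRectangle(dpy, d, gc, x + 1, y + 1, w - 1, h - 1);
}
```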
https://news.ycombinator.com/item?id=11484148
DonHopkins on April 12, 2016, on: NeWS – Network Extensible Window System:
>There's no way X can do anti-aliasing, without a ground-up redesign. The rendering rules are very strictly defined in terms of which pixels get touched and how.
>There is a deep-down irreconcilable philosophical and mathematical difference between X11's discrete half-open pixel-oriented rendering model, and PostScript's continuous stencil/paint Porter/Duff imaging model.
>X11 graphics round differently when filling and stroking, define strokes in terms of square pixels instead of fills with arbitrary coordinate transformations, and is all about "half open" pixels with gravity to the right and down, not the pixel coverage of geometric region, which is how anti-aliasing is defined.
>X11 is rasterops on wheels. It turned out that not many application developers enjoyed thinking about pixels and coordinates the X11 way, displays don't always have square pixels, the hardware (cough Microvax framebuffer) that supports rasterops efficiently is long obsolete, rendering was precisely defined in a way that didn't allow any wiggle room for hardware optimizations, and developers would rather use higher level stencil/paint and scalable graphics, now that computers are fast enough to support it.
>I tried describing the problem in the Unix-Haters X-Windows Disaster chapter [1]:
>A task as simple as filling and stroking shapes is quite complicated because of X's bizarre pixel-oriented imaging rules. When you fill a 10x10 square with XFillRectangle, it fills the 100 pixels you expect. But you get extra "bonus pixels" when you pass the same arguments to XDrawRectangle, because it actually draws an 11x11 square, hanging out one pixel below and to the right!!! If you find this hard to believe, look it up in the X manual yourself: Volume 1, Section 6.1.4. The manual patronizingly explains how easy it is to add 1 to the x and y position of the filled rectangle, while subtracting 1 from the width and height to compensate, so it fits neatly inside the outline. Then it points out that "in the case of arcs, however, this is a much more difficult proposition (probably impossible in a portable fashion)." This means that portably filling and stroking an arbitrarily scaled arc without overlapping or leaving gaps is an intractable problem when using the X Window System. Think about that. You can't even draw a proper rectangle with a thick outline, since the line width is specified in unscaled pixel units, so if your display has rectangular pixels, the vertical and horizontal lines will have different thicknesses even though you scaled the rectangle corner coordinates to compensate for the aspect ratio.
[1] The X-Windows Disaster: http://www.art.net/~hopkins/Don/unix-haters/x-windows/disast...
Finally (and for a long time now) it's an independent library, no longer tied into the X server and Xr extension, and there are a lot of wrappers for it, browser and GTK and many other frameworks use it, and it has lots of nice bindings to languages, like pycairo.
Jim Gettys, one of Cairo's authors and an original X-Windows architect, also worked on the OLPC project and its Sugar user interface framework (designed for making educational apps for kids), which used Cairo via GTK/PyGTK/PyCairo/Pango/Poppler.
Jim's big cause is that he champions eradicating "Bufferbloat":
https://en.wikipedia.org/wiki/Bufferbloat
https://gettys.wordpress.com/2010/12/03/introducing-the-crim...
I had a great time using it for the Micropolis (open source SimCity) tile rendering engine, which I wrote in C++, then wrapped with David Beazley's SWIG tool as a Python extension, so Python PyGTK apps could pass their existing Cairo rendering context into C++ and it could render at high speed without the Python interpreter in the way, on either windows or bitmaps.
https://en.wikipedia.org/wiki/SWIG
The TileEngine is a C++ python module wrapped with SWIG, that uses the Cairo library and knows how to accept a PyGTK Cairo context as a parameter to draw on directly via the api -- Python just passes pointers back and forth between PyGTK by wrangling and unwrangling wrappers around the Cairo context pointer:
TileEngine: https://github.com/SimHacker/micropolis/tree/master/Micropol...
tileengine.h: https://github.com/SimHacker/micropolis/blob/master/Micropol...
tileengine.cpp: https://github.com/SimHacker/micropolis/blob/master/Micropol...
pycairo.i: https://github.com/SimHacker/micropolis/blob/master/Micropol...
tileengine-swig-python.i: https://github.com/SimHacker/micropolis/blob/master/Micropol...
tileengine.i: https://github.com/SimHacker/micropolis/blob/master/Micropol...
Then you can call the tile engine from Python, and build GTK widgets and apps on top of it like so, and it all runs silky smooth, with pixel perfect tiling and scaling, so you can zoom into the SimCity map, and Python can efficiently draw sprites and overlays on it like Godzilla, tornados, trains, airplanes, helicopters, the cursor, etc:
tiledrawingarea.py: https://github.com/SimHacker/micropolis/blob/master/Micropol...
tilewindow.py: https://github.com/SimHacker/micropolis/blob/master/Micropol...
tiletool.py: https://github.com/SimHacker/micropolis/blob/master/Micropol...
I've written about Cairo on HN before, sharing some email with Jim about it:
https://news.ycombinator.com/item?id=20379336
DonHopkins on July 8, 2019, on: The death watch for the X Window System has probab...:
Cairo wasn't the library behind the X11 drawing API, it was originally the Xr rendering extension, that was an alternative to the original X11 drawing API.
https://en.wikipedia.org/wiki/Cairo_(graphics)
>The name Cairo derives from the original name Xr, interpreted as the Greek letters chi and rho.
You're right, it doesn't actually make sense to put your drawing functions in the display server any more (at least in the case of X11, which doesn't have an extension language to drive the drawing functions -- but it did make sense for NeWS which also used PostScript as an extension language as well as a drawing API).
So Cairo rose above X11 and became its own independent library, so it could be useful to clients and toolkits on any window system or hardware.
https://www.osnews.com/story/3602/xr-x11-cross-device-render...
https://web.archive.org/web/20030805030147/http://xr.xwin.or...
https://keithp.com/~keithp/talks/xarch_ols2004/xarch-ols2004...
Here's some email discussion with Jim Gettys about where Cairo came from:
From: Jim Gettys <jg@laptop.org> Date: Jan 9, 2007, 11:04 PM
The day I thought X was dead was the day I installed CDE on my Alpha.
It was years later I realized the young turks were ignoring the disaster perpetrated by the UNIX vendors in the name of "standardization"; since then, Keith Packard and I have tried to pay for our design mistakes in X by things like the new font model, X Render extension, Composite, and Cairo, while putting stakes in the heart of disasters like XIE, LBX, PEX, the old X core font model, and similar design by committee mistakes (though the broken core 2D graphics and font stuff must be considered "original sin" committed by people who didn't know any better at the time).
So we've mostly succeeded at dragging the old whale off the beach and getting it to live again.
From: Don Hopkins <dhopkins@donhopkins.com> Date: Wed, Jan 17, 2007, 10:50 PM
Cairo looks wonderful! I'm looking forward to using it from Python, which should be lots of fun.
A lot of that old X11 stuff was thrown in by big companies to shill existing products (like using PEX to sell 3d graphics hardware, by drawing rotating 3-d cubes in an attempt to hypnotize people).
Remember UIL? I heard that was written by the VMS trolls at DEC, who naturally designed it with a 132-column line length limitation and no pre-processor, of course. The word on the street was that DEC threw down the gauntlet and insisted on UIL being included in the standard, even though the rest of the committee hated it for sucking so bad. But DEC threatened to hold their breath until they got their way.
And there were a lot of weird dynamics around commercial extensions like Display PostScript, which (as I remember it) was used as an excuse for not fixing the font problems a lot earlier: "If you want to do readable text, then you should be using Display PostScript."
The problem was that Linux doesn't have a vendor to pay the Display PostScript licensing fee to Adobe, so Linux drove a lot of "urban renewal" of problems that had been sidelined by the big blundering companies originally involved with X.
>So we've mostly succeeded at dragging the old whale off the beach and getting it to live again.
Hey, that's a lot better than dynamiting the whale, which seemed like a such good idea at the time! (Oh the humanity!)
https://www.youtube.com/watch?v=AtVSzU20ZGk
From: Jim Gettys <jg@laptop.org> Date: Jan 17, 2007, 11:41 PM
> Cairo looks wonderful! I'm looking forward to using it from Python, which should be lots of fun.
Yup. Cairo is really good stuff. This time we had the benefit of Lyle Ramshaw to get us unstuck. Would that I'd known Lyle in 1986; but it was too late 3 years later when I got to know him.
https://cairographics.org/bibliography/
Here's some more of the discussion with Jim about Cairo and X-Windows:
https://news.ycombinator.com/item?id=7727953
>In 2007, I apologized to Jim Gettys for the tone of the X-Windows Disaster chapter I wrote for the book, to make sure he had no hard feelings and forgave me for my vitriolic rants and cheap shots of criticism:
http://www.donhopkins.com/home/catalog/unix-haters/x-windows...
DH>> I hope you founds it more entertaining than offensive!
JG> At the time, I remember it hurting; now I find it entertaining. Time cures such things. And Motif was definitely a vendor perpetrated unmitigated disaster: the worst of it was that it "succeeded" in unifying the UNIX gui, which means it succeeded at stopping all reasonable work on gui's on UNIX until the young Linux turks took over.
JG> And by '93 or so, the UNIX vendors actively wanted no change, as they had given up on the desktop and any innovation would cost them money.
DH>> The whole "Unix-Haters Handbook" thing was intended to shake up the status quo and inspire people to improve the situation instead of blindly accepting the received view. (And that's what's finally happened, although I can't take the credit, because it largely belongs to Linux -- and now that's the OLPC's mission!)
DH>> The unix-haters mailing list was a spin-off of its-lovers@mit-ai: in order to qualify for the mailing list you had to post a truly vitriolic no-holds-barred eyeball-popping flame.
DH>> I hope that helps to explain the tone of "The X-Windows Disaster", which I wrote to blow off steam while I was developing the X11 version of SimCity.
JG> Yup. I won't hold it against you ;-). Though any operating system with ddt as its shell is downright user hostile...
JG>>> The day I thought X was dead was the day I installed CDE on my Alpha. [...]
And more about Pango, the text rendering library on top of Cairo, the OLPC's Sugar user interface, which was built on PyGTK, and the OLPC Read book reader app that used the Cairo-based Poppler PDF rendering library:
https://en.wikipedia.org/wiki/Poppler_(software)
https://news.ycombinator.com/item?id=16852148
>I worked on making the Read activity usable in book mode (keyboard folded away, but gamepad buttons usable), and I vaguely recall putting in an ioctl to put the CPU to sleep after you turned a page, but I'm not sure if my changes made it in. [...]
>Sugar had a long way to go, and wasn't very well documented. They were trying to do too much from scratch, and choose a technically good but not winning platform. It was trying to be far too revolutionary, but at the same time building on top of layers and layers of legacy stack (X11, GTK, GTK Objects, PyGTK bindings, Python, etc).
>Sugar was written in Python and built on top of PyGTK, which necessitated buying into a lot of "stuff". On top of that, it used other Python modules and GTK bindings like Cairo for imaging, Pango for text, etc. All great industrial strength stuff. But then it had its own higher level Hippo canvas and user interface stuff on top of that, which never really went anywhere (for good reason: it was complex because it was written for PyGTK in a misshapen mish-mash of Python and C with the GTK object system, instead of pure simple Python code -- hardly what Alan Kay thinks of as "object oriented programming"). And for browser based stuff there were the Python bindings to xulrunner, which just made you yearn for pure JavaScript without all the layers of adaptive middle-ware between incompatible object systems.
>The problem is that Sugar missed the JavaScript/Web Browser boat (by arriving a bit too early, or actually just not having enough situational awareness). Sugar should have been written in JavaScript and run in any browser (or in an Electron-like shell such as xulrunner). Then it would be like a Chromebook, and it would benefit from the enormous amount of energy being put into the JavaScript/HTML platform. Python and GTK just hasn't had that much lovin'.
>When I ported the multi player TCL/Tk/X11 version of SimCity to the OLPC, I ripped out the multi player support because it was too low level and required granting full permission to your X server to other players. I intended to eventually reimplement it on top of the Sugar grid networking and multi user activity stuff, but that never materialized, and it would have been a completely different architecture than one X11 client connecting to multiple X11 servers.
>Then I made a simple shell script based wrapper around the TCL/Tk application, to start and stop it from the Sugar menus. It wasn't any more integrated with Sugar than that. Of course the long term plan was to rewrite it from the ground up so it was scriptable in Python, and took advantage of all the fancy Sugar stuff.
>But since the Sugar stuff wasn't ready yet, I spent my time ripping out TCL/Tk, translating the C code to C++, wrapping it with SWIG and plugging it into Python, then implementing a pure PyGTK/Cairo user interface, without any Sugar stuff, which would at least be a small step in the direction of supporting Sugar, and big step in the direction of supporting any other platform (like the web).
[...]
The author did exactly this:
> Even better, I didn’t mention that I wasn’t actually running this program on my laptop. It was running on my router in another room, but everything worked as if
In the article, the author uses OpenGL to make sure that they're interacting with the screen at a "lower level" than plenty of apps that were written against X. But that's the rub: I think the author neatly sidestepped the issue by mostly using stuff that's not in "vanilla" X11. In fact, the "standard" API of X via Xlib seems to only expose functions for working in raw pixels and raw pixel coordinates without any kind of scaling awareness. See XDrawLine as an example: https://www.x.org/releases/current/doc/man/man3/XDrawLine.3....
It seems to me that the RandR extension through xrandr is the thing providing the scaling info, not X11 itself. You can see that because the author calls `XRRGetScreenResourcesCurrent()`, a function that's not part of vanilla X11 (see the list of X library functions here as an example: https://www.x.org/releases/current/doc/man/man3/ )
Now, xrandr has been a thing since the early 2000s, hence why it is ubiquitous, but due to its nature as an extension, and the plenty of existing code sitting around that's totally scale-unaware, I can see why folks believe X11 is scale unaware.
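For reference, a small sketch of the kind of query involved, using Xlib plus the Xrandr extension library (compile with -lX11 -lXrandr):

```c
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>
#include <stdio.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    XRRScreenResources *res = XRRGetScreenResourcesCurrent(dpy, DefaultRootWindow(dpy));
    for (int i = 0; i < res->noutput; i++) {
        XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);
        if (out->connection == RR_Connected && out->crtc && out->mm_width > 0) {
            XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, out->crtc);
            /* Pixels per inch = horizontal pixels / physical width in inches. */
            double dpi = crtc->width * 25.4 / out->mm_width;
            printf("%s: %.0f DPI (scale ~%.2f relative to 96)\n",
                   out->name, dpi, dpi / 96.0);
            XRRFreeCrtcInfo(crtc);
        }
        XRRFreeOutputInfo(out);
    }
    XRRFreeScreenResources(res);
    XCloseDisplay(dpy);
    return 0;
}
```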
xrandr --output eDP --scale 0.8x0.8
For years and years, and I never really noticed any problems with it. Guess I don't run any "bad" scale-unaware programs? Or maybe I just never noticed(?) At least from my perspective, for all practical purposes it seems to "just work".
--output eDP
This parameter specifies which display to scale, so only the built-in display will be scaled. Running xrandr without any parameters returns all available outputs, as well as the resolutions the currently connected displays support.

I still think X11 forwarding over SSH is a super cool and unsung/undersung feature. I know there are plenty of good reasons we don't really "do it these days" but I have had some good experiences where running the UI of a server app locally was useful. (Okay, it was more fun than useful, but it was useful.)
It felt like a prototype feature that never became production-ready for that reason alone. Then there's all the security concerns that solidify that.
But yes, it does work reasonably well, and it is actually really cool. I just wish it were... better.
For example, both versions of Emacs would download the lengths of each line on the screen when you started a selection, so you could drag and select the text and animate the selection overlay without any network traffic at all, without sending mouse move events over the network, only sending messages when you autoscrolled or released the button.
http://www.bitsavers.org/pdf/sun/NeWS/800-5543-10_The_NeWS_T... document page 2, pdf page 36:
>Thin wire
>TNT programs perform well over low bandwidth client-server connections such as telephone lines or overloaded networks because the OPEN LOOK components live in the window server and interact with the user without involving the client program at all.
>Application programmers can take advantage of the programmable server in this way as well. For example, you can download user-interaction code that animates some operation.
UniPress Emacs NeWS Driver:
https://github.com/SimHacker/NeMACS/blob/b5e34228045d544fcb7...
Selection support with local feedback:
https://github.com/SimHacker/NeMACS/blob/b5e34228045d544fcb7...
Gnu Emacs 18 NeWS Driver (search for LocalSelectionStart):
https://donhopkins.com/home/code/emacs18/src/tnt.ps
https://news.ycombinator.com/item?id=26113192
DonHopkins on Feb 12, 2021, on: Interview with Bill Joy (1984):
>Bill was probably referring to what RMS calls "Evil Software Hoarder Emacs" aka "UniPress Emacs", which was the commercially supported version of James Gosling's Unix Emacs (aka Gosling Emacs / Gosmacs / UniPress Emacs / Unimacs) sold by UniPress Software, and it actually cost a thousand or so for a source license (but I don't remember how much a binary license was). Sun had the source installed on their file servers while Gosling was working there, which was probably how Bill Joy had access to it, although it was likely just a free courtesy license, so Gosling didn't have to pay to license his own code back from UniPress to use at Sun. https://en.wikipedia.org/wiki/Gosling_Emacs
>I worked at UniPress on the Emacs display driver for the NeWS window system (the PostScript based window system that James Gosling also wrote), with Mike "Emacs Hacker Boss" Gallaher, who was in charge of Emacs development at UniPress. One day during the 80's Mike and I were wandering around an East coast science fiction convention, and ran into RMS, who's a regular fixture at such events.
>Mike said: "Hello, Richard. I heard a rumor that your house burned down. That's terrible! Is it true?"
>RMS replied right back: "Yes, it did. But where you work, you probably heard about it in advance."
>Everybody laughed. It was a joke! Nobody's feelings were hurt. He's a funny guy, quick on his feet!
In the late 80's, if you had a fast LAN and not a lot of memory and disk (like a 4 meg "dickless" Sun 3/50), it actually was more efficient to run X11 Emacs and even the X11 window manager itself over the LAN on another workstation than on your own, because then you didn't suffer from frequent context switches and paging every keystroke and mouse movement and click.
The X11 server and Emacs and WM didn't need to context switch to simply send messages over the network and paint the screen if you ran emacs and the WM remotely, so Emacs and the WM weren't constantly fighting with the X11 server for memory and CPU. Context switches were really expensive on a 68k workstation, and the way X11 is designed, especially with its outboard window manager, context switching from ping-ponging messages back and forth and back and forth and back and forth and back and forth between X11 and the WM and X11 and Emacs every keystroke or mouse movement or click or window event KILLED performance and caused huge amounts of virtual memory thrashing and costly context switching.
Of course NeWS eliminated all that nonsense gatling gun network ping-ponging and context switching, which was the whole point of its design.
That's the same reason using client-side Google Maps via AJAX of 20 years ago was so much better than the server-side Xerox PARC Map Viewer via http of 32 years ago.
https://en.wikipedia.org/wiki/Xerox_PARC_Map_Viewer
Outboard X11 ICCCM window managers are the worst possible most inefficient way you could ever possibly design a window manager, and that's not even touching on their extreme complexity and interoperability problems. It's the one program you NEED to be running in the same context as the window system to synchronously and seamlessly handle events without dropping them on the floor and deadlocking (google "X11 server grab" if you don't get what this means), but instead X11 brutally slices the server and window manager apart like King Solomon following through with his child-sharing strategy.
https://tronche.com/gui/x/xlib/window-and-session-manager/XG...
NeWS, meanwhile, not only runs the window manager efficiently in the server without any context switching or network overhead, it also lets you easily plug in your own customized window frames (with tabs and pie menus), implement fancy features like rooms and virtual scrolling desktops, and all kinds of cool stuff! At Sun we were even managing X11 windows with a NeWS ICCCM window manager written in PostScript, wrapping tabbed windows with pie menus around your X-Windows!
https://donhopkins.com/home/archive/NeWS/owm.ps.txt
https://donhopkins.com/home/archive/NeWS/win/xwm.ps
https://www.donhopkins.com/home/catalog/unix-haters/x-window...
If so, good for you, but there are plenty of people who do so.
Hell, in X11 it even sucks if you use the screens one at a time, e.g. if you plug your laptop into the monitor and turn off the laptop screen, and then later unplug and continue your work on the laptop, the scaling will be off on at least one device.
Why can't you just display a blurry rectangle until the mouse cursor goes to the other screen and then you switch the primary resolution from one screen to the other?
I feel like trying to be extremely clever to handle this particular problem would lead to a solution that handles far more common situations much worse for everybody...
In that case, the approach taken in macOS is nicer - it just hides the half of the window where the pointer wasn't when the window was dragged. While dragging it does a resample of the bitmap to the screen where that part is shown.
OTOH it is questionable if this is really all that important. Most of the time if a window spans more than one screen it's temporary because you are just moving the window from one screen to another.
Well, ok, on Windows if you keep to certain standard elements of Windows API and only use standard widgets you could get close to transparency.
No one with a good grasp of the space ever claimed that it wasn't possible on X11 to call into APIs to retrieve physical display size and map that to how many pixels to render. This has been possible for decades, and while not completely trivial is not the hard part about doing good UI scaling.
Doing good UI scaling requires infrastructure for dynamic scaling changes, for different scale factors per display within the same scene, for guaranteeing crisp hairlines at any scale factor, and so on and so forth.
Many of these problems could have been solved in X11 with additional effort, and some even made it to partial solutions available. The community simply chose to put its energy into bringing it all together in the Wayland stack instead.
A KDE developer wrote recently:
> X11 isn’t able to perform up to the standards of what people expect today with respect to .., 10 bits-per-color monitors,.. multi-monitor setups (especially with mixed DPIs or refresh rates),... [1]
Multi-monitor setups have been working for 20+ years. 10-bit color is also supported (otherwise how would the PRO versions of graphics cards support this feature?).
> chose to put its energy into bringing it all together in
I cannot recall: was there any paper analyzing why the working and almost-working X11 features do not fit, why a few additional X11 extensions could not be proposed instead, and why another solution from scratch was inevitable? What is the significant difference between an X11 and a Wayland protocol extension?
[1] https://pointieststick.com/2025/06/21/about-plasmas-x11-sess...
That's quite similar to how I chose to phrase it, and it comes down to where the community chose to spend the effort to solve all the integration issues to make it so.
Did the community decide that after a long soul-searching process that ended with a conclusion that things were impossible to make happen in X11, and does that paper you invoke exist? No, not really. Conversations like this certainly did take place, but I would say more in informal settings, e.g. discussions on lists and at places like the X.org conference. Plenty of "Does it make sense to do that in X11 still or do we start over?" chatter in both back in the day.
If I recall right, the most serious effort was a couple of people taking a few weeks to entertain a "Could we fix this in an X12 and how much would that break?" scenario. Digging up the old fdo wiki pages on that one would for sure be interesting for the history books.
The closest analogue I can think of that most of the HN audience is familiar with is probably the Python 2->3 transition and the decision to clean things up at the expense of backward compat. To this day, you will of course find folks arguing emotionally on either side of the Python argument as well.
For the most part, the story of how this happened is a bit simpler: It used to be that the most used X11 display server was a huge monolith that did many things the kernel would not, all the way to crazy things like managing PCI bus access in user space.
This slowly changed over the years, with strengthening kernel infra like DRM, the appearance of Kernel Mode Setting, and the evolution of libraries like Mesa. Suddenly implementing a display server became a much simpler affair that could mostly call into a bunch of stuff implemented elsewhere.
This created an opening for a new, smaller project fully focused on the wire protocol and protocol semantics, throwing away a lot of old baggage and code. Someone took the time to do that and demonstrate what it looks like, other people liked what they saw, and Wayland was born.
This also means: Plenty of the useful code of the X11 era actually still exists. One of the biggest myths is that Wayland somehow started over from scratch. A lot of the aforementioned stuff that over the years migrated from the X11 server to e.g. the kernel is obviously still what makes things work now, and libraries such as libinput, xkbcommon that nearly every Wayland display server implementation uses are likewise factored out of the X11 stack.
Of course, newer KWin versions also add many odd issues with X11, so I'm sure they will be equally buggy soon enough and users can finally switch without such concerns.
> The closest analogue I can think of that most of the HN audience is familiar with is probably the Python 2->3 transition and the decision to clean things up at the expense of backward compat. To this day, you will of course find folks arguing emotionally on either side of the Python argument as well.
Yes, that was and still is a huge clusterfuck and prime example of things not to do - precisely because it is full of completely arbitrary compatibility breaks.
X has been on life support for decades now, with new capabilities just bolted on without a care in the world. But the actual system works quite inconsistently, and some things will, presumably, never work.
I keep hearing this.
My preferred desktop is Unity. I also like the ROX desktop, and Openbox, and I used to like EDE and XPde. I find CDE interesting to play with and want to try the Maxx Interactive Desktop, a version of SGI's IRIX desktop. LXDE was clunky but it worked for me, but LXQt isn't: its vertical taskbar has been broken since before version 1.0.
Not one of those working environments can use Wayland, and all of them are unlikely ever to.
I detest KDE, which I find horribly overcluttered and messily inconsistent, and I also detest GNOME >=3 which feels like a phone UI on a desktop: it's missing almost every option I want. They are two extremes, one overly complicated, one overly minimal. I do not use the shell much so I have no interest in tiling environments.
There's not a single environment I find bearable on Wayland today. Maybe, by 2027, there will be a usable Xfce.
In other words, in terms that matter to me personally, Wayland is not better in any way whatsoever, and nothing I use works.
I say this not to be confrontational, but merely to point out that while one person can say "but everything works!" the claim can be true for them while not generalising at all.
And doing this for everything in the entire ecosystem of ancient GUI libraries? And dealing with the litany of different ways folks have done icons, text, and even just drawing lines onto the screen? That's where you run into a lot of trouble.
The thing X11 really is missing (at least most importantly) is DPI virtualization. UI scaling isn't a feature most display servers implement because most display servers don't implement the actual UI bits. The lack of DPI virtualization is a problem though, because it leaves windows on their own to figure out how to logically scale input and output coordinates. Worse, they have to do it per monitor, and can't do anything about the fact that part of the window will look wrong if it overlaps two displays with different scaling. If anything doesn't do this or does it slightly differently, it will look wrong, and the user has little recourse beyond searching for environment variables or X properties that might make it work.
Explaining all of that is harder than saying that X11 has poor display scaling support. Saying it "doesn't support UI/display scaling" is kind of a misnomer though; that's not exactly the problem.
It's silly that people keep complaining about this. It's a very minor effect, and one that can be solved in principle only by moving to pure vector rendering for everything. Generally speaking, a window will only ever span a single screen. It's convenient to be able to drag a window to a separate monitor, but having that kind of overlap as a permanent feature of one's workflow is just crazy.
> The thing X11 really is missing (at least most importantly) is DPI virtualization.
Shouldn't that kind of DPI virtualization be a concern for toolkits rather than the X server or protocol? As long as X is getting accurate DPI information from the hardware and reporting that to clients, what else is needed?
If you have DPI virtualization, a very sufficient solution already exists: pick a reasonable scale factor for the underlying buffer and use it, then resample for any outputs that don't match. This is what happens in most Wayland compositors. Exactly what you pick isn't too important. You could pick whichever output overlaps the most with the window, or the output that has the highest scale factor, or some other criteria. It will not result in perfect pixels everywhere, but it is perfectly sufficient to clean up the visual artifacts.
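To make that concrete, here's a rough sketch in Python (made-up types, not any real compositor's API) of the "largest overlap wins" heuristic described above:

from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

@dataclass
class Output:
    geometry: Rect
    scale: float  # e.g. 1.0, 1.5, 2.0

def overlap_area(a: Rect, b: Rect) -> int:
    # Area of the intersection of two rectangles; 0 if they don't touch.
    dx = min(a.x + a.w, b.x + b.w) - max(a.x, b.x)
    dy = min(a.y + a.h, b.y + b.h) - max(a.y, b.y)
    return max(dx, 0) * max(dy, 0)

def pick_buffer_scale(window: Rect, outputs: list[Output]) -> float:
    # Render the buffer at the scale of the output the window overlaps the most;
    # the compositor resamples it for any other output the window touches.
    best = max(outputs, key=lambda o: overlap_area(window, o.geometry), default=None)
    if best is None or overlap_area(window, best.geometry) == 0:
        return 1.0  # window is off-screen; any fallback is fine
    return best.scale

As the paragraph above says, the exact tie-breaking rule isn't important; what matters is that each window ends up with one well-defined buffer scale.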
Another solution would be to simply only present the surface on whatever output it primarily overlaps with. MacOS does this and it's seemingly sufficient. Unfortunately, as far as I understand, this isn't really trivial to do in X11 for the same reasons why DPI virtualization isn't trivial: whether you render it or not, the window is still in that region and will still receive input there.
> Generally speaking, a window will only ever span a single screen. It's convenient to be able to drag a window to a separate monitor, but having that kind of overlap as a permanent feature of one's workflow is just crazy.
The issue with the overlap isn't that people routinely need this; if they did, macOS or Windows would also need a more complete solution. In reality though, it's just a very janky visual glitch that isn't really too consequential for your actual workflow. Still, it really can make moving windows across outputs super janky, especially since in practice different applications do sometimes choose different behaviors. (e.g. will your toolkit choose to resize the window so it has the same logical size? will this impact the window dragging operation?)
So really, the main benefit of solving this particular edge case is just to make the UX of window management better.
While UX and visual jank concerns are below concerns about functionality, I still think they have non-zero (and sometimes non-low) importance. Laptop users expect to be able to dock and manage windows effectively regardless of whether the monitors they are using have the same ideal scale factor as the laptop's internal panel; the behavior should be clean and effective and legacy apps should ideally at least appear correct even if blurry. Being able to do DPI virtualization solves the whole set of problems very cleanly. MacOS is doing this right, Windows is finally doing this right, Wayland is doing this right, X11 still can't yet. (It's not physically impossible, but it would require quite a lot of work since it would require modifying everything that handles coordinate spaces I believe.)
> Shouldn't that kind of DPI virtualization be a concern for toolkits rather than the X server or protocol? As long as X is getting accurate DPI information from the hardware and reporting that to clients, what else is needed?
Accurate DPI information is insufficient on its own, as users may want to scale differently anyway, whether due to preference, higher viewing distance, or disability.
That said, the other issue is that there already exist applications that don't do perfect per-monitor scaling, and there doesn't exist a single standard way to have per-monitor scaling preferences propagated in X11. It's not even necessarily a solved problem among the latest versions of all of the toolkits, since it at minimum requires support for desktop environment settings daemons, etc.
Which is fine. There's already a standardized property in XSETTINGS to use on X11 to advertise the user's scaling preference. For Wayland they decided to include this into the protocol, so it can be per-output and/or per-window (though the per-window fractional scaling stuff is an optional extension, sigh).
There's no reason why we couldn't do something similar on X11, via xrandr output properties and X window properties. But it's more fun to abandon things and invent new ones than fix the things you have, so here we are.
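As a toy illustration of what already exists globally (not the per-output scheme being proposed here): here is roughly how a client can derive a scale factor from the Xft.dpi resource today. XSETTINGS itself is a binary property protocol, so this shows the simpler xrdb path many toolkits effectively fall back to:

import subprocess

def global_scale(baseline_dpi: float = 96.0) -> float:
    # Query the X resource database; Xft.dpi / 96 is commonly treated as
    # the user's UI scale when nothing better is available.
    out = subprocess.run(["xrdb", "-query"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.split(":")[0].strip() == "Xft.dpi":
            return float(line.split(":", 1)[1].strip()) / baseline_dpi
    return 1.0  # no preference set anywhere

print(global_scale())  # prints e.g. 2.0 when 'Xft.dpi: 192' is set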
The same folks who are working on Wayland today did a lot of work to get X.org to where it is now. They could do more, but the writing was on the wall.
In the past, the problem with UI toolkits doing proportional sizing was that they used bitmaps for UI elements. Since newer versions of Qt and Gtk 4 render programmatically, they can do it the right way. Windows mostly does this too, even with win32, as long as you're using the newer themes. MacOS is the only one that has assets prerendered at integer factors everywhere and needs to perform framebuffer scaling to change sizes. But Apple doesn't care because they don't want you using third-party monitors anyway.
Edit: I'm not sure about Apple's new theme. Maybe this is their transition point away from fixed asset sizes.
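To illustrate what programmatic rendering buys you (a toy sketch, not any toolkit's actual API): at a fractional scale you can still land every widget edge on a device pixel by scaling and rounding edges per element, instead of rendering at an integer factor and rescaling the bitmap afterwards:

def snap_rect(x: float, y: float, w: float, h: float, scale: float):
    # Round the scaled edges rather than the width/height so that neighbouring
    # widgets stay flush and 1px borders stay exactly 1 device pixel wide.
    left, top = round(x * scale), round(y * scale)
    right, bottom = round((x + w) * scale), round((y + h) * scale)
    return left, top, right - left, bottom - top

# A 100x30 logical button at 1.5x lands exactly on the pixel grid: (0, 0, 150, 45)
print(snap_rect(0, 0, 100, 30, 1.5))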
Win32 controls have always been DPI independent, as far back as Windows 95. There is DPI choice UX as part of the "advanced" display settings.
In practice Windows and macOS both do bitmap scaling when necessary. macOS scales the whole frame buffer, Windows scales windows individually.
Can you do an entire windowing pipeline where it's vectors all the way until the actual compositing? Well, sure! We were kind of close in the pre-compositing era sometimes. Is it worth it to do so? I don't think so for now. Most desktop displays are made up of standard-ish pixels, so buffers full of pixels make a very good primitive. So making the surfaces themselves out of pixels seems like a fine approach, and the scaling problem is relatively easy to solve if you start with a clean slate. The fact that it can handle the "window splitting across outputs" case slightly better is not a particularly strong draw; I don't believe most users actually want to use windows split across outputs, it's just better UX if things at least appear correct. Same thing for legacy apps, really: if you run an old app that doesn't support scaling, it's still better for it to work and appear blurry than to be tiny and unusable.
What to make of this. Well, the desktop platform hasn't moved so fast; ten years of progress has become little more than superficial at this point. So I think we can expect with relatively minor concessions that barring an unforeseen change, desktops we use 10 to 20 years from now probably won't be that different from what we have today; what we have today isn't even that different from what we already had 20 years ago as it is. And you can see that in people's attitudes; why fix what isn't broken? That's the sentiment of people who believe in an X11 future.
Of course in practice, there's nothing particularly wrong with trying to keep bashing X11 into modernity; with much pain they definitely managed to take X.org and make it shockingly good. Ironically, if some of the same people working on Wayland today had put less work into keeping X.org working well, the case for Wayland would be much stronger by now. Still, I really feel like roughly nobody actually wants to sit there and try to wedge HDR or DPI virtualization into X11, and retooling X11 without regard for backwards compatibility is somewhat silly since if you're going to break old apps you may as well just start fresh.
Wayland has always had tons of problems yet I always bet on it as the most likely option simply because it just makes the most sense to me and I don't see any showstoppers that seem like they would be insurmountable. Lo and behold, it sure seems to me that the issues remaining for Wayland adoption have started to become more and more minor. KDE maintains a nice list of more serious drawbacks. It used to be a whole hell of a lot larger!
https://community.kde.org/Plasma/Wayland_Known_Significant_I...
The underlying issue with this is the use of fixed-layout interfaces in Win32. If you tweak the layout dynamically to be "responsive" to how the text wraps, this becomes an absolute non-issue. It could also be done with reasonable efficiency at the time; early versions of KDE/Qt already did this out of the box on the same hardware as Win9x.
That's a shitty "solution" that doesn't even solve the issue - the result will still look bad on at least one monitor and you're wasting energy pushing more pixels than needed on the other one.
Users think they want a lot of things they don't really need. Do we really want to hand users that loaded gun so that they can choose incorrectly where to fire?
For example, if I'm using KDE on a TV, which by the way I am (with Bazzite to be exact, works great) then I want to set the scale factor in KDE higher because I'm going to be standing further away. This is not optional; the UI is completely unreadable if you just let it use the physical dimensions to scale. There's nothing you can do. A preference is necessary to handle this case.
You could argue that this is PEBKAC, ignoring the fact that desktop environments care about this use case, but what you can't argue with is this: it's an accessibility issue. Having a magnifier tool is very important for people who have vision issues, but it is not enough. Users with vision problems need to be able to scale the UI. And yes, the UI, not text size. Changing the text size helps for text, but not for things like icons.
If you want to be able to sell Linux on devices in the EU, then having sufficient accessibility features is not optional.
> Worse, they have to do it per monitor, and can't do anything about the fact that part of the window will look wrong if it overlaps two displays with different scaling.
That is not a real issue. Certainly not anything worth breaking backwards compatibility over, and even if you care about cosmetic issues like this you can fix them with extensions.
You can fix this with extensions... kind of, anyway. It's really not that trivial. If you do DPI virtualization, you need it to take effect across literally everything. For example, some applications in X11 will read xrandr information for window placement. To properly handle DPI-unaware applications, you need to be able to present virtualized coordinates to some applications. This is actually one of the easier problems to solve; it goes downhill from there.
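Just to sketch what "virtualized coordinates" would mean here (purely illustrative, no real X APIs involved): a DPI-unaware client asking for monitor geometry has to be answered in logical pixels, while the server keeps the device-pixel truth for itself:

def virtualize_geometry(device_rect, scale):
    # device_rect is (x, y, w, h) in physical pixels; report it divided by the
    # per-monitor scale so a legacy app lays itself out in logical pixels, and
    # scale its buffer back up when presenting.
    x, y, w, h = device_rect
    return (round(x / scale), round(y / scale), round(w / scale), round(h / scale))

# A 3840x2160 panel at 2x looks like a 1920x1080 one to a legacy app.
print(virtualize_geometry((0, 0, 3840, 2160), 2.0))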
$ ssh <remote> glxgears
runs fine! A couple of years ago I could not get anything OpenGL working over ssh, no matter how hard I tried. Ever since I just accepted that as fact. But I tested it now and it just works!
I was doing this circa 2005 with an OpenGL program running on a Linux box in a server closet and a Windows machine running on my desk running some X11 server. Before I did that, I did research into the remote-draw capability of both X11 and OpenGL and came to the conclusion that what I eventually ended up doing would work just fine.
Traditionally it's used to launch a full-screen application, usually a game, but you can launch your window manager through it, if you want your desktop session to use a custom resolution with custom scaling and/or letterboxing.
Skill issue. You probably held your keyboard wrong or something. Simple xrandr commands work fine like they have for decades. (Of course if you've moved to Wayland then who knows).
CUDA, and ray tracing performance.
I get it, I've heard the same from the sway maintainer, maintaining their crusade against Nvidia for a couple years now (the --unsupported-gpu flag used to be something like --my-next-gpu-will-be-amd), with some good arguments about how anti-foss Nvidia is. And if that's how sway wants to be then that's how sway's gonna be.
But with the Steam Deck and now SteamOS showing better performance than Windows on handheld gaming devices, and with people getting more and more annoyed with Microsoft's bullshit in Windows (like unwanted AI integrations), I think all of us Linux enthusiasts have a really good opportunity here to pull in a huge influx of people, if we're willing to budge just a bit on some of our dogmatic crusades against companies like Nvidia.
There are more Nvidia cards in the wild than AMD, according to Steam surveys, by a huge margin. If we can get Wayland display servers working well on Nvidia, then that's a lot of new people we can bring into the fold!
Going to OpenGL is a nice tactic, since OpenGL doesn't give a flip about screen coördinates anyway.
I miss NeWS - it actually brought a number of great capabilities to a window system - none of which, AFAIK, are offered by Wayland.
If I were to play Dark Souls 3 and/or Elden Ring on Linux without TearFree, there is significant screen tearing and the game feels very choppy.
To enable TearFree on Xorg, you typically create a new configuration file in /etc/X11/xorg.conf.d/ to append to the X configuration:
https://wiki.archlinux.org/title/AMDGPU#Tear_free_rendering
There are downsides to this, but I would imagine they are only problems on older GPUs.
https://unix.stackexchange.com/questions/518362/whats-the-do...
I've never noticed these downsides personally and everything seems to work great.
I don't like Wayland. It still seems very buggy and I am running Debian Trixie and would prefer to keep using X11.
But IME Wayland does seem to have higher performance on older hardware than X. My old laptop could barely play YouTube with X11 (it was the video playback itself, not YouTube being a resource hog; I checked); Wayland performance is much better.
Did you check by downloading the video and playing it with a good standalone video player like mplayer, vlc, or mpv? If you didn't, then you didn't disentangle the web browser from the video playback.
The only thing that was different was Wayland vs X11. Same browser, same browser settings, same OS and same plugins.
Neat. Did you test outside of the browser? Based on your report, it sounds like you didn't. As you must know, the renderers in web browsers are very, very complex. I suggest you test with a standalone video player before you go blaming the underlying windowing system for performance issues.
My instance of Firefox has been configured to use only software rendering. This YouTube video <https://www.youtube.com/watch?v=tO01J-M3g0U> runs fine in both Firefox and mpv. This YouTube video <https://www.youtube.com/watch?v=WjoplqS1u18> drops many frames when played at 8K in Firefox (making it choppy and sluggish), but zero when played at 8K in mpv.
There are a great many variables in play when playing something through a web browser. That's why I suggested you re-run the test without the web browser.
Speaking of "a great many variables"...
> The machine went from sluggish and painful to use, to being reasonably decent.
Then something seems to be wrong with your Xorg config. Whether it's the drivers, the configuration of the system, or both, I don't have enough information to know. Are you running Xorg on an ARM Apple machine? That's apparently known to work very, very poorly because Apple's graphics hardware is "special". Are you running an un-accelerated Xorg video driver (like the VESA or fbdev drivers), or are you perhaps using the nouveau driver on Nvidia hardware? The former would certainly be very slow. The latter is known to work fine for some folks and really, really poorly for others.
> I don't appreciate your snark.
It's not snark. It's an earnest request to reduce the number of moving parts to make troubleshooting easier. And (as we've discovered from further testimony) the web browser wasn't even involved in the slowness... the problem is a misconfiguration of your Xorg install. We would have discovered this if you'd run the requested test, but incidental self-report works just as well.
https://unix.stackexchange.com/questions/518362/whats-the-do...
I think the extra requirements aren't a problem on modern cards. However on lower end devices e.g. the older intel iGPUs, I could see this becoming an issue.
My money is on Wayland enabling the equivalent of this setting by default.
> I presume there was some tradeoff which...
Did you notice any problems after enabling the setting? If you didn't notice any problems, then why would you care about any hypothetical tradeoffs?
What hardware are you running on?
Among the many systems I have, I have a laptop running an Intel 945GM [0]. I don't see the behavior you're reporting even if I have it hooked up [1] to a 1080p external display. On that system, I have zero Xorg config files... it's all default settings.
I also don't see the behavior you report on any of my much more powerful systems.
[0] Integrated graphics chip released somewhere around 2006
[1] Via VGA cable!
And Wayland has been around for at least 15 years, btw, not 5. You'd think 15 years would be long enough to get something stable, but apparently not.
Unless you like your applications to save your window positions. I like Firefox to be on my left monitor, and if I use Wayland I have to manually drag it there every time I start it, because Wayland, in the year 2025, still lacks this basic feature that Windows, macOS, and X11 have had for like 40 years now.
(unless I use XWayland, which magically returns all of the missing functionality, though with a tendency to break other things)
It's nice to not have tearing. But IMO the functionality loss vs X11 isn't worth it for anything but a dedicated media playback/editing device.
If you're running AMD hardware, try enabling the TearFree option. [0] I've been using this for years and years and years and it works fine.
[0] See this for a config file you could plop into place: <https://news.ycombinator.com/item?id=44375247>
Yes, you can! Waypipe came out 6 years ago. Its express purpose is to make a Wayland equivalent to ssh -X. https://gitlab.freedesktop.org/mstoeckl/waypipe/
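The usage is deliberately ssh-like; if I remember the README correctly, something along the lines of
$ waypipe ssh user@remote-host some-wayland-app
runs the app on the remote machine and proxies its Wayland protocol back to your local compositor.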
The problem is this usecase sucks major ass on X and has for a decade at least. It worked meh at one point, but as modern applications became more complex and X exploded in complexity it no longer makes any sense.
X is an unbelievably chatty protocol. Believe it or not, it's primarily meant to be run over a local Unix socket (with bulk data like images typically going through shared memory via MIT-SHM). Running it over the network has incredible latency, terrible lag spikes, and your windows will just kill themselves somewhat randomly.
There are newer remote desktop protocols which are literally just better.
FWIW, I do see screen tearing on my X11 multi-monitor setup. I just don't care.
If you ever get really bored one day, and you have nothing else to do, and you're using AMD/ATi hardware, try enabling the TearFree option for your video card driver. Something like
Section "Device"
Identifier "AMD"
Driver "amdgpu"
Option "TearFree" "on"
EndSection
in a new .conf file in '/etc/X11/xorg.conf.d' and a restart of your display server(s) should do the trick. It works fine for me, and has worked fine for like a decade or more.
Not particularly, if you are on a low-latency network. Modern UI toolkits make applications way less responsive than classical X11 applications running across gigabit ethernet.
And even on a fast network the Wayland alternative of "use RDP" is almost unusable.
Also, if you look at the source, it's specifying direct rendering in glXCreateContext: https://humungus.tedunangst.com/r/xtoys/v/tip/f/circle.c & https://registry.khronos.org/OpenGL-Refpages/gl2.1/xhtml/glX...
The only thing leaving that process is a pixbuf, zero X11, identical to Wayland.
https://en.wikipedia.org/wiki/The_UNIX-HATERS_Handbook
https://web.archive.org/web/20201120053257/http://www.simson...
And a description of the screen spanning ruler app (or rather "Desk Accessory") that the Mac had around 1987 or so (I specifically remember Hugh showing it to me a few months before Black Monday).
https://en.wikipedia.org/wiki/Desk_accessory
https://en.wikipedia.org/wiki/Black_Monday_(1987)
Just like the article you apparently didn't read mentioned:
>With my new knowledge, I also wrote an onscreen ruler using the shape extension. Somewhat tautological for measuring the two inch circle, but in the event anyone asks, I can now tell them my terminal line height is 1/8”, and yes, I measured.
https://humungus.tedunangst.com/r/xtoys/v/tip/f/ruler.c
Now 38 years later, is there a ruler app for X-Windows that can span multiple screens yet? Or Wayland, even? Why don't you write one!
Oh shit, is that what the "next" button is for? TYVM, and I mean that unironically.
Apparently you have to be criticizing X11 for more than three decades now. Since you seem to know your stuff, could you please post a link to your git repository containing your personal display server that solves all the problems?
https://www.donhopkins.com/home/archive/NeWS/rms.news.txt
https://www.donhopkins.com/home/archive/NeWS/news-ooo-review...
https://www.donhopkins.com/home/archive/NeWS/questionaire.tx...
https://www.donhopkins.com/home/archive/NeWS/grasshopper.msg...
https://www.donhopkins.com/home/archive/NeWS/sevans.txt
https://www.donhopkins.com/home/archive/NeWS/Explanation.txt
But since you asked so kindly and said "please", here are some youtube videos, articles, and presentations -- thank you so much for asking and expressing your interest in learning and reading my ideas and code, I'm deeply flattered and delighted to oblige your request! ;)
You asked for it, you got it: Toyota!
https://www.youtube.com/watch?v=jVHg1CjqLk8
Do you have any interesting video demos you've recorded or papers or articles you've written that you'd like to share about your own ideas for personal display servers that solve all the problems too? Or comments on any of these?
Ben Shneiderman introduces Pie Menus developed by Don Hopkins at UMD Human Computer Interaction Lab:
https://www.youtube.com/watch?v=kxkfqgAkgDc
>University of Maryland Human Computer Interaction Lab Pie Menu Demos. Introduction by Ben Shneiderman. Research performed under the direction of Mark Weiser and Ben Shneiderman. Pie menus developed and demonstrated by Don Hopkins.
Brad Myers introduces Just the Pie Menus from All the Widgets:
https://www.youtube.com/watch?v=mOLS9I_tdKE
>Pie menu demo excerpts from "All The Widgets" CHI'90 Special Issue #57 ACM SIGGRAPH Video Review. Including Doug Engelbart's NLS demo and the credits. Tape produced by and narrated by Brad Meyers. Research performed under the direction of Mark Weiser and Ben Shneiderman. Pie menus developed and demonstrated by Don Hopkins.
Brad Myers CMU course 05-440 / 05-640: Interaction Techniques (IxT). Intended for Undergraduates, Masters and PhD students!
https://www.cs.cmu.edu/~bam/uicourse/05440inter/
>Don Hopkins was one of the original developers of Pie Menus, and helped contribute to their popularity in games. He published a frequently cited paper about pie menus at CHI'88 with John Raymond Callahan, Ben Shneiderman and Mark Weiser. He then developed and refined pie menus for many platforms and applications including window managers, the Emacs text editors, universal remote controls, TV guide browsers, web browsers, visual programming interfaces, SimCity, The Sims. These took advantage of many kinds of hardware including desktop, mobile, VR, OLPC, mouse, stylus and touch screens. He has published many free and open source software implementations of pie menus for X10, X11, NeWS, Tcl/TK, ScriptX, ActiveX, OpenLaszlo, Python, JavaScript, C#, and Unity3D.
https://www.cs.cmu.edu/~bam/uicourse/05440inter2019/schedule...
Video of Don Hopkins' presentation to Brad Myer's CMU IxT class: Pie Menus: Definition, Rules, and Future Directions:
https://scs.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=...
An Empirical Comparison of Pie vs. Linear Menus (ACM SIGCHI'88):
https://donhopkins.medium.com/an-empirical-comparison-of-pie...
Pie Menus: A 30 Year Retrospective. By Don Hopkins, Ground Up Software, May 15, 2018.
https://donhopkins.medium.com/pie-menus-936fed383ff1
The Design and Implementation of Pie Menus: They’re Fast, Easy, and Self-Revealing. Originally published in Dr. Dobb’s Journal, Dec. 1991, cover story, user interface issue.
https://donhopkins.medium.com/the-design-and-implementation-...
NeWS Tab Window Demo:
https://www.youtube.com/watch?v=tMcmQk-q0k4
>Demo of the Pie Menu Tab Window Manager for The NeWS Toolkit 2.0. Developed and demonstrated by Don Hopkins.
HCIL Demo - HyperTIES Browsing
https://www.youtube.com/watch?v=fZi4gUjaGAM
>Demo of NeWS based HyperTIES authoring tool, by Don Hopkins, at the University of Maryland Human Computer Interaction Lab.
HCIL Demo - HyperTIES Authoring with UniPress Emacs on NeWS:
https://www.youtube.com/watch?v=hhmU2B79EDU
>Demo of UniPress Emacs based HyperTIES authoring tool, by Don Hopkins, at the University of Maryland Human Computer Interaction Lab.
HyperTIES Discussions from Hacker News:
https://donhopkins.medium.com/hyperties-discussions-from-hac...
PSIBER Space Deck Demo:
https://www.youtube.com/watch?v=iuC_DDgQmsM
>Demo of the NeWS PSIBER Space Deck. Research performed under the direction of Mark Weiser and Ben Shneiderman. Developed and documented thanks to the support of John Gilmore and Julia Menapace. Developed and demonstrated by Don Hopkins.
Described in "The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines".
Micropolis Online (SimCity) Web Demo:
https://www.youtube.com/watch?v=8snnqQSI0GE
>A demo of the open source Micropolis Online game (based on the original SimCity Classic source code from Maxis), running on a web server, written in C++ and Python, and displaying in a web browser, written in OpenLaszlo and JavaScript, running in the Flash player. Developed by Don Hopkins.
Micropolis Web Demo 1:
https://www.youtube.com/watch?v=wlHGfNlE8Os
>Micropolis Web is the browser based version of Micropolis (open source SimCity), that uses WebAssembly, WebGL, and SvelteKit. Based on the original SimCity Classic code, designed by Will Wright, ported by Don Hopkins. This first demo shows an early version that runs the WebAssembly simulator and animates the tiles with WebGL, but most of the user interface is still a work in progress.
https://news.ycombinator.com/item?id=39432832
>Simon Schneegan's "Kandu" cross platform pie menus, as well as his older "Fly-Pie" and "Gnome-Pie" projects, let you create and edit your own pie menus with a WYSIWYG drag-and-drop direct manipulation interface. [...]
https://news.ycombinator.com/item?id=44090952
>Here's an X11 window manager, with pie menus and tabbed windows, entirely written in object oriented NeWS PostScript, from around 1991:
https://news.ycombinator.com/item?id=39432170
>Vertical tabs are better in some situations and for some users, horizontal tabs are better in other situations and for other users. So all users should be able to choose to place tabs along any side of any window, and change which side and what position any tab is at any time. Not just tabs for emacs frames or web browser windows, but for ALL windows including top level and internal application windows. And you should also be able to mix tabs from different apps in the same frame, of course. Why not?
>I implemented tabbed window with pie menus for UniPress Emacs in 1988, and still miss them! Later in 1990 I developed several other versions of tabbed windows with pie menus for NeWS that let you manage any NeWS and X11 windows, and drag the tabs around to any edge. [...]
https://news.ycombinator.com/item?id=39432449
>It bewilders me that any rational UI designer would be so arrogant as to make the unilateral unchangeable decision for all their users that they should only have tabs on one side, be it the top, bottom, left or right of the window. Why restrict users to using tabs on only one side and one side only? What's so special about that side, and bad about the other sides? What if the user is left handed, or has a tall monitor, or a wide monitor, or lots of windows, or only a few?
>While you're at it, why not just remove all the arrow keys from the keyboard except one? Then users can argue over whether the left-arrow key is better than the up-arrow key, and users who don't like having only an up-arrow key can buy a keyboard with only a left-arrow key.
>But all keyboards have all four arrow keys, so there are no arguments about which arrow is better: you just use whichever arrow you want, whenever you want.
>Most people prefer to use all four arrows at different times for different purposes, and put their tabs along all four edges, too!
https://news.ycombinator.com/item?id=44173283
>In practice that's what you could do with HyperLook on NeWS: SimCity, Cellular Automata, and Happy Tool for HyperLook (nee HyperNeWS (nee GoodNeWS))
>HyperLook was like HyperCard for NeWS, with PostScript graphics and scripting plus networking. Here are three unique and wacky examples that plug together to show what HyperNeWS was all about, and where we could go in the future!
https://donhopkins.medium.com/hyperlook-nee-hypernews-nee-go...
HyperLook SimCity Demo Transcript
https://donhopkins.medium.com/hyperlook-simcity-demo-transcr...
>This is a transcript of a video taped demonstration of SimCity on HyperLook in NeWS.
Discussion with Alan Kay about HyperLook and NeWS:
https://donhopkins.medium.com/alan-kay-on-should-web-browser...
>Alan Kay on “Should web browsers have stuck to being document viewers?” and a discussion of Smalltalk, HyperCard, NeWS, and HyperLook
https://news.ycombinator.com/item?id=22456831
>Thanks for asking! ;) I've put up some old demos on youtube, and made illustrated transcriptions of some, and written some papers and articles. Sorry the compression is so terrible on some of the videos. Here are some links: [...]
Lots of old NeWS code:
https://donhopkins.com/home/archive/NeWS/
Including the NeWS Tape, a big collection of free NeWS software that I curated and distributed via Sun Users Group:
https://donhopkins.com/home/archive/NeWS/news-tape/
And of course PizzaTool:
https://donhopkins.medium.com/the-story-of-sun-microsystems-...
NeWS PostScript PizzaTool Source Code:
https://www.donhopkins.com/home/archive/NeWS/pizzatool.txt
PizzaTool shipped with Solaris, and here's the manual entry I wrote:
https://www.donhopkins.com/home/archive/NeWS/pizzatool.6.txt
They are certainly not making any money with it right now. All patents should be expired by now. Have you ever sincerely asked if you are allowed to publish the code?
> Do you have any ... display servers that solve all the problems too?
X11 has extensions which correct for most of its original flaws. Most importantly XRandr (as mentioned in the article), DRI3 (fast hardware access) and XRender (accelerated drawing primitives that don't suck). With the exception of a decent toolkit and an HDR extension, X11 solves all the problems.
> Or comments on any of these?
Oh boy... well, you asked for it.
> Pie Menus
Great for demos and to collect grant money, I guess. But in principle a total anti-pattern. People read from left to right and from top to bottom. Traditional context menus are therefore far superior, especially for varying numbers of options.
> Weirdly dragable tabs
Not impressed at all, sorry. Creates much more visual confusion than generic title bars.
> HyperTIES
The most revolutionary component is links... which were not invented by HyperTIES.
> PSIBER
Genuinely very impressive. A proper visual PostScript debugging tool. But also necessary for a rather unintuitive stack-based language like PostScript, which is primarily designed to be machine readable.
> SimCity
Great Game. Thanks again for making that available to the FOSS community.
Traditional context menus suck on touchscreens. Pie menus support swiping naturally as an idiomatic interaction, which aligns with the most effective means of providing touchscreen input.
>They are certainly not making any money with it right now. All patents should be expired by now. Have you ever sincerely asked if you are allowed to publish the code?
Ha ha! Good luck, kiddo. Have you ever tried asking a lawnmower for favors? Do you really think "sincerity" would help?
https://news.ycombinator.com/item?id=15886728
https://youtu.be/-zRN7XLCRhc?t=33m1s
>X11 has extensions which correct for most of its original flaws.
Oh, then I guess there's no reason for Wayland, then. Have you broken the news to them? How did they react?
So is there an X-Windows extension yet that lets you download code into the window server where it can execute right next to the hardware and handle input events and draw interactive user interfaces locally without network traffic and context switching, and implement efficient application specific network protocols and rendering pipelines, just like NeWS?
Or, you know, like a web browser running an AJAX app like Google Maps? Certainly not Display PostScript, it can't do that, and nobody uses it any more for some reason or another.
>> Pie Menus
>Great for demos and to collect grant money, i guess. But in principle a total anti-pattern. People read from left to right and from top to bottom. Traditional context menus are therefore far superior, especially for varying numbers of options.
A lot more than grant money: The Sims has made EA $5 billion (as of 2019), putting pie menus into 70 million people's hands, and Blender and many other programs use them too. Have you ever played The Sims or used Blender?
https://fortune.com/2025/01/31/the-sims-25-anniversary/
What's your evidence for that claim that "Traditional context menus are therefore far superior"? Citations, or are you just bullshitting? Thanks to Fitts's Law, which every user interface designer should be familiar with, pie menus are much faster and have a significantly lower error rate than linear menus, so you're simply wrong about traditional menus being "far superior".
https://en.wikipedia.org/wiki/Fitts%27s_law
https://en.wikipedia.org/wiki/Pie_menu
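(For context: in the Shannon form of Fitts's law, movement time is roughly T = a + b*log2(D/W + 1), where D is the distance to the target and W is its width along the direction of motion. In a pie menu every item sits at the same short distance from the cursor and each wedge is a wide angular target that gets even wider as you move outward, so D is smaller and W is larger than in a linear menu, where D grows with the item's position in the list.)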
We empirically proved that and published our findings at ACM SIGCHI in 1988, and since then many other people have performed controlled experiments replicating and elaborating our frequently cited results.
https://donhopkins.medium.com/an-empirical-comparison-of-pie...
>> Weirdly dragable tabs
>Not impressed at all, sorry. Creates much more visual confusion than generic title bars.
Have you ever used a web browser? I'm guessing you are using one right now! You must be pretty easily confused, so speak for yourself, please don't project your confusion onto others, we're doing just fine being not confused. Is the confusion in the room with you right now? ;)
Maybe you can reduce your confusion by reading the Wikipedia article about tabbed windows. That screen dump in the article is an illustration of UniPress Emacs with tabbed windows and the HyperTIES hypermedia browser with pie menus and interactive PostScript "applets" (long before that term was coined for Java applets, or the term "AJAX" was coined for JavaScript web apps), which I developed for NeWS in 1988 or so. Not coincidentally, James Gosling developed UniPress Emacs, NeWS, and Java.
https://en.wikipedia.org/wiki/Tab_(interface)
https://news.ycombinator.com/item?id=11483721
DonHopkins on April 12, 2016, on: NeWS – Network Extensible Window System:
NeWS was not actually Adobe's Display PostScript, but it was Sun's independent implementation and specialized dialect of PostScript, supporting lightweight processes, overlapping arbitrarily shaped canvases, window management, event distribution, garbage collection, networking, object oriented programming, etc.
The most important ability that NeWS had, but was missing from Display PostScript and its successors (OS/X Core Graphics, PDF, SVG, canvas API, etc), is the ability to download code to create an efficient custom high level application specific protocol between the client and server.
That essential ability is what people call "AJAX" these days, now that PostScript has been supplanted by JavaScript and a whole bunch of different APIs, and now we're even downloading shaders to the GPU! Truly exciting!
James Gosling chose PostScript from the start, for how its network programming ability dovetails with its graphics and data representation, instead of nailing it onto the side of a bunch of different technologies as an afterthought.
To quote the comparison from the wikipedia article:
NeWS was architecturally similar to what is now called AJAX, except that NeWS coherently:
1) used PostScript code instead of JavaScript for programming.
2) used PostScript graphics instead of DHTML and CSS for rendering.
3) used PostScript data instead of XML and JSON for data representation.
I really don't understand the X11 hate that keeps showing up; it's old, but it works. It shows my applications perfectly, and I can do my video editing and play games with Wine and Steam without issues.