Ports to workstations with inbuilt graphics hardware came later.
At the beginning of the '90s, I was on the hunt for an alternative to the MS-DOS part of my setup when, eventually, I tried Minix instead .. and that led to replacing it with Linux as soon as it was available on funet. Multiple runs to Fry's for more RAM and some CPU upgrades later, I was soon compiling an X Window System setup on my brand new 486 with 16 megabytes of RAM .. and about a week after that, I replaced my Quarterdeck setup with a functioning Linux workstation, thorns and warts and all. That was a nice kick in the pants for the operators who were threatening to take away my pizzabox, but the joy was short-lived, as not long thereafter I was able to afford an Indy, which served the purpose nicely all through the '90s - and my Linux systems were relegated off the desktop to function as 'servers' once more.
But I always wondered about Quarterdeck's DESQview/X variant, and whether that would have been an alternative solution to the multi-term problem. It seems to me that this was available in 1987/88, which is odd given the article's claims that X workstations weren't really widespread around that period.
Dv/X was remarkable tech, and if it had shipped earlier it could have changed the course of the industry. Sadly, it came too late.
> It seems to me that this was available in 1987/88,
No. That is roughly when I entered the computer industry. Dv/X was rumoured then, but the state of the art was OS/2 1.1, released in late 1988 and the first version of OS/2 with a GUI.
Dv/X was not released until about five years later:
https://winworldpc.com/product/desqview/desqview-x-1x
1992. That's the same year as Windows 3.1, but critically, Windows 3.0 came out in 1990, two years earlier.
Windows 3.0 was a result of the flop of OS/2 1.x.
OS/2 1.x was a new 16-bit multitasking networking kernel -- but that meant new drivers.
MS discarded the radical new OS, dropped networking completely (until later), and moved the multitasking into the GUI layer, allowing Win3 to run on top of the single-tasking MS-DOS kernel. That meant excellent compatibility: it ran on almost anything, it could run almost all DOS apps, and it could multitask them. And thanks to a brilliant skunkworks project, mostly by one man, David Weise, assisted by Murray Sargent, it combined 3 separate products (Windows 2, Windows/286 and Windows/386) into a single product that ran on all 3 types of PC and took good advantage of all of them. I wrote about its development here: https://www.theregister.com/2025/01/18/how_windows_got_to_v3...
It also brought in some of the GUI design from OS/2 1.1, and mainly from 1.2 and 1.3 -- the Program Manager and File Manager UI, the proportional fonts, the fake-3D controls, some of the Control Panel, and so on. It kept the best user-facing parts and threw away the fancy invisible stuff underneath, which was the problematic part.
Result: smash hit, redefined the PC market, and when Dv/X arrived it was doomed: too late, same as OS/2 2.0, which came out the same year as Dv/X.
If Dv/X had come out in the late 1980s, before Windows 3, it could have changed the way the PC industry went.
Dv/X combined the good bits of DOS, 386 memory management and multitasking, Unix networking and Unix GUIs into an interesting value proposition: network your DOS PCs with Unix boxes over Unix standards, get remote access to powerful Unix apps, and if vendors wanted, it enabled ports of Unix apps to this new multitasking networked DOS.
In the '80s that could have been a contender. Soon afterwards it was followed by Linux and the BSDs, which made that Unix stuff free and ran on the same kit. That would have been a great combination -- Dv/X PCs talking to BSD or Linux servers, when those Unix boxes didn't really have useful GUIs yet.
Windows 3 offered a different deal: it combined the good bits of DOS, OS/2 1.x's GUI, and Windows 2.x into a whole that ran on anything and could run old DOS apps and new GUI apps, side by side.
Networking didn't arrive until Windows for Workgroups, which followed Windows 3.1. Only businesses wanted it, so MS postponed it. Good move.
But sure, the definition of "X terminal" here means dedicated hardware that runs an X server connecting to a remote X11 display manager, and nothing else. Those were always somewhat niche, in the same way that once terminal emulators existed, general-purpose PCs displaced hardware terminals.
In the 1990s, my university used inexpensive diskless x86 PCs running X386 (the predecessor of XFree86) with just a ramdisk, booted via DHCP/BOOTP/TFTP.
OTOH ... we had already started using the first Linux system at Amazon by that time, and a few years later, when a 25 MHz 486 running Red Hat became the first computer I actually owned (I resisted for that long!), the idea of an X terminal seemed a bit quaint and limited.
At the time, it was typical to assume that each sysadmin could look after a dozen machines on average, maybe twenty at best. So if each of those dozen machines could support 10-20 users on X terminals, then you'd only need a single sysadmin for every 250 users. That was a big cost saving vs having a dedicated workstation on every desk.
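Back-of-envelope, using just the rough figures above (these are the comment's assumptions, not measured numbers):

    machines_per_admin = 12        # "a dozen machines" per sysadmin
    users_per_machine = (10, 20)   # X terminal users each machine could host

    low = machines_per_admin * users_per_machine[0]
    high = machines_per_admin * users_per_machine[1]
    print(f"one sysadmin covers roughly {low}-{high} users")   # ~120-240, i.e. ~250 at best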
But in the end, DOS/Windows PCs had even bigger cost savings because most users could be expected to do minor admin tasks themselves supported by cheap IT helpdesk staff rather than expensive Unix greybeards.
Two of the universities in town had labs of them for students, all booted remotely, all the storage on a bigger Sun down in the server room, ugly coaxial ethernet everywhere and those funky blue/silver mouse pads and optical mice.
My boss at the time was pretty dark on Sun, because they sold her a lab full of Sun 3 workstations without telling her the SPARCstations would be released shortly afterwards.
If you think about a lab full of computers doing relatively simple Unix work, the money saved by having just a single drive (all other things being equal, which of course they aren't) is not trivial.
To be clear: Plan 9 is not limited to terminal-server setups. It can function just fine as a standalone OS.
> As long as you have a CPU with an MMU and some RAM
Those weren't cheap at the time. The Gnot terminal presentation (the early Plan 9 terminal) states that they were cheap enough that a user could have one at home and one at work. It also states that some things, like the text editor, could run locally, while compute-intensive tasks like compiling could be exported to big, expensive CPU servers. These machines had a few megs of RAM, a 68000 CPU and monochrome graphics. The CPU servers were Sun, DEC, SGI, etc. -- machines that users could certainly not afford one of, let alone two.
Proving this point, there are VNC client implementations that can run on MS-DOS machines.
It is sort of like the anecdote about an early sysadmin who traced a problem with the new department laser printer locking up for hours to one engineer, who had to be told to knock it off. He explained that he wasn't printing anything, but the printer had, by far, the most powerful CPU in the building, so he had ported all his simulation programs to PostScript and was running them on the printer.
They really didn't have that. Unix workstations running X largely had a graphics stack that was almost entirely software, with little or no hardware acceleration. What made them workstations compared to PCs was the large "high" resolution monitors. The DEC lab at my university consisted of DECstation 3100s (16 MHz MIPS R2000 with 16 MB RAM and an 8-bit bitmapped display with no hardware acceleration). The engineering department had labs with Sun and RS/6000 machines.
Commodity PCs were 386s with 4-8 MB RAM, monitors that would do 640x480 or 800x600, and video cards that would do 8 or 15/16 bpp. A great day was when someone put a Linux kernel and XFree86 on a 1.2 MB floppy that could use XDMCP to connect to the DECs or Suns, turning any PC in the PC labs into an X terminal.
As a one-time uni sysadmin back in the day: in our EE labs we had students running Matlab on a Sun E3500, with the display going up on diskless ~10-year-old Sun SPARCstation 5s that we had lying around (originally from the early 1990s).
That’s enough punishment in itself.
I enjoyed using Blit terminals and the early DEC Ultrix workstations.
Thin X terminals were super cool. But they also really stressed your Ethernet, and because we didn't have good audio models in X at that time, they stopped being as useful once multimedia became viable. But for a distraction-free, multiple-terminal, low-overhead-WM world... super good price/performance.
And then... came the internet. People suddenly started running NCSA Mosaic in droves, which bogged down the single-core server. And those browsers started to push lots of bitmap stuff through the pipe to the terminals. Now that was bad, yes. When Netscape arrived with its image backgrounds and an even heavier process, people started moving away to the PC rooms :( because all scrolled content needed to be bitstreamed then.
PS: video content wasn't even a thing yet at that time. That came a bit later, with RealVideo first.
But there was a time when X terminals were more than sufficient, probably for a decade or so.
Is it better now? Can a browser locally scroll an image, without restreaming it?
Here is an awesome (slightly outdated) talk about the architecture: https://groups.google.com/a/chromium.org/g/blink-dev/c/AK_rw...
The basic idea is that HTML content is drawn in transparent 'tiles' which are layered on top of one another. When the user scrolls, the tiles don't need to be redrawn, but instead are just re-composited at their new positions. GPUs are super fast at that, and even a 15-year-old GPU can easily do this for tens of layers at 60 FPS.
On Linux with a remote X server, I think the tiles would all end up on the X server, with only a pretty small 'draw tile number 22 at this location' command going across the network. So the answer to your question is 'yes'.
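To make the tiling idea concrete, here's a minimal Python sketch (purely illustrative; the tile size, viewport height, and function names are my own assumptions, not Chromium's actual code):

    # Tiles are rasterized once; scrolling only changes where visible tiles
    # are composited, and only newly visible tiles need fresh pixels.
    TILE = 256            # assumed tile height in pixels
    VIEWPORT_H = 800      # assumed viewport height in pixels
    rasterized = set()    # tile rows we already have pixels for

    def visible_tile_rows(scroll_y):
        first = scroll_y // TILE
        last = (scroll_y + VIEWPORT_H - 1) // TILE
        return list(range(first, last + 1))

    def scroll_to(scroll_y):
        rows = visible_tile_rows(scroll_y)
        new_rows = [r for r in rows if r not in rasterized]   # only these need pixels
        rasterized.update(new_rows)
        offsets = {r: r * TILE - scroll_y for r in rows}      # cheap re-compositing step
        return new_rows, offsets

    print(scroll_to(0))    # first paint: every visible tile row is new
    print(scroll_to(60))   # small scroll: no new tiles, just new offsets

Over remote X, the 'offsets' part is the only thing that would need to cross the network on a plain scroll, which is why it can stay cheap.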
Based on my limited experience, the performance of running Firefox remotely, displaying on a local X11 server, was very poor, and I assumed that the absence of these kinds of acceleration was to blame.
I could imagine XRender working, though: it would at least support blitting most of the pixels up/down when scrolling, and would only require pushing new pixels over the network for the newly exposed areas.
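As a rough sense of scale (the window size, colour depth and scroll distance below are assumed numbers, just to show the orders of magnitude involved):

    WIDTH, HEIGHT, BPP = 1280, 800, 4    # assumed window size and 32-bit pixels
    dy = 60                              # assumed scroll distance in pixels

    full_frame = WIDTH * HEIGHT * BPP            # bytes to restream the whole window
    exposed    = WIDTH * dy * BPP                # bytes for just the newly exposed strip
    copy_rect  = (0, dy, WIDTH, HEIGHT - dy)     # region the server could blit in place

    print(f"server-side copy rect (x, y, w, h): {copy_rect}")
    print(f"restream whole window: {full_frame / 1e6:.1f} MB")   # 4.1 MB
    print(f"blit + exposed strip:  {exposed / 1e3:.0f} KB")      # 307 KB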
My guess is that the lack of shared memory buffers stops the use of OpenGL, and whilst it's theoretically possible, it is probably unimplemented because nobody does that in 2025.
IMHO, widespread use of Xlib led people to believe that X was much more synchronous than it is, and discouraged people from using networking, and then a lot of stuff that could have been possible didn't get made. xcb is a much better way to interact with X, but it may have arrived too late.
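Not real Xlib or xcb calls, but a back-of-envelope model of why that perceived synchrony matters over a network (the latency and request count are assumptions):

    RTT_MS = 20.0      # assumed network round-trip time
    N_REQUESTS = 500   # assumed number of requests for one screen update

    # Treating each request as a blocking round trip:
    sync_ms = N_REQUESTS * RTT_MS
    # Pipelining requests and waiting once at the end:
    batched_ms = RTT_MS

    print(f"one round trip per request: {sync_ms:.0f} ms")    # 10000 ms
    print(f"pipelined/batched:          {batched_ms:.0f} ms") # 20 ms

The protocol allows the second pattern; xcb's cookie/reply style just makes it the natural way to write a client, whereas Xlib's blocking reply calls arguably nudge you towards the first.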
But this requires the browser to have a special path for remote X rather than just using the GPU. Or even just a path for X that lets the X server do the rendering.
I remember GTK 1 was well optimized for X and you could run GTK applications over slow modem lines quite comfortably. GTK 2 went a different direction entirely and became almost unusable over the Internet. I doubt GTK 3 or 4 are any better now that they're designed for compositors.
Your X server (e.g. an X terminal) could display applications running on a variety of other vendors' hardware and operating systems. The specification enabled this interoperability. Apps running on SunOS could display on VAX workstations, and vice versa (as long as you had TCP/IP installed!)
The advantage X terminals had was that they were relatively inexpensive to buy and operate. Most did not require management; however, the CPU cost moved into a computer room, and you needed a pretty fast network to get reasonable performance.
This was not really true. Those terminals were often extremely expensive compared to "off the shelf" PCs of the time. They required decent CPUs and memory (this was before hardware acceleration), as well as (for the time) decent networking hardware to drive them at scale for larger institutions. On top of that, they were usually connected to and had to drive very high-res monitors, which weren't cheap either (anecdotally, the first time I "surfed" the web was at my mom's work at a telco lab on HP-UX machines in ~1993-94; when we later got the internet at home, I hated doing it on the 800x600 monitor we had).
As you alluded to, what it did provide was a central way to administer software and reduce licensing costs (pre-2000, software was almost all commercial): companies were loath to buy multiple copies of commercial compilers/Matlab/etc. (and the software vendors took a while to update their models, too). In those days sysadmins often did things by hand, and user management alone was far easier on a central mainframe/server. It also allowed some vendor interoperability, as you mentioned.
"Dumb" text terminals were also the way that things were already done, so they just continued on that model with a GUI until the benefits of local computing became so much more economical. In large orgs, this usually was around when windows started to become prevalent over the course of the 1990s (sometimes with X-servers or terminal clients on windows to replace the physical terminals).
If memory serves me right, we had four of them in the student lab.
Everyone else could enjoy connecting to DG/UX via a terminal app on Windows for Workgroups, or the older green and amber text-based phosphor terminals.
As an anecdote, those big-screen X terminals were quite often used to run four parallel sessions mixing talk and some MUD game.
For all its "bloat", X could support a very sophisticated GUI -- over the network -- on very limited hardware by the standards of 30 years ago, let alone today.
Great times.
We do it as an extreme form of access control. Our workstations cannot reach any of our systems. Thus if a laptop is stolen, nothing of real value is lost.