VMS (and the hardware it runs on) takes the opposite approach: keep everything alive forever, even through hardware failures.
So the VMS machines of the day had dual-redundant everything: interconnected memory across machines, redundant SCSI interconnects, and every other component you could think of.
VMS clusters could be configured hot/hot, where two identical cabinets full of redundant hardware could fail over mid-instruction and keep going. You can't do that with the modern approach. The documentation filled almost an entire wall of office bookcases. There was a lot of documentation.
These days, usually nothing inside the box is redundant; instead we duplicate the boxes themselves and make them cheap and interchangeable, a dime a dozen.
Which approach is better? That's a great question. I'm not aware of any academic exercises on the topic.
All that said, most people don't need decade-long uptimes. Even the big clouds don't bother trying for decade-long uptimes, as they regularly have outages.
The daughterboards in that machine could take RAM or CPUs in the same slot, and they were swappable without reboots!
I used it extensively in the late '90s and early '00s and really liked it. As a newb sysadmin at the time, the built-in versioning on the fs saved me from more than one self-inflicted fsck-up.
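For anyone who hasn't seen it: the Files-11 filesystem on VMS keeps every saved generation of a file as FILE.TXT;1, FILE.TXT;2, and so on, and PURGE cleans up the old ones. A toy Python sketch of the idea (the `versioned_save` helper and its filename scheme are my own illustration, not a real VMS API):

```python
import os
import re

def versioned_save(path, data):
    """Save `data` as a new version of `path`, Files-11 style.

    On VMS, every write of FILE.TXT creates FILE.TXT;1, FILE.TXT;2, ...
    This toy helper mimics that by appending ;N to the filename, so an
    accidental overwrite never destroys the previous contents.
    """
    directory = os.path.dirname(path) or "."
    pattern = re.compile(re.escape(os.path.basename(path)) + r";(\d+)")
    # Find the highest existing version number for this file.
    versions = [int(m.group(1))
                for name in os.listdir(directory)
                if (m := pattern.fullmatch(name))]
    next_version = max(versions, default=0) + 1
    new_name = f"{path};{next_version}"
    with open(new_name, "w") as f:
        f.write(data)
    return new_name
```

The payoff is exactly the story above: a botched edit or overwrite just creates `;N+1`, and yesterday's `;N` is still sitting there.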
I can't imagine there would be any green-field deployments in the last 10 years or so - I'm guessing it's just supporting legacy environments.
This is not entirely the case.
I have been writing about VMS for years. The first x86-64 edition, version 9, was released in 2020:
https://www.theregister.com/2022/05/10/openvms_92/
Version 9.0 was essentially a test release. 9.1 in 2021 was another test, and v9.2 in 2022 was production-ready.
There's no new Itanium or Alpha hardware, and version 8.x runs on nothing else. Presumably v9.x is selling well enough to keep the company alive because it's been shipping new versions for a while now.
Totally new greenfield deployments? Probably few. But new installs of the new version, surely, yes, because VMS 9 doesn't run on any legacy kit, so these must be new deployments.
It's been growing for a few years. Maybe not growing much, but a major new version and multiple point releases mean somebody is buying it and deploying it. Never mind "no new deployments in a decade"; there have been more new deployments in the last few years than in the previous decade.
HP tried to kill it: somewhere in the neighborhood of 10 years ago they announced the EOL. This company, VMS Software Inc (VSI), was formed specifically to buy the rights and maintain/port it. So you have an interesting situation.
Old VAX and Alpha systems are supported, supposedly indefinitely, but an Itanium system has to be newer than a certain age: HP didn't sell the rights to support the older Itaniums and no longer issues licenses for them. So there is a VMS hardware age gap. Really old is OK. Really new is OK.
Version 9.x has been out for 5 years, stable for 3, and primarily targets and supports hypervisors. It knows about and directly supports VMware, Hyper-V and KVM.
So, yes, get a generic x86-64 box, bung one of the big 3 hypervisors on it, and bang, you are ready to run VMS 9.
MCP Release 21 came out in mid-2023, and release 22 is supposed to be out middle of this year, with further releases planned: https://www.unisys.com/siteassets/microsites/clearpath-futur...
Looking at the new features, they seem to be mainly around security (code signing, post-quantum crypto) and improved support for running in cloud environments (with the physical mainframe CPU replaced by a software emulator).
Unisys’ other mainframe platform, OS 2200, is still around too, and seems to follow a similar release schedule - https://www.unisys.com/siteassets/microsites/clearpath-futur... - although I get the impression there are more MCP sites remaining than OS 2200 sites?
Also, I noted that those two roadmaps offer continuity - ClearPath Forward, "Don't worry about migrating or refactoring your apps" - but also state that "none of these new features are guaranteed to show up, and if that damages your company financially, it's not our fault".
I don't know if this is just a standard legal cop-out.
I know the Michigan state government uses Unisys MCP (I don’t know for what): https://www.michigan.gov/-/media/Project/Websites/dtmb/Procu...
In 2023, NY State Education Department had an RFP to build a replacement for their Unisys MCP-based grants admin system with a modern non-mainframe solution (don’t know current status of that project): https://www.nysed.gov/sites/default/files/programs/funding-o...
It is generally easier to find out who the government users are, because they are often required to publish contracts with the mainframe vendor, RFPs for replacement systems or services, etc. (The exception is some national security users, where the existence of the system and/or the tech stack it runs on may be classified.) By contrast, for private companies that kind of info is usually only available under NDA; obscure legacy systems are the kind of “dirty laundry” a lot of businesses don't want publicly aired.
In 2013, it was reported in the media that the Australian retailer Coogans was one of the last (maybe the last?) Unisys mainframe sites in Australia - https://www.smh.com.au/technology/tassie-retailer-rejects-cl... - I don’t know if they kept their mainframe after that or got rid of it, but in 2019 they went out of business - https://www.abc.net.au/news/2019-03-12/hobart-retailer-cooga...
> but also stated that "none of these new features are guaranteed to show up, and if that damages your company financially, it's not our fault".
> I don't know if this is just a standard legal cop-out
I’m pretty sure that’s just the “standard legal cop-out” - lots of vendors put language like that in their roadmaps, to make it harder for customers to sue them if delivery is delayed or if the planned next version ends up being cancelled
The corpse of OpenVMS on the other hand is being reanimated and tinkered with, presumably paid for by whatever remaining support contracts exist, and also presumably to keep the core engineers occupied with inevitably fruitless busywork while occasionally performing the contractually required on-call technomancy on the few remaining Alpha systems.
VMS is dead... and buried, deep.
It's a shame it can't be open-sourced, just like NetWare won't be open-sourced, and it probably has less chance of being used for new projects than RISC OS or AmigaOS.
It's in active development. They're putting out new versions and selling licenses.
There are much deader OSes out there than VMS, such as NetWare.
I suspect that there are more fresh deployments than there are of Xinuos's catalogue: OpenServer 5, 6, and UnixWare 7.
https://www.xinuos.com/products/
Last updated 2018...
No, there is no reason to do a greenfield VMS deployment and hasn't been for a long time.
> I've heard its reliability is legendary, but I've never tried it myself.
I've heard the same things, but I am doubtful as to their veracity in a modern context. Those claims sound like they come from an era when VMS was still a cutting-edge and competitive product. I'm sure VMS on VAXclusters had impressive reliability in the 1980s, but I doubt it's anything special today. If you look at the companies and institutions that need performance and high reliability today (e.g. the hyperscalers or the TOP500), they are all using the same thing: Linux on clusters of x86-64 machines.
With cloud computing, reliability is achieved through software: distributed software, which needs to scale horizontally.
On a mainframe, reliability is achieved through hardware (at least as far as user software is concerned), and the software is vertical.
If you need to run vertical, single-system image software, the cloud is useless for making it reliable.
Systems built on the cloud are reliable only insofar as people can write reliable distributed systems which assume components will fail. It is not reliable if you can't, or don't want to write software like that (which carries a significant engineering cost).
The real reason to avoid mainframes (and VMS) is vendor lock-in, not technological.
On one hand, I don't see many modern services achieving years to decades of uptime. Clustering is also bolted on for many products, and not available at all for most. These were normal for OpenVMS deployments. Seems like a safer bet in that regard.
If people have the $$$ that VMS requires for such goals, they can hire the type of sysadmins and programmers who can do the same on *nix systems. The number of components matching VMS's prior advantages increases annually. Also, these are often open source, with corresponding advantages for maintenance and extensions.
The other thing I notice is VMS systems appear to be used in constrained ways compared to how cloud companies use Linux. It might be more reliable because users stay on the happy path. Linux apps keep taking risks to innovate. FreeBSD is a nice compromise for people wanting more stability or reliability with commodity hardware.
Then, you have operating systems whose designs far exceed VMS in architectural reliability. INTEGRITY RTOS, QNX, and LynxOS-178B come to mind. People willing to do custom, proprietary systems are safer building on those.
I'm curious about running a VMS system, although the admin side looks a bit daunting. The thing I'd really like to do is run the X Window System on an emulator in my home lab, just to see it run.
VMS's key feature over Unix is consistency; beyond that, it is interrupt-driven at all levels (no cycles wasted polling, except in code ported over using POSIX interfaces). VMS was killed by a confluence of business issues, not because it was obsolete or inefficient.
It's interesting in a "what if/parallel universe" kind of way, but I certainly wouldn't touch it for anything new with that licensing.
I was just a lowly kid programmer working on a side project, so I can't tell you whether it's still uniquely good at something to justify its usage today. It worked. But it was weird and arcane (not that Unix isn't, but Unix won) and using it today for a new project would come with a lot of friction.
Does anyone know whether he is still working at Microsoft? What does it feel like to work with him?
Reminds me of Coders At Work, by Peter Seibel, which I read right around the time that I decided to get deeper into software. Being able to read or hear about the process that went on in someone's head while developing something so major was and is still impressive, and motivating.
I like to imagine there’s an inner sanctum in a secure sub-basement of Microsoft where a couple dozen crack kernel developers work quietly… except when Dave Cutler asks them to come into his personal lab, through the three-foot-thick blast doors and man-trap, so he can yell at them about a bug he found.
I also know that some PRISM code was used in NT, but again I hardly see why that brought down DEC.
That could be the cause, if DEC had competitive hardware or software projects, but I know of none so far. Please share your knowledge with us.
The story, as far as I know, goes like this:
Back in the late 1970s Dave Cutler and his team create VMS at DEC as the next generation operating system for DEC's new flagship product, the VAX minicomputer.
VMS is good by all accounts and was a successful product, but Unix goes on to dominate the minicomputer and emerging server market for the next decade.
Then in the 1990s DEC goes out of business and sells VMS to Compaq, but not before porting it to their doomed Alpha CPU architecture.
Then in 2000s Compaq goes out of business and gets acquired by HP, and together they port VMS to the doomed Itanium CPU architecture.
In 2014, a shop called VMS Software Inc (VSI) strikes some kind of deal with HP where they get to develop and support new versions of VMS while older versions continue to belong to HP. As part of this, they finally announce an x86-64 port. This port first sees the light of day in 2020.
----
The story is essentially bad bet after bad bet, missing the boat and fighting the last war over and over again. And today, it's just a piece of legacy software being used to extract the last bits of value from the organizations that are stuck with it.
Even so, I hope for a true open source release of it one day.
Not technically (Alpha ISA had its good and bad sides, but was decent enough), but economically. DEC just didn't have the marketshare and thus economic muscle to survive in a game of ever increasing R&D costs for each successive generation. Hence DEC ending up acquired by Compaq, which then was acquired by HP.
HP also saw the writing on the wall, and developed Itanium with Intel as a replacement for their PA-RISC, thinking that Itanium could benefit from Intel's huge economy of scale in chip manufacturing. And after acquiring Compaq (with DEC Alpha) it sunset the Alpha as well in favor of Itanium, for the same reasons. Well, we all know how the Itanium story turned out.
Alpha boxes were cool. High clock speeds, massive amounts of RAM for the time, and huge storage. When they were the only 64-bit systems, they were the only game in town for some workloads.
They were never the only 64-bit systems. MIPS introduced their 64-bit R4000 in 1991, a year before the Alpha came out. Sun released the 64-bit UltraSPARC in '95, along with IBM's 64-bit PowerPC AS for their AS/400 systems. HP released the 64-bit version of PA-RISC in 1996.
Wasn't Alpha also a fairly pure RISC architecture, with larger machine code being an inherent property of that?
https://www.informationweek.com/it-leadership/compaq-to-aban...
ARM's and POWER's success suggests Alpha might have made it. Compaq wanted to partner on Itanium, though. Eventually, Intel got the Alpha IP rights, which might as well have been a death sentence.
Last Alpha I saw was the SAFE architecture that added security features to a homebrew CPU that was derived from Alpha ISA. What I liked most on Alpha, though, was PALcode with its atomic execution.
Only IBM survived, and that’s because it won key contracts in the 60s and 70s to run verticals and business systems, and essentially leveraged mainframe financing and legacy contracts to cross-sell everything. On the tech side, they parlayed that into a sustainable business by virtualizing everything and sharing the Power platform. They get some new business for AIX, but it’s mostly that legacy business.
A good chunk of DEC’s and Compaq’s business was running terminal (as in tty) operations for mainframes. That became endangered with NT 3.5 and went extinct with NT 4. As Linux improved, Intel was good enough. ARM is doing to Intel what Intel did to everyone.
That is not really accurate or representative at all, no.
An interesting factoid about the x86-64 port is that they switched to LLVM-based compilers rather than writing x86-64 backends for their legacy compilers.
> Back in the late 1970s Dave Cutler and his team create VMS at DEC as the next generation operating system for DEC's new flagship product, the VAX minicomputer.
Not exactly.
In the '60s and early '70s DEC made several of the leading minicomputers. One was a 16-bit box, the PDP-11, a critical machine in the history of Unix as the first new platform it was ported to.
(Unix was written on an 18-bit DEC mini, the PDP-7. Part of the reason the PDP-11 got big was that the industry was moving to 8-bit bytes and 16-bit words.)
The VAX was the 32-bit extended version of the PDP-11. It added virtual memory: VAX stands for Virtual Address Extension.
Cutler wrote one of the most successful PDP-11 OSes, RSX-11. He was famously much faster than rival teams, so got the job of writing a 32-bit OS for the new 32-bit machine.
> VMS is good by all accounts and was a successful product, but Unix goes on to dominate the minicomputer and emerging server market for the next decade.
Not really. VMS 1.0 was 1977. Its clustering is still best of breed today, able to present multiple heterogeneous machines (VAX, Alpha, Itanium, and x86-64) as a single virtual host on the network, with multiple nodes sharing drives and with nodes able to join and leave while the cluster stays up. Uptimes in decades are normal, with OS upgrades happening in that time.
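The "nodes join and leave while the cluster stays up" part hinges on quorum voting: per the OpenVMS cluster documentation, quorum is computed as (EXPECTED_VOTES + 2) / 2 with integer division, and the cluster keeps running as long as the present votes meet that threshold. A rough sketch (simplified: real clusters also let a quorum disk contribute a vote, and nodes can carry more than one vote each):

```python
def has_quorum(expected_votes, present_votes):
    """OpenVMS-style cluster quorum check.

    QUORUM = (EXPECTED_VOTES + 2) // 2.  If the nodes currently in
    the cluster hold at least QUORUM votes, the cluster keeps running;
    otherwise activity pauses, preventing a split-brain partition
    where two halves both think they own the shared drives.
    """
    quorum = (expected_votes + 2) // 2
    return present_votes >= quorum
```

So in a three-vote cluster, one node can drop out (2 >= 2) but two cannot (1 < 2), which is how rolling OS upgrades fit inside decade-long uptimes.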
DEC enjoyed 10-15 years of dominance in its sector before Unix started to become much of a threat. The first Sun workstation wouldn't appear for another 5 years; the first SPARC, not for another decade.
> Then in the 1990s DEC goes out of business
Nope nope nope.
Cutler proposes a plan: a successor to the VAX, a 32-bit RISC chip (PRISM), and a successor to VMS, a multi-personality OS (MICA) to run on it.
DEC says no. It does not believe that microcomputers and Unix are threats, and it spends _billions_ on the VAX 9000, a series of mainframe-class VAX machines. By the time they eventually ship, the performance is uncompetitive.
Mind you while it's doing this it's selling tons of VAXes including high-end workstations; I bought several and ran clusters of them.
Microsoft headhunts Cutler, who brings his core team with him. It gives him OS/2 3.0 to finish: the portable (CPU-independent) version. They build it on Intel's next-gen RISC chip, the i860, "the x86 times ten", codename N-10. The OS is renamed OS/2 NT, for N-Ten.
Note: officially denied now, yes, I am fully aware. Don't believe everything you hear.
Cutler implements his planned MICA multi-personality OS, able to emulate other OSes at kernel level, as NT. Most OS/2 stuff is junked but at launch it can format and use OS/2 HPFS disks and run OS/2 text-mode binaries, and an optional add-on to run Presentation Manager GUI apps is available.
DEC rescues PRISM, upgrades it to 64-bit, and calls it Alpha. Fastest CPU in the industry and the first 64-bit single-chip processor. Runs Unix, VMS, and Windows NT. First non-x86 chip to get Linux ported to it.
DEC also uses this experience to design the first superscalar ARM, called StrongARM.
DEC also gets a very sweet deal on NT for Alpha; the rumour is that DEC has proof that Cutler used MICA code in NT and has MS over a barrel.
DEC remains a major industry force. It also makes networking kit, printers, HDDs and tape drives, Ethernet chipsets, PCs -- it's almost a one-stop shop. You can build an entire enterprise network entirely from DEC kit, from the screens to the keyboards to the Ethernet switches. I did.
> sells VMS to Compaq, but not before porting it to their doomed Alpha CPU architecture.
Way off. Not even close.
DEC's lost MICA project, now called Windows NT, eats into its revenues. It loses market share to cheap x86 PCs and an OS based on a DEC design.
Compaq buys DEC. It's too big to digest and Compaq gets in trouble.
> Then in 2000s Compaq goes out of business and gets acquired by HP, and together they port VMS to the doomed Itanium CPU architecture.
Not really, no.
Cash-rich HP, which has lots of experience with managing non-x86 lines, acquires one of its biggest competitors in the x86 space, which has zero such experience.
HP buys Compaq. HP has its own RISC chip, its own Unix, its own fault-tolerant systems, all kinds of legacy stuff. It is quite used to killing old product lines. It also has a high-end enterprise email server that is compatible with MS Exchange.
HP makes good money from its partnership with MS, though.
So, HP kills HP OpenMail and sells the IP to Samsung, trades Alpha to Intel in return for killing its RISC chip... it goes all-in on MS and being the premium enterprise MS partner. Anything that rivals anything from Intel or Microsoft, HP kills.
HP works with Intel to make an EPIC chip that it tells customers will replace its PA-RISC.
HP merges the surviving DEC enterprise (non-x86) kit into its enterprise lines.
It announces it's killing VMS.
I wrote this: https://www.theregister.com/2013/06/10/openvms_death_notice/
There is a big customer outcry.
> In 2014, a shop called VMS Software Inc (VSI) strikes some kind of deal with HP where they get to develop and support new versions of VMS while older versions continue to belong to HP. As part of this, they finally announce an x86-64 port. This port first sees the light of day in 2020.
HP spins off VMS to a new company.
https://www.theregister.com/2014/07/31/openvms_spared/
As there is no new Alpha or Itanium kit, the new company's proposition is to help customers nurse VMS on Alpha or Itanium until it has an x86-64 VMS.
It delivers this by 2020.
Which were these? I didn't know HP had a fault-tolerant line.
I know Compaq purchased Tandem Computers with their fault-tolerant NonStop systems, and they intended to port it from MIPS to Alpha.
Mind, I honestly don't know anything about the details of ACPI.
But, seems like a lot to me.
Also, we don't know how much of that is test code or sample code.
There’s a ton of different tables you have to parse, and as I recall there’s a whole bytecode (AML) you need to be able to execute.
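For a flavor of the table-parsing part: every ACPI table (DSDT, FADT, MADT, ...) starts with the same 36-byte header defined in the ACPI spec, and the checksum field is chosen so that all bytes of the table sum to zero mod 256. A minimal sketch of decoding that header in Python (field layout per the spec; the function name is my own):

```python
import struct

# Common ACPI table header: signature, length, revision, checksum,
# OEM ID, OEM table ID, OEM revision, creator ID, creator revision.
ACPI_HEADER = struct.Struct("<4sIBB6s8sI4sI")   # 36 bytes total

def parse_acpi_header(table: bytes) -> dict:
    """Parse the common header shared by every ACPI table.

    For the DSDT/SSDT, the AML bytecode follows this header; the
    checksum is valid when all bytes of the table sum to 0 mod 256.
    """
    (sig, length, rev, checksum,
     oem_id, oem_table_id, oem_rev,
     creator_id, creator_rev) = ACPI_HEADER.unpack_from(table)
    return {
        "signature": sig.decode("ascii"),
        "length": length,
        "revision": rev,
        "oem_id": oem_id.decode("ascii").strip(),
        "checksum_ok": sum(table[:length]) % 256 == 0,
    }
```

And that is the easy part -- after the headers come dozens of table-specific layouts, plus the AML interpreter, which is where most of an ACPI implementation's bulk lives.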
really "user friendly". and then they're wining that nobody contributes to the opensource for VMS.
Yep, with one of the first x86 versions you could download everything, and the plan was to renew the license once a year. They then canceled that license scheme after only one year, in favor of providing a new image every year (as if I want to reconfigure my system every year).
That's not how you attract devs or users.
>Despite our initial aspirations for robust community engagement, the reality has fallen short of our expectations. The level of participation in activities such as contributing open-source software, creating wiki articles, and providing assistance on forums has not matched the scale of the program.
The "aspiration" lasted for a whole year ;)