Having started on 8-bit microcomputers and progressed through various desktop platforms and servers, I always saw mainframes as esoteric hulking beasts: fascinating, but mysterious to me. In recent years I've started expanding my appreciation of classic mainframes and minis by reading blogs and retro computing history. This IEEE retrospective on the creation of the IBM 360 was eye-opening. https://spectrum.ieee.org/building-the-system360-mainframe-n...
Having read pretty deeply on the evolution of early computers from the ENIAC era through Whirlwind, CDC, early Cray and DEC, I was familiar with the broad strokes, but I never fully appreciated how much of a step change the IBM 360 was in both software and hardware architecture. It's also a dramatic story, because it's rare for a decades-old company as successful and massive as IBM to take such a huge "bet the company" risk. The sheer size and breadth of the 360 effort, as well as its long-term success, profoundly impacted the future of computing. It's interesting to see how architectural concepts from the 360 (as well as DEC's popular PDP-8, 10 and 11) went on to influence the design of early CPUs and microcomputers. The engineers and hobbyists creating early micros had mostly learned computing in the late 60s and early 70s on the 360s and PDPs that were ubiquitous in universities.
https://direct.mit.edu/books/monograph/4262/IBM-s-360-and-Ea...
After reading the IEEE article I linked above, I got the book the article was based on ("IBM: The Rise and Fall and Reinvention of a Global Icon"). While it's a thorough recounting of IBM's storied history, it wasn't what I was looking for. The author specifically says his focus was not the technical details as he felt too much had been written from that perspective. Instead that book was a more traditional historian's analysis which I found kind of dry.
Too much technical detail? Not for technologists - that's the interesting bit!
There are several drawbacks to maintaining this kind of compatibility but, nevertheless, it's impressive.
I find mainframes fascinating, but I'm so unfamiliar with them that I don't know what I'd ever use one for, or why (as opposed to "traditional" hardware or cloud services).
ESPOL/NEWP is one of the very first systems programming languages: it is safe by default, with explicit unsafe code blocks.
The whole OS was designed with security first in mind (think Rust in 1961), so their customers are companies that take this very seriously, not only ones running COBOL.
The motto is unsurpassed security.
https://www.unisys.com/product-info-sheet/ecs/clearpath-mast...
I think you are exaggerating that selling point – maybe historically that was true, but nowadays nobody is running MCP because it is more secure than any alternative, they are running it because it is a legacy system and migrating off it is too hard or expensive (at least for now), or the migration project is still underway or stuck in development hell, or they tried migrating off it before and the migration project failed.
People who are shopping for the highest security money can buy would be looking at something like BAE Systems XTS-400, not Unisys MCP. (Or seL4, which is open source and hence free, but you'll spend $$$$ on the custom work required to make it do anything useful.)
Especially since MCP now runs as a software CPU emulator under x86-64 Linux, and Unisys has done a lot of work to enable Linux and MCP processes to seamlessly talk to each other – so you can create hybrid apps which retain the MCP core but add components written in Java/etc running directly under Linux – that makes it really hard for contemporary MCP to provide much more security than the host Linux system does.
Also, I didn't come up with this myself; it is part of their marketing materials and white papers.
Finally, if it was worthless they would have probably dropped it by now.
> Finally, if it was worthless they would have probably dropped it by now.
Companies love to "talk up" their products in marketing materials. It is far from uncommon for those materials to contain claims which, while not entirely false, aren't exactly true either – and I suspect that's what's happening here.
IBM does the same thing. Listen to some IBM i person tell you how "advanced" their operating system is compared to everything else: sure, there's some theoretical truth to that, but it was more true in the past than it is in the present.
That would be like saying Java/Android and .NET applications, or IBM i, are running under an emulator, even though technically a dynamic compiler is a form of emulation.
Compare that to a software emulator running under a commodity general-purpose operating system: it is a lot further from the bare metal. Once you consider all the layers in between (the OS kernel, libc, etc), the trusted computing base is a lot larger, and being general-purpose means it includes lots of features the emulator doesn't need or use. From a security viewpoint this is in some respects a step backwards, even though it was made necessary by economics. At the same time it has some practical security benefits: although the general-purpose OS may be theoretically worse from a security perspective, it receives a huge amount of attention, which helps keep it secure, whereas a rarely used proprietary platform, whatever its theoretical advantages, doesn't receive the same attention, making it more likely that vulnerabilities lurk undiscovered.
Otherwise your suggestion seems similar to recommending the latest Itanium by HP (around 2017) running OpenVMS. Which probably would be much faster.
Unisys MCP now runs on Azure, so it would run on whatever Azure uses.
I think both work on commodity x86-64 server hardware; in BAE STOP's case, you need to use the exact hardware in the security evaluation for that evaluation to fully apply; otherwise, you may need to do further analysis and get approval to deviate from it, depending on the policies of the client organisation.
Unisys MCP lacks Common Criteria evaluation (unlike say Red Hat Enterprise Linux), so even if you believe it is more secure than mainstream alternatives, there is no evidence any third party has done any security evaluation to confirm that. (Maybe some really old version did receive a security evaluation, but that has limited relevance to current ones.)
[0] https://www.commoncriteriaportal.org/nfs/ccpfiles/files/epfi...
> In the rare event of a PU failure, one of the spare PUs is immediately and transparently activated and assigned the characteristics of the failing PU. Two spare PUs are always available on an IBM z17 ME1.
PU = Processor Unit, a common name for the CP/IFL/etc variants above.
So you can tolerate at least 2 CPU failures without shutting down (more money buys more spares). Beyond that, you can transparently move active workloads to another (possibly remote) zSeries in order to schedule maintenance.
(3.4mb PDF) https://www.redbooks.ibm.com/redbooks/pdfs/sg248580.pdf
> Four Power Supply Units (PSUs) that provide power to the CPC drawer and are accessible from the rear. Loss of one PSU leaves enough power to satisfy the power requirements of the entire drawer. The PSUs can be concurrently maintained
I would think their customers would demand zero downtime. And hey - if BeOS could power down CPUs almost 30 years ago I would expect a modern mainframe to be able to do it.
(I'm pretty sure BeOS never actually powered off CPUs; it just didn't schedule anything. Linux "hotplug" works the same way today.)
I don't know about physically removing a drawer, but on IBM Z, if there is an unrecoverable processor error, that processor will be shut down and a spare processor brought online to take over, transparently.
I don't know how licensing/costs tie into the CPU/RAM spares.
Nowadays it is a software emulator that runs under Linux – so if the Linux kernel and hardware you are running it on supports CPU hot-swap, then the underlying OS will. I believe at one point Unisys would only let you run it on their own branded x86 servers, they now let people run it in Azure, and I'm sure Microsoft isn't using Unisys hardware for that.
Running Linux in a VM, the hypervisor can implement hotplug whether or not the underlying server hardware physically does. Of course, without physical CPU hot-swap, it may not add much to reliability, but it still can potentially help with scaling.
If you hotplug a new virtual CPU into the Linux VM, you'd probably want to hotplug another emulated mainframe CPU into the mainframe CPU emulator at the same time. No idea if Unisys actually supports that, but they easily could, it is just software – the Linux kernel sends a udev event to userspace when CPUs are plugged/unplugged, and you could use that to propagate the same event into the mainframe emulator.
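For the curious, here's roughly what that propagation could look like from the Linux side; a minimal sketch in Python using pyudev, with the emulator notification (notify_emulator) being purely hypothetical, since I have no idea what management interface Unisys actually exposes:

    # Sketch: listen for CPU hotplug events via udev and forward them to a
    # (hypothetical) mainframe-emulator management interface.
    # Assumes the pyudev package; notify_emulator() is made up for illustration.
    import pyudev

    def notify_emulator(action: str, cpu: str) -> None:
        # Placeholder: in reality you'd call whatever management API/CLI the
        # emulator provides to add or remove an emulated CPU.
        print(f"would tell emulator: {action} {cpu}")

    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    monitor.filter_by(subsystem="cpu")

    for device in iter(monitor.poll, None):
        # device.action is e.g. "online"/"offline" (or "add"/"remove"),
        # device.sys_name is e.g. "cpu3"
        if device.action in ("online", "add"):
            notify_emulator("plug", device.sys_name)
        elif device.action in ("offline", "remove"):
            notify_emulator("unplug", device.sys_name)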
Large institutions (corporations, governments) that have existed more than a couple of decades, and have large-scale mission-critical batch processes that run on them, where the core function is relatively consistent over time. Very few, if any, new processes are automated on mainframes at most of these places, and even new requirements for the processes that depend on the mainframe may be built in other systems that process data before or after the mainframe workflows. But the cost and risk of replacing the well-known, battle-tested systems, finely tuned by years of ironing out misbehavior, often isn't warranted without some large-scale requirements change that invalidates the basic premises of the system. So, they stay around.
I think most mainframe applications involve OLTP, not just batch processing – commonly a hybrid of both. e.g. around a decade ago I did some work for a large bank which used CSC/DXC/Luxoft Hogan as their core banking system – that's a COBOL application that runs under IBM CICS for transaction processing, although I'm sure it had some batch jobs in it too.
(I myself didn't touch any of the mainframe stuff; I was actually helping the project migrate some of its functions to a Java-based solution that ran on Linux. No idea what the current status of it all is a decade on.)
It feels like I must be missing something, or maybe just underestimating how much money is involved in this legacy business.
They all can migrate their apps off the mainframes. It's just that it's cheaper to continue paying for the machines.
A couple of years ago, Australia cancelled its multi-billion dollar project to move off it onto a Linux-based solution (Pegasystems), after having spent over US$120 million on it. The problem was that although the new system did the calculations correctly, it took minutes to process a single individual, something the mainframe could do in 2 seconds.
But I'm 100% sure this had nothing to do with the inherent nature of the problem they were trying to solve – I think it was likely because they'd picked the wrong off-the-shelf product as a replacement and the wrong team to implement it, and if they'd gone with different vendors it could well have succeeded – but after spending years and over US$100 million to discover that, they aren't keen to try again.
Build a database the same size with fake but realistic data. Then let the competitors try to match the constraints.
Actually, I’d love to take part in a challenge like this.
The part that still relies on the mainframe is the entitlement calculation - all the very complex rules which determine what payments each claimant is entitled to. Other aspects have already been moved to (or at least duplicated in) non-mainframe systems, e.g. SAP CRM. Those entitlement rules are written in SOUL, Model 204’s 4GL; a team of programmers in Canberra are kept busy constantly translating legislative updates into SOUL (the government can’t resist the urge to constantly tinker with the details of social security law, so almost every year brings at least a few minor changes, and every few years major ones).
Since this is basically business rules, they decided to use a Java-based low-code/no-code business rules automation platform as a replacement, and tirelessly translated all the business rules encoded in the SOUL code into it. And they succeeded functionally: the new system produced the same results as the old one. But the performance was worse by orders of magnitude, and since the bottleneck was fundamentally single-threaded (the time to process a single record; maybe in theory you could parallelize aspects of it, but I doubt either the old or the new system did), it wasn’t a problem you could solve just by throwing more hardware at it.
Idea I have: keep the SOUL code as-is, and build a SOUL compiler for Linux (e.g. using LLVM). Or even just transpile the SOUL code into C. Totally doable, likely to give similar performance to the original mainframe system… Of course, that wouldn’t solve the problem of “system written in obscure language almost nobody knows any more”, but at least could get it on to a mainstream platform
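To make that concrete, here’s a toy sketch of the shape such a transpiler could take. The rule syntax below is entirely made up (it is not real SOUL, which I don’t claim to know); it just illustrates the pattern of turning declarative entitlement rules into straight-line C that an optimizing compiler can chew on:

    # Toy sketch: "transpile" made-up threshold-style rules into C functions.
    # The input rule format is invented for illustration; real SOUL is far richer.
    RULES = [
        # (rule name, condition on the claimant record, payment expression)
        ("base_rate",   "c->income < 20000",                  "500.0"),
        ("rent_assist", "c->rent > 300 && c->income < 30000", "0.75 * (c->rent - 300)"),
    ]

    def transpile(rules) -> str:
        lines = [
            "typedef struct { double income; double rent; } claimant_t;",
            "",
            "double entitlement(const claimant_t *c) {",
            "    double total = 0.0;",
        ]
        for name, cond, amount in rules:
            lines += [
                f"    /* rule: {name} */",
                f"    if ({cond}) total += {amount};",
            ]
        lines += ["    return total;", "}"]
        return "\n".join(lines)

    if __name__ == "__main__":
        print(transpile(RULES))  # pipe into a .c file and compile with cc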
but… people with the skill set to do this are unlikely to be interested in a government job with a rather limited salary…
And, in most countries, government agencies are strangled by procurement rules which attract firms which are adept at negotiating those rules, even if not always so adept at successfully solving the underlying problem… meanwhile, other firms which might be highly adept at the underlying problem take one look at those rules and think “this isn’t worth it”
Mainframes come from a mindset where degradation in performance is something that requires a scheduled maintenance window. Not just for hardware, but also for software. Compare that to the more modern world of "oh we'll just VACUUM the database in the background". The surrounding ecosystem of software might not even tolerate a rare spurious glitch delaying response an extra second.
That, and all the software stacks on them are huge, complex, custom monsters, and reimplementing the whole thing from scratch on a more common SQL database is not exactly easy while maintaining data integrity and performance, and without being able to pay Silicon Valley salaries.
Certainly. One of the projects I’m working on is just that, and building a comprehensive dataset is A LOT of work. For some uses you can make it work with a little bit of realistic data for the actual tests and use simpler mechanisms to generate bulk data that’s there only to act as noise the program will never actually see (but the database logic will need to contend with).
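For what it’s worth, a minimal sketch of that split (SQLite and the standard library only; the schema is invented for illustration): a handful of hand-crafted realistic rows the test assertions actually look at, plus a cheap generator for bulk noise rows that only exist to give the database engine something to chew on.

    # Sketch: small curated "realistic" dataset + bulk random noise rows.
    # Schema and values are invented for illustration.
    import random
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT, balance REAL)")

    # A few hand-crafted rows the actual test cases will assert against.
    curated = [
        (1, "Alice Example", 1234.56),
        (2, "Bob Overdrawn", -87.10),
    ]
    conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)", curated)

    # Bulk noise: the program under test never looks at these rows, but the
    # query planner, indexes and I/O paths have to contend with them.
    rng = random.Random(42)
    noise = ((i, f"cust_{i}", round(rng.uniform(-500, 50000), 2))
             for i in range(1000, 101_000))
    conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)", noise)
    conn.commit()
    print(conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0])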
(Many IBM customers likely are piece-by-piece moving to Linux, which runs fine on the zSeries hardware, with the same blazing fast interconnects for enterprise workloads etc. I would expect the migration off the mainframe hardware will happen only after the gradual software rewrite. Give it a decade or five.)
Many of the big existing mainframe customers already have multiple max capacity models and are pushing them to their limits as web and analytics and AI/ML and a bunch of other factors increase the overall amount of workload finding their way to mainframes. IBM wouldn't be making those brand new generations of 4-frame models with a new larger max capacity if there weren't customers buying them.
According to a 2024 Forrester Research report, mainframe use is as large as it's ever been, and expected to continue to grow.
Reasons include (not from the report) hardware reliability and scalability, and the ability to churn through crypto-style math in a fraction of the time/cost of cloud computing.
Report is paywalled, but someone on HN might have a free link.
Whether the Bitcoin bros want to believe it or not, they didn't invent the word "crypto."
Nobody tell them it even pre-dates computers!
Of course we are talking about encryption here. TLS and AES etc etc. Not Bitcoin mining, which would indeed not be very cost effective.
It also has a ton of high availability features and modularity that _does_ fit with legacy workloads and mainframe expectations, so I'm a little unclear who wants to pay for both sets of features in the same package.
I agree that many mainframe workloads are probably not growing so what used to require a whole machine probably fits in a few cores today.
In terms of actual functionality, no matter how good it is, it's not price-competitive.
Here's a brochure that might be useful to read:
https://www.ibm.com/downloads/documents/us-en/107a02e95d48f8...
It's an IBM brochure, so naturally it's pumping mainframes, but it still has lots of interesting facts in it.
There's probably some minor strategic relevance here. E.g. for the government which has some workloads (research labs, etc.) that suit these systems, it's probably a decent idea not to try and migrate over to differently-shaped compute just to keep this CPU IP and its dev teams alive at IBM, to make sure that the US stays capable of whipping up high-performance CPU core designs even if Intel/AMD/Apple falter.
It seems clear to me that, prior to robust systems for orchestrating across multiple servers, you would install a mainframe to handle massive transactional workloads.
What I can never seem to wrap my head around is if there are still applications out there in typical business settings where a massive machine like this is still a technical requirement of applications/processes or if it's just because the costs of switching are monumental.
I'd love to understand as well!
That's my biggest pet peeve with people who want to ditch mainframes: in my experience they seem to care very little about the quality and performance of the software, or they would only be thinking of replacing COBOL and Assembler code with an equivalently performant modern language and dialect. The desire to migrate is often driven primarily by a desire for cheap, easily replaceable developers.
If you have a workload that cannot go down, it's going to be more reliable than orchestrating a bunch of cloud servers where you're dealing with the network, hosts going down or failing, or errors in the CPU (yep ... happens at scale).
They also test the hardware to exacting standards - 8.8 magnitude earthquake is one example.
That thing is a dreadnought of a matmul machine with some serious uptime, and it can crunch numbers without slowing down or losing availability.
Or, possibly, you can implement a massively parallel version of WOPR/Joshua on it and let it rip through scenarios for you. Just don't connect it to live systems (not entirely joking, though).
P.S.: I'd name that version of Joshua JS2/WARP.
Do you have a credit card? Do you bank in the USA? If you answered "yes" to either of the above questions, you interact indirectly with a mainframe.
Edit: Oh yeah, just saw MasterCard has some job posting for IBM Mainframe/COBOL positions. Fascinating.
Yeah, Linux/Unix are way better on both than they used to be, but on a mainframe, it's just a totally different level.
Most firms have so-so software in need of ultra-reliable hardware; not everyone is Google.
If you understand the benefits of cloud over generic x86 compute, then you understand mainframes.
Cloud is mainframes gone full circle.
Except that now you need to develop the software that gives mainframes their famed reliability yourself. The applications are very different: software developed for cloud always needs to know that part of the system might become unavailable and work around that. A lot of the stack, from the cluster management ensuring a failed node gets replaced and the processes running on them are spun up on another node, all the way up to your code that retries failed operations, needs to be there if you aim for highly reliable apps. With mainframes, you just pretend the computer is perfect and never fails (some exaggeration here, but not much).
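As a trivial example of the kind of defensive code that ends up everywhere in the cloud version of an app but simply doesn't exist in the mainframe version, here's a retry-with-backoff sketch; call_remote_service() is a hypothetical stand-in, not any particular API:

    # Sketch: retry with exponential backoff and jitter, the sort of defensive
    # wrapper cloud code needs around almost every remote call.
    # call_remote_service() is a hypothetical stand-in for some network operation.
    import random
    import time

    class TransientError(Exception):
        pass

    def call_remote_service():
        # Stand-in: fails transiently most of the time to exercise the retry loop.
        if random.random() < 0.7:
            raise TransientError("node unavailable")
        return "ok"

    def with_retries(fn, attempts=5, base_delay=0.1):
        for attempt in range(attempts):
            try:
                return fn()
            except TransientError:
                if attempt == attempts - 1:
                    raise
                # Exponential backoff with jitter to avoid thundering herds.
                time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

    print(with_retries(call_remote_service))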
Also, reliability is just one aspect - another impressive feature is their observability features. Mainframes used to be the cloud back then and you can trace resource usage with exquisite detail, because we used to bill clients by CPU cycle. Add to that the hardware reliability features built-in (for instance, IBM mainframes have memory in RAID-like arrays).
The cache design in the Z is so different from cloud computing for collaborative job processes.
Instead of having everyone doing telnet, rsh and X Windows connections into the team's development server, we now use ssh and the browser alongside cloud IDEs.
I mean, no one except banks can afford one, let alone make the cost back on opex or capex, and so we all resort to MySQL on Linux; but isn't the cost the only problem with them?
Banks smaller than the big ~5 in the US cannot afford anything when it comes to IT infrastructure.
I am not aware of a single state/regional bank that wants to have their IBM on premise anymore - at any cost. Most of these customers go through multiple layers of 3rd party indirection and pay one of ~3 gigantic banking service vendors for access to hosted core services on top of IBM.
Despite the wildly ramping complexity of working with 3rd parties, banks still universally prefer this over the idea of rewriting the core software for commodity systems.
Sure, there will be a lot of overhead in having tens to hundreds of servers vs a single system, but for lots of workloads it is manageable and certainly worth the tradeoff.
Can you replace 25% of your cores without stopping the machine?
> that same Dell can have 3 TB of RAM.
How does it deal with a faulty memory module? Or two? Does it notice the issue before a process crashes?
> z17 has only 25G networking
They have up to 12 IO drawers with 20 slots each. I think the 48 ports you got are on the built-in switch.
If you have all the workloads in virtual machines, and you migrate them to other hosts, stopping a single machine is mostly immaterial.
Take a look at the cache size on the Telum II, or better yet look at a die shot and do some measuring of the cores. Then consider that mainframe workloads are latency sensitive and those workloads tend to need to scale vertically as long as possible.
The goal is not to rent out as many vCPUs as possible (a business model in which you benefit greatly by having lots and lots of small cores on your chip). The goal for zArch chips is to have the biggest cores possible with as much area used for cache as possible. This is antithetical to maximizing core density, so you will find that each dual chip module is absolutely enormous, that each core takes up more area in the zArch chips than in x86_64 chips, and that those chips therefore have significantly lower core density.
The end result is likely that the zArch chips are going to have much higher single-thread perf, whereas they will probably get smacked by, say, a Threadripper on multithreaded workloads where you are optimizing for throughput. This ignores intricacies about vectorization, what can and can't be accelerated, whether you want binary or decimal floating point, and other details; it is a broad generalization about the two architectures' general performance characteristics.
Likewise, the same applies for networking. Mainframe apps are not bottlenecking on bandwidth. They are way less likely to be web servers dishing out media for instance.
I really dislike seeing architectures compared via such frivolous metrics because it demonstrates a big misunderstanding of just how complex modern CPU designs are.
A Rockhopper 4 Express, a z16 without z/OS support (running Linux) was in the mid 6 digits. It's small enough to co-locate on a rack with storage nodes. While z/OS will want IBM storage, Linux is much less picky.
IBM won't release the numbers, but I am sure it can host more workloads in the same chassis than the average 6-digit Dell or Lenovo.
(Where you can save money by buying Linux or Java accelerators to run things on for free.)
The advantage of this model from a business operations standpoint is that you don't have to think about a single piece of hardware related to the mainframe. IBM will come out automagically and fix issues for you when the mainframe phones home about a problem. A majority of the system architecture is designed on purpose to enable seamless servicing of the machine without impacting operations.
I'd rather have a fault-tolerant distributed software system running on commodity hardware, that way there's a plurality of hardware and support vendors to choose from. No lock-in.
These kinds of monsters run in critical environments such as airports, with AS400 or similar terminals being used by secretaries. These kinds of workloads, with their reliability, security, and testing requirements, are no joke. At all. This is not your general-purpose Unix machine.
But then you'd have to develop it yourself. IBM has been doing just that for 60 years (on the 360 and its descendants).
What if the business demands a certain level of serialized transaction throughput that is incompatible with ideas like paxos?
You will never beat one fast machine at a serialized narrative, and it just so happens that most serious businesses require these semantics.
How much does downtime cost you per hour? What are the consequences if your services become unavailable?
Though the sky is the limit. The typical machine I would order had a list price of about 1 million. Of course no one pays list. Discounts can be pretty substantial depending on how much business you do with IBM or how bad they want to get your business.
It's easier and harder at the same time to buy older hardware. That's half the challenge though because the software is strictly licensed and you pay per MIPS.
Here's a kid who bought a mainframe and then brought it up:
Previous generation machines that came off-lease used to be listed on IBM's web site. You could have a fully-maxed-out previous-generation machine for under $250k. Fifteen years ago I was able to get ballpark pricing for a fully-maxed-out new machine, and it was "over a million, but less than two million, and closer to the low end". That being said, the machines are often leased.
If you go with z/vm or z/vse, the OS and software are typically sold under terms that are pretty much like normal software, except it varies depending on the capacity level of the machine, which may be less than the physical number of CPUs in the machine, since that is a thing in IBM-land.
If you go for z/os, welcome to the world of metered billing. You're looking at tens of thousands of dollars in MRC just to get started, and if you're running the exact wrong mix of everything, you'll be spending millions just on software each month. There's a whole industry that revolves around managing these expenses. Still less complicated than The Cloud.
But IBM _does_ have their own mainframe emulator, zPDT (z Personal Development Tool), sold to their customers for dev and testing (under the name zD&T -- z Development and Test), and to ISVs under their ISV program. That's what IBM's own developers would be using if they're doing stuff under emulation instead of LPARs on real hardware.
(And IBM's emulator is significantly faster than Hercules, FWIW, but overall less feature-full and lacks all of the support Hercules has for older architectures, more device types, etc.)
There was a bit of a legal fight between IBM and TurboHercules SAS, a company that tried to force IBM to license z/OS to their users. IBM has been holding a grudge ever since (probably on the advice of their legal team).
Could they just list prices? Sure. Will they ever do it? No.
(SCNR)
The Talos II:
https://wiki.raptorcs.com/wiki/Talos_II
> EATX form factor
> Two POWER9-compatible CPU sockets accepting 4-/8-/18- or 22-core Sforza CPUs
"Entry" level is $5,800 USD.
There won't be a POWER10 version from them because of the proprietary bits required.
https://www.talospace.com/2023/10/the-next-raptor-openpower-...
> POWER10, however, contained closed firmware for the off-chip OMI DRAM bridge and on-chip PPE I/O processor, which meant that the principled team at Raptor resolutely said no to building POWER10 workstations, even though they wanted to.
https://www.osnews.com/story/137555/ibm-hints-at-power11-hop...
They aren't cheap and they aren't for everyone. But it meets my needs and it puts my money where my mouth is.
Back in the 90s and early 2000s, there were several non-x86 architectures that were more powerful, and that went 64-bit long before Intel ever did: the DEC Alpha, SPARC, and others. I was also too poor to afford those back then, but I remember them fondly.
At some point I was reading e-mail on a 64-bit SGI machine while we waited for the Dell the company ordered for me to arrive.
The day it came in was one of the saddest days of my life.
The microarch is closed and IBM-specific. However, the ISA is open and royalty-free, and the on-chip firmware is open source and you can build it yourself. In this sense it's at least as open as, say, many RISC-V implementations.
> Verilog RTL for OpenSPARC T2 design
> Verification environment for OpenSPARC T2
> Diagnostics tests for OpenSPARC T2
> Scripts and Sun internal tools needed to simulate the design and to do synthesis of the design
> Open source tools needed to simulate the design
https://www.oracle.com/servers/technologies/opensparc-t2-pag...
You can certainly get Power cores with the VHDL and everything; the most notable of these is Microwatt, and IBM even maintains it. There are also A2O and A2I.
That said, I don't think it's reasonable to expect that a company that put R&D money into designing a high performance chip should give away the store. There has to be some incentive. I'm satisfied that I don't have any unexplained or opaque firmware blobs in my POWER9 chips and the ISA and its internal workings are well-documented. That was good enough for the FSF, and it's good enough for me.
There is a deskside POWER10 machine from IBM, that uses their smaller half-rack server, but it has somewhat limited expansion capabilities.
You may be thinking of IBM i (formerly known as AS/400 and i5), which has a completely abstracted instruction set that on modern systems is internally recompiled to Power.
The I/O probably isn't endless networking adaptors, so what is it?
“The IBM z17 supports a PCIe I/O infrastructure. PCIe features are installed in PCIe+ I/O drawers. Up to 12 I/O drawers per IBM z17 can be ordered, which allows for up to 192 PCIe I/O and special purpose features.
For a four CPC drawer system, up to 48 PCIe+ fan-out slots can be populated with fan-out cards for data communications between the CPC drawers and the I/O infrastructure, and for coupling. The multiple channel subsystem (CSS) architecture allows up to six CSSs, each with 256 channels.
The IBM z17 implements PCIe Generation 5 (PCIe+ Gen5), which is used to connect the PCIe Generation 4 (PCIe+ Gen4) dual port fan-out features in the CPC drawers. The I/O infrastructure is designed to reduce processor usage and I/O latency, and provide increased throughput and availability.”
They want you to be able to switch out the old mainframe racks for z17 racks and still stay within the same power budget.
Each CPC drawer can have up to 3 I/O drawers, for up to 48 x16 hot-swappable PCIe+ slots, and a system can have up to four CPC drawers, for a total of 192 potential PCIe+ modules.
If you set it up right any process running on any CPC drawer can directly access any PCIe+ device in the system no matter where it is.
If you're super slick, any process running on any CPC drawer in a Z17 that is directly connected to another Z17, even one many miles away, can directly access any PCIe+ device in the remote system. I don't know if anyone is actually doing that. That would be silly.
Doing that requires a lot of stuff, which takes up space.
IBM technology could be as common as Linux and x86 if they didn't overcharge so badly.
a surprising statement. mind elaborating?
asking because sustained, dependable, high I/O per unit of floor space (combined with high-throughput, low-latency compute) are the core selling points of zSeries