My research conclusion at the time was that, while OpenShift is a great product worthy of consideration, it really only shines in organizations that are heavily invested in microservices or Kubernetes. If you (or more specifically, your vendors) haven’t migrated into that state, it’s not worth it compared to a RHEL server license and their KVM+Cockpit solution for bog standard VMs.
So what if you haven’t migrated into that state, but still want a hypervisor that isn’t VMware and comes with enterprise support?
OpenShift Virtualization Engine is KubeVirt (aka KVM/libvirt).
Right now they install proxmox-kernel-6.8.12-6 by default (via pseudo-packages called proxmox-default-kernel and proxmox-kernel-6.8 pointing at it), and offer proxmox-kernel-6.11.0-2 as an opt-in package (installed by pulling in proxmox-kernel-6.11).
I’ve been using the latest opt-in kernels on all of my Proxmox nodes for a few years now, and I’ve never had any issues at all with that myself.
That's a big gotcha - ZFS is non-free so of course it cannot be part of Debian proper. Hopefully we'll get feature parity via Btrfs or Bcachefs at some point in the future.
ZFS is under the CDDL, which is a perfectly good free and open-source software license; it's just that some people view it as incompatible with the GPL (IANAL, but this is apparently somewhat controversial; see the Wikipedia page [0]), so Debian doesn't distribute ZFS .ko files for Linux in binary form. They do, however, have an official package for it[1], just using DKMS to compile it locally.
[0] https://en.wikipedia.org/wiki/Common_Development_and_Distrib...
Ultimately, I use Proxmox as a hardware hypervisor only, so I don't mind that it uses its own kernel. Everything I run is in its own VM, with its own kernel that is set up the way I want.
I use Proxmox as well in a small-ish deployment, but have also heard good things about XCP-ng.
At a previous job we used OpenStack.
Correction: I see now that the projects you reference are Solaris-based. I am down with that cause too, but if you are a BSD/Solaris shop, expect to do a lot of things on your own. The Linux virtualization space is substantially larger (not necessarily suggesting it is better...).
- Large enterprises that previously purchased hardware-with-accompanying-VMware-licenses from OEMs like Dell-EMC: Broadcom refused to even honor pre-acquisition license keys from these sources, leaving many private data centers in the lurch unless they paid a huge premium for a new Broadcom-originated annual subscription (whereas the original key was a one-off purchase).
- Service providers with an ongoing "small-percentage-of-revenue per year, payable in arrears" agreement, who were suddenly forced into a "hard vCPU and vRAM limit" subscription, payable for at least 2 years upfront.
However, the magic word for both customer segments is "vMotion", i.e. live-migration of VMs across disparate storage. No OSS and/or commercial (including Hyper-V) solution is able to truly match what VMware could (and can, at the right price) do in that space...
Someone's gonna start working on that soon. Necessity is the mother of invention.
To me, this will be the UNIX wars moment for virtualization.
Originally, UNIX was something AT&T/Bell Labs mainly used for their own purposes. Then people wanted to use it for themselves. AT&T cooked up some insane price (like $20k in 1980s money) for the license for System V. That competed with the BSDs for a while. Then, some nerd in a college office in Finland contributed his kernel to the GNU project. The rest is history.
UNIX itself is somewhat of a niche today, with the vast majority of former use cases absorbed by GNU/Linux.
This feels like an effort by Broadcom to suck up all of the money in the VMware customer base, thinking it's too much of a pain in the ass to migrate off of their wares. In some circumstances, they're not wrong, but there are going to be teams at companies talking about how to show VMware the door permanently as a result of this.
Whether Broadcom is right that they can turn a profit on the acquisition with the remaining install base remains to be seen.
https://digitstodollars.com/2022/06/15/what-has-broadcom-bec...
I hate when finance people talk like this.
No, it's not confusing to people in software. We're well aware of your (finance) industry's reputation of sucking capital out of necessary, competitive companies for your own personal gain. If we thought we could get away with it, we'd do something about it.
The large majority of managed languages used in such scenarios, whether compiled to native code or VM-based, have rich ecosystems that abstract away the underlying platform.
Even more so if going deep into serverless, chiseled containers, unikernel-style deployments, or similar technologies.
Naturally there is still plenty of room for traditional UNIX style workloads.
The docs say open source can do live migration; see https://www.linux-kvm.org/page/Migration and https://docs.redhat.com/en/documentation/red_hat_enterprise_...
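For what it's worth, here is a minimal sketch of that mechanism using the libvirt Python bindings (the hostnames and VM name are placeholders; the NON_SHARED_DISK flag is what approximates the "across disparate storage" case by streaming local disk images along with guest RAM):

    # Live-migrate a running KVM guest between two hosts via libvirt
    # (python3-libvirt; hostnames and VM name below are placeholders).
    import libvirt

    src = libvirt.open("qemu:///system")                  # source hypervisor
    dst = libvirt.open("qemu+ssh://dest-host/system")     # destination hypervisor

    dom = src.lookupByName("my-vm")

    # VIR_MIGRATE_LIVE keeps the guest running while memory is copied;
    # VIR_MIGRATE_NON_SHARED_DISK also copies local disk images, for the
    # case where the two hosts do not share storage.
    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_NON_SHARED_DISK
    dom.migrate(dst, flags, None, None, 0)

Whether that covers the vMotion use cases people actually pay for (scheduling, admission control, the tooling around it) is a separate question, but the underlying primitive is there.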
Proxmox is light-years behind this use case, and so are most other vendors, especially if you are building private/public clouds with multi-tenancy in mind.
NSX is really well designed and scales nicely (it even has MPLS/EVPN support for telecom service provider integration).
Most open source and other commercial offerings have solved both the compute and storage aspects quite well. But on the networking front, they are really not comparable.
Proxmox, for instance, only supports VXLAN encapsulation or VLANs, without support for a proper control plane like EVPN. Heck, route injection via BGP is only doable by DIY'ing it on top of Proxmox.
"Just using VLANs" is not going to cut it if you want to really scale across datacenters and with multiple tenants. NSX does all of this really nicely, without having to touch the network itself at all, thanks to encapsulation and EVPN route discovery.
Since IBM already has OpenShift I'm not sure how much time and effort they want to put into Nomad virtualization, but I'd love it as an alternative to Kubernetes.
Realistically, all the legacy workloads (those that are singletons and can't be load-balanced, need an active GUI session, etc.) are going to be problems forever, even if you keep VMware around.
How is this not contract violation?
Storage vMotion requires a hefty license (as does DVS and the other useful things, such as containers). Proxmox does it all out of the box at a very reasonable price point.
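For the curious, roughly what that looks like driven through the Proxmox API with the third-party proxmoxer client (host, node names, VM ID and credentials are placeholders, and the exact parameter names should be checked against the API reference):

    # Online migration of a Proxmox VM to another node, including its local
    # disks, via the /nodes/{node}/qemu/{vmid}/migrate API endpoint.
    # Rough sketch: host, node names, VM ID and token are placeholders.
    from proxmoxer import ProxmoxAPI

    prox = ProxmoxAPI("pve1.example.com", user="root@pam",
                      token_name="automation", token_value="...",
                      verify_ssl=False)

    # 'online' keeps the guest running; 'with-local-disks' live-copies
    # non-shared disks to the target node (the Storage vMotion analogue).
    prox.nodes("pve1").qemu(100).migrate.post(
        target="pve2", online=1, **{"with-local-disks": 1})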
Hell, VMware won't even let you use LLDP until you're pissing money out of all orifices. You only get CDP for "free".
After 20+ years of being a VMware fanboi I am migrating all my customers to Proxmox. I've had enough.
Live migration of containers via the CRIU featureset (checkpoint+restore in userspace, which is now part of the mainline Linux kernel) is also an interesting theoretical possibility - AIUI the Kubernetes folks are at least thinking about supporting it. (Live migration fits remarkably well with containers since it requires comprehensive namespacing of all system resources - abstracting away from any dependence on the local machine - which is also how containerization works to begin with.)
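To make the CRIU primitive concrete (this is just the checkpoint/restore building block, not the Kubernetes integration; the PID and image directory below are placeholders):

    # Checkpoint a running process with CRIU and restore it from the saved
    # images. Needs criu installed and root; pid and directory are placeholders.
    import os
    import subprocess

    pid = 12345              # process (tree) to checkpoint
    images = "/tmp/ckpt"     # directory CRIU writes its image files to
    os.makedirs(images, exist_ok=True)

    # dump: freeze the process tree and serialize its state (memory, fds,
    # namespaces, ...) to the image directory
    subprocess.run(["criu", "dump", "-t", str(pid), "-D", images, "--shell-job"],
                   check=True)

    # restore: recreate the process from those images -- potentially on another
    # host, if the images and the files the process had open are available there
    subprocess.run(["criu", "restore", "-D", images, "--shell-job"], check=True)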
VMware used to go a bit further, in that they allowed your compute nodes to fail and/or your storage nodes to fail, without adverse effects.
If Oxide can do that, out-of-the-box, right-now, they'll be having a field day. Otherwise, my reservations about Oxide's business model remain...
The storage service uses OpenZFS for all data storage. This marries Oxide’s distributed data storage and multi-node failure resiliency with the dependability and efficiency OpenZFS has earned in its 20 years of running demanding workloads.
The Oxide control plane monitors performance metrics as another early signal of component failure. As sleds and SSDs are rotated in and out, the Oxide control plane migrates storage regions to ensure the appropriate redundancy.
OpenZFS checksums and scrubs all data for early failure detection. Virtual disks constantly validate the integrity of your data, correcting failures as soon as they are discovered.
No? It's also about load balancing and draining compute/storage resources in preparation for maintenance.
Most pertinently: as long as your alternative doesn't cover any vMotion use-cases, customers will remain 'in talks' with Broadcom...
He/She is thinking about VMware Fault Tolerance