Additionally, the second and third rounds of desktop parts released on 10nm (aka "Intel 7") are now known to have pushed clocks and voltages somewhat beyond the limits of the process, leading to embarrassing reliability problems and microcode updates that hurt performance. Intel has squeezed everything they can out of 10nm and mostly put it behind them, so talking about it as if they only recently ramped production gets where they are in the lifecycle completely wrong.
Whether or not that's a good thing, well, people have their opinions, but they're considered a national security necessity.
(I miss having these kinds of convos on twitter as networkservice ;)
Conventionally this is done in software with a hypervisor that emulates network devices for the VMs (virtio-net, vmxnet3, etc.) and does some sort of network encapsulation (VLAN, VXLAN, etc.). Similar things are done for virtual block storage (virtio-blk, emulated NVMe, etc.) to attach to remote drives.
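To make the encapsulation part concrete: the hypervisor's vswitch takes the VM's Ethernet frame, sticks an 8-byte VXLAN header carrying the tenant's VNI in front of it, and ships the result over UDP to the VTEP on the other end. Here's a minimal Python sketch of just the header step (the VNI value and the toy frame are made up for illustration):

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port


def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Wrap a tenant Ethernet frame in a VXLAN header (RFC 7348).

    The vswitch would then put this payload inside an outer UDP/IP/Ethernet
    header addressed to the remote VTEP; only the 8-byte VXLAN header
    itself is built here.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Flags byte 0x08 means "VNI present"; bytes 1-3 and 7 are reserved.
    # The 24-bit VNI occupies bytes 4-6, so shift it into the top of the
    # final 32-bit word.
    header = struct.pack("!B3xI", 0x08, vni << 8)
    return header + inner_frame


if __name__ == "__main__":
    # A made-up minimal frame: broadcast dst MAC, dummy src MAC, IPv4 ethertype, payload.
    inner = bytes.fromhex("ffffffffffff" "020000000001") + b"\x08\x00" + b"hello"
    print(vxlan_encapsulate(inner, vni=5001).hex())
```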
If the IaaS clients need high bandwidth or are running their own virtualization stack, the infrastructure provider has nowhere to put this software. You can do the infrastructure's network and storage isolation on the network switches with some extra work, but then the termination of the networking and storage has to be done in cooperation with the clients (and you can't trust them to do it right).
Here, the host just sees PCI-attached network interfaces and directly attached NVMe devices, which pop up as defined by the infrastructure. These cards are the compromise where you let everyone have bare metal but keep your software-defined network and storage. In advanced cases you could even dynamically shape how bandwidth is split between network and storage traffic.
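From the host's point of view it really is just ordinary PCI. A Linux-only sketch like the one below (plain sysfs reads, nothing DPU-specific assumed) shows a bare-metal OS enumerating what look like regular Ethernet and NVMe class devices, with no hint that the card behind them is terminating the provider's overlay network and remote block storage:

```python
from pathlib import Path

# PCI class code prefixes: 0x0200 = Ethernet controller, 0x0108 = NVMe controller.
CLASSES = {"0x0200": "Ethernet NIC", "0x0108": "NVMe drive"}


def list_devices():
    """Walk /sys/bus/pci/devices and report NICs and NVMe controllers.

    On a host backed by one of these cards, the devices look identical to
    locally attached hardware even though network and storage are actually
    provided (and isolated) by the infrastructure side of the card.
    """
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        cls = (dev / "class").read_text().strip()      # e.g. "0x010802"
        vendor = (dev / "vendor").read_text().strip()  # e.g. "0x15b3"
        for prefix, label in CLASSES.items():
            if cls.startswith(prefix):
                print(f"{dev.name}  {label}  vendor={vendor}")


if __name__ == "__main__":
    list_devices()
```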