Also, what NVIDIA is doing has full Windows support, while Mojo still isn't there on Windows, other than going through WSL.
Naturally, assuming they are using laptops with NVIDIA GPUs.
The problem isn't the language; it's how to design the data structures and algorithms for GPUs.
Basically, imagine being able to target CUDA without having to do much extra work for your inference to also run on other GPU vendors, e.g. AMD, Intel, Apple, all with performance matching or surpassing what the hardware vendors themselves can come up with.
Mojo comes into the picture because you can program MAX with it and create custom kernels that are JIT-compiled to the right vendor code at runtime.
The primitives and pre-coded kernels provided by CUDA (which solves for the most common scenarios first and foremost) are what's holding things back. To get those algorithms and data structures down to the hardware level, you need something flexible that can talk directly to the hardware.
The pre-coded kernels help a lot, but you don't necessarily have to use them.
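
To make that trade-off concrete, here's a minimal sketch in plain CUDA (not Mojo/MAX code): the common case goes through a pre-coded cuBLAS routine, and a hand-written kernel covers a fused step the library doesn't offer as a single call. The kernel and buffer names (fused_scale_bias, d_x, d_y, d_bias) are made up for the example.

    #include <cstdio>
    #include <cuda_runtime.h>
    #include <cublas_v2.h>

    // Hand-written kernel: y[i] = alpha * y[i] + bias[i], a fused step
    // that isn't a single pre-coded cuBLAS call.
    __global__ void fused_scale_bias(float* y, const float* bias, float alpha, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = alpha * y[i] + bias[i];
    }

    int main() {
        const int N = 1 << 20;
        float *d_x, *d_y, *d_bias;
        cudaMalloc(&d_x, N * sizeof(float));
        cudaMalloc(&d_y, N * sizeof(float));
        cudaMalloc(&d_bias, N * sizeof(float));
        cudaMemset(d_x, 0, N * sizeof(float));
        cudaMemset(d_y, 0, N * sizeof(float));
        cudaMemset(d_bias, 0, N * sizeof(float));

        // 1) Pre-coded path: cuBLAS SAXPY computes y = 2*x + y.
        cublasHandle_t handle;
        cublasCreate(&handle);
        const float two = 2.0f;
        cublasSaxpy(handle, N, &two, d_x, 1, d_y, 1);

        // 2) Custom path: launch the hand-written fused kernel ourselves.
        int threads = 256;
        int blocks = (N + threads - 1) / threads;
        fused_scale_bias<<<blocks, threads>>>(d_y, d_bias, 0.5f, N);
        cudaDeviceSynchronize();

        cublasDestroy(handle);
        cudaFree(d_x); cudaFree(d_y); cudaFree(d_bias);
        printf("done\n");
        return 0;
    }

(Builds with nvcc and -lcublas.) The point being argued above is that a portable layer like MAX/Mojo aims to let you write the second kind of kernel once, instead of per vendor.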
I'm a novice in the area, but Chris is well respected here and cares a lot about performance.
From their license.
It's not obvious what happens when you have >8 users, with one GPU each (typical laptop users).