The problem isn't the language, but rather how to design data structures and algorithms for GPUs.
The primitives and pre-coded kernels provided by CUDA (which solves for the most common scenarios first and foremost) are what's holding things back; to get those algorithms and data structures down to the hardware level, you need something flexible that can talk directly to the hardware.
The pre-coded kernels help a lot, but you don't necessarily have to use them.
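To illustrate that last point (my own sketch, not from the thread): instead of calling a pre-coded library routine such as cuBLAS's SAXPY, you can write the kernel by hand in plain CUDA C++. Everything here is a standard minimal example, not anyone's production code:

```cuda
#include <cstdio>

// Hand-written SAXPY: y[i] = a * x[i] + y[i].
// This is the same operation cuBLAS provides pre-coded,
// written directly as a custom kernel instead.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified (managed) memory keeps the example short.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch with 256 threads per block, enough blocks to cover n.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // 2.0 * 1.0 + 2.0 = 4.0
    cudaFree(x);
    cudaFree(y);
}
```

Writing it yourself is exactly where the flexibility argument comes in: once you own the kernel, you can restructure the data layout and access pattern for your problem instead of fitting it to the library's.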
I'm a novice in the area, but Chris is well respected in it and cares a lot about performance.
Also, what NVIDIA is doing has full Windows support, while Mojo still doesn't support Windows natively; you have to go through WSL.
From their license.
It's not obvious what happens when you have more than 8 users with one GPU each (typical laptop users).