When I looked at the profiler, I was confused to see that one worker thread was at 100% usage the whole time it was running. At first, I thought that maybe it was actually running the code via Wasm on the CPU rather than on the GPU like it said.
Instead, it turns out that the worker was just running `emscripten_futex_wait`, which as far as I can tell is implemented as a busy-wait loop. Probably doesn't matter for performance, since I imagine that's just for the sleep call anyway.
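To illustrate the general shape (this is just a sketch, not Emscripten's actual code): a futex-style wait on the web can either block with `Atomics.wait` or fall back to spinning, and the spin variant is what shows up as a pegged core in a profiler even though the thread is logically asleep. The `canBlock` helper here is hypothetical.

```ts
// Sketch only: emulating a futex-style wait in the browser.
// Assumes `view` is an Int32Array backed by a SharedArrayBuffer.

function futexWait(view: Int32Array, index: number, expected: number, timeoutMs: number): void {
  // Worker threads are allowed to block with Atomics.wait...
  if (canBlock()) {
    Atomics.wait(view, index, expected, timeoutMs);
    return;
  }
  // ...but where blocking isn't permitted, the fallback is a spin loop,
  // which a profiler reports as 100% CPU usage.
  const deadline = performance.now() + timeoutMs;
  while (Atomics.load(view, index) === expected && performance.now() < deadline) {
    // busy wait
  }
}

// Hypothetical helper: blocking is disallowed on the main thread,
// so only spin there.
function canBlock(): boolean {
  return typeof window === "undefined";
}
```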
----
Altogether this is an incredibly cool tool. I'm sure there is some performance gap compared to native, but even so this is extremely impressive and likely has a ton of potential use cases.
Please do feature detection, not browser detection (a sketch of what I mean is below).
```
TypeError: B.values().some is not a function. (In 'B.values().some(r=>r.args.length)', 'B.values().some' is undefined)
```
EDIT: I got the same error with all three sample scripts.
EDIT: On Nightly, `navigator.gpu` is available; I checked that in the console.
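Incidentally, the TypeError above looks like a missing iterator-helpers method (`.some` called on an iterator) rather than missing WebGPU, which is itself an argument for detecting features. As for WebGPU, a minimal feature-detection sketch could look like this, checking for `navigator.gpu` and a usable adapter instead of sniffing the user agent (function names are illustrative, not the site's actual code):

```ts
// Sketch of WebGPU feature detection (illustrative only).
async function detectWebGPU(): Promise<GPUDevice | null> {
  // Check for the API itself instead of checking the browser name.
  if (!("gpu" in navigator)) {
    return null;
  }
  // Even if navigator.gpu exists, an adapter may not be available
  // (e.g. blocklisted driver or GPU access disabled).
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    return null;
  }
  return adapter.requestDevice();
}

// Usage: fall back gracefully instead of assuming a particular browser.
detectWebGPU().then((device) => {
  if (!device) {
    console.warn("WebGPU not available; showing a fallback message");
  }
});
```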
I also recommend reading through the blog post (repo at [3]): https://lights0123.com/blog/2025/01/07/hip-script/ (many things to improve on the Wasmer side... we have some work to do!)
Could PyTorch / ML training with CUDA through the browser perform OK?
You try to run something and, voilà, you need Ampere or Hopper or Lovelace for flash attention.
I'd be interested to know how debugging a real application would work, since WASM is pretty hard to debug and GPU code is pretty hard to debug. I assume WASM GPU is ... very difficult to debug.
"By chaining chipStar¹ (a HIP and NVIDIA® CUDA® to OpenCL compiler), Clspv² (an OpenCL to Vulkan compiler), and Tint³ (among others, a Vulkan shader to WebGPU shader compiler), you can run CUDA code in the browser!"
¹ https://github.com/CHIP-SPV/chipStar/ ² https://github.com/google/clspv/ ³ https://dawn.googlesource.com/dawn/+/refs/heads/main/src/tin...
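To make the last step of that chain concrete, here's a rough sketch of what the final target looks like: a WebGPU compute dispatch in the browser, with a trivial WGSL kernel standing in for what the pipeline would emit from compiled CUDA code. The shader and function here are illustrative, not the project's actual output.

```ts
// Illustrative only: a WebGPU compute dispatch, the kind of thing the
// chipStar -> Clspv -> Tint pipeline ultimately targets.
const wgsl = /* wgsl */ `
  @group(0) @binding(0) var<storage, read_write> data: array<f32>;

  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    data[id.x] = data[id.x] * 2.0;  // stand-in for a real kernel
  }
`;

async function runKernel(input: Float32Array): Promise<Float32Array> {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU unavailable");
  const device = await adapter.requestDevice();

  // Storage buffer the kernel reads and writes.
  const buffer = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
  });
  device.queue.writeBuffer(buffer, 0, input);

  // Separate buffer we can map for readback on the CPU side.
  const readback = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
  });

  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: { module: device.createShaderModule({ code: wgsl }), entryPoint: "main" },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer } }],
  });

  // Record and submit the dispatch, then copy results back.
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(input.length / 64));
  pass.end();
  encoder.copyBufferToBuffer(buffer, 0, readback, 0, input.byteLength);
  device.queue.submit([encoder.finish()]);

  await readback.mapAsync(GPUMapMode.READ);
  return new Float32Array(readback.getMappedRange().slice(0));
}
```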