The finding that naive single-op benchmarks overestimate dispatch cost by ~20x is wild. Curious how much the torch-webgpu backend could close the gap with CUDA if you went aggressive on kernel fusion; the 53% improvement on Vulkan is already significant. Any plans to try WGSL-level custom kernels?
Honestly there is a lot of room for improvement in torch-webgpu's performance. It needs community involvement, but the opportunities are definitely there.