I think WebGPU is a decent wrapper for exposing compute and render in the browser. Not perfect by any means - I've had a few paper cuts working with the API so far - but a lot more discoverable and intuitive than I ever found WebGL and OpenGL.
That's a tiny bit revisionist history. Each new major D3D version (at least before D3D12) also fixes usability warts compared to the previous version with D3D11 probably being the most convenient to use 3D API - while also giving excellent performance.
Metal also definitely has a healthy balance between convenience and low overhead - and more recent Metal versions are an excellent example that a high performance modern 3D API doesn't have to be hard to use, nor require thousands of lines of boilerplate to get a triangle on screen.
OTOH, OpenGL has been on a steady usability downward trend since the end of the 1990s, and Vulkan unfortunately has continued this trend (but may steer in the right direction in the future).
I'm not arguing that DevEx doesn't exist in graphics programming. Just that it's second to dots on screen. I also find webgpu to be a lot nicer in terms of DevEx than WebGL.
Wdyt? Still revisionist, or maybe just a slightly different framing of the same pov?
Amen.
IMHO a new major and breaking D3D version is long overdue. There must be plenty of learnings about which areas were actually worth sacrificing ease-of-use for performance and which weren't.
Or maybe something completely radical/ridiculous and make HLSL the new "D3D API" (with some parts of HLSL code running on the CPU, just enough to prepare CPU side data for upload to the GPU).
I don't imagine them pushing for a DirectX 13, only available on Windows 12 onwards kind of thing, as they have done in the past.
Either way, I suspect we will end up back at software rendering, even though it is technically hardware accelerated.
Metal 4 has moved a lot in the other direction, and now copies a lot of concepts from Vulkan.
https://developer.apple.com/documentation/metal/understandin...
https://developer.apple.com/documentation/metal/resource-syn...
That has been the main pain point of Khronos APIs: it isn't only the extension spaghetti, the first step is always to go fishing for all the puzzle pieces to get a proper development experience.
At least now there is the LunarG SDK, though who knows for how long they will keep sponsoring it. And it isn't applicable to Android, where Google does the minimum: a GitHub repo dump with samples and good luck.
Compare that with Apple Metal frameworks.
Technically true, but practically tone deaf.
WebGPU is both years too late, and just a bit early. Whereas WebGL was OpenGL circa 2005, WebGPU is native graphics circa 2015. It shouldn't need to be said that the bleeding edge new standard for web graphics shouldn't be both 10 years out of date and awful.
Vendors are finally starting to deprecate the old binding model as the byzantine machinery that it is. Bindless resources are an absolute necessity for the modern style of rendering with nanite and raytracing.
Rust's WGPU on native supports some of this, but WebGPU itself doesn't.
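For readers unfamiliar with the term, here's a minimal sketch of what "bindless" means in Vulkan terms (assuming Vulkan 1.2+, where descriptor indexing is core; the array size is an arbitrary illustration): one huge, partially-bound texture table is bound once, and shaders pick textures by index instead of the app rebinding descriptor sets per draw.

```cpp
#include <vulkan/vulkan.h>

// One binding holding a large, sparsely-populated texture array.
// Shaders then index it freely, e.g. in GLSL:
//   texture(textures[nonuniformEXT(materialId)], uv)
VkDescriptorSetLayout makeBindlessLayout(VkDevice device) {
    VkDescriptorBindingFlags flags =
        VK_DESCRIPTOR_BINDING_PARTIALLY_BOUND_BIT |
        VK_DESCRIPTOR_BINDING_UPDATE_AFTER_BIND_BIT;

    VkDescriptorSetLayoutBindingFlagsCreateInfo flagsInfo{
        VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_BINDING_FLAGS_CREATE_INFO};
    flagsInfo.bindingCount  = 1;
    flagsInfo.pBindingFlags = &flags;

    VkDescriptorSetLayoutBinding binding{};
    binding.binding         = 0;
    binding.descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    binding.descriptorCount = 16384;   // "all the textures", bound once
    binding.stageFlags      = VK_SHADER_STAGE_FRAGMENT_BIT;

    VkDescriptorSetLayoutCreateInfo info{
        VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO};
    info.pNext        = &flagsInfo;
    info.flags        = VK_DESCRIPTOR_SET_LAYOUT_CREATE_UPDATE_AFTER_BIND_POOL_BIT;
    info.bindingCount = 1;
    info.pBindings    = &binding;

    VkDescriptorSetLayout layout = VK_NULL_HANDLE;
    vkCreateDescriptorSetLayout(device, &info, nullptr, &layout);
    return layout;
}
```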
It's only intuitive if you don't realize just how huge the gap is between dispatching a vertex shader to render some triangles, and actually producing a lit, shaded and occluded image with PBR, indirect lighting, antialiasing and postfx. Would you like to render high quality lines or points? Sorry, it's not been a priority to make that simple. Better go study up on SDFs and beziers.
Which, tbh, is the impression I get from webgpu efforts. Everyone forgets the drivers have been playing pretend for decades, and very few have actually done the homework. Of those that have, most are too enamored with being a l33t gfx coder to realize how terrible the dev exp is.
I've never implemented PBR or raytracing because my interests haven't gone that way. I don't find SDFs to be a particularly difficult concept to "study up on" either, though. It's about as close to math-as-drawing as I've seen, and doesn't require much more than a couple triangles and a fragment shader. By contrast I've been learning about SVT for a couple months and still haven't quite pieced together a working implementation in WebGPU... though I understand there are extensions specifically in support of virtual tiling that WebGPU could pursue in a future version.
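For what it's worth, the math in question really is compact; here's a sketch of the standard point-to-segment distance that most SDF line renderers boil down to (written as plain C++ rather than shader code, with a toy Vec2 type):

```cpp
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
static Vec2  sub(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }

// Distance from point p to segment a-b. A fragment shader would compare
// this against half the desired line width (with a smoothstep for
// antialiasing) to decide per-pixel coverage of a "thick line".
float sdSegment(Vec2 p, Vec2 a, Vec2 b) {
    Vec2 pa = sub(p, a);
    Vec2 ba = sub(b, a);
    float h = std::clamp(dot(pa, ba) / dot(ba, ba), 0.0f, 1.0f);
    Vec2 d  = sub(pa, Vec2{ba.x * h, ba.y * h});
    return std::sqrt(dot(d, d));
}
```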
Agreed DevEx broadly isn't great when working on graphics. But WebGPU feels like a considerable improvement rather than a step backward.
The problem is that GPU hardware is rapidly changing to enable easier development while still having low level control. With ReBAR for example you can just take a pointer into gigabytes of GPU memory and pump data into it as if it was plain old RAM with barely any performance loss. 100 lines of bullshit suddenly turn into a one line memcpy.
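A hedged sketch of what that looks like through Vulkan (assuming a ReBAR-style memory type that advertises both DEVICE_LOCAL and HOST_VISIBLE; the handles here are placeholders the caller would own):

```cpp
#include <vulkan/vulkan.h>
#include <cstring>

// Allocate from a memory type found with:
//   VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT
// then map it once and write into VRAM as if it were plain old RAM.
void* mapDeviceLocalMemory(VkDevice device, uint32_t memoryTypeIndex,
                           VkDeviceSize size, VkDeviceMemory* outMemory) {
    VkMemoryAllocateInfo alloc{VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO};
    alloc.allocationSize  = size;
    alloc.memoryTypeIndex = memoryTypeIndex;
    vkAllocateMemory(device, &alloc, nullptr, outMemory);

    void* ptr = nullptr;
    vkMapMemory(device, *outMemory, 0, VK_WHOLE_SIZE, 0, &ptr);
    return ptr;   // from here on: memcpy(ptr, data, n) - the one-liner
}
```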
Vulkan is changing to support all this stuff, but the Vulkan API was (a) designed when it didn't exist and is (b) fucking awful. I know that might be a hot take, and I'm still going to use it for serious projects because there's nothing better right now, but the same extensibility that makes it possible for Vulkan to just pivot huge parts of the API to support new stuff also makes it dogshit to use day to day. The code patterns are terrible and it feels like you're constantly compromising on readability at every turn because there are simply zero good options for how to format your code.
WebGPU doesn't have those problems, I quite liked it as an API. But it's based on a snapshot of these other APIs right at the moment before all this work has been done to simplify graphics programming as a whole. And trying to bolt new stuff onto WebGPU in the same way Vulkan is doing is going to end up turning WebGPU into a bloated pile of crap right alongside it.
If you're coming from WebGL, WebGPU is going to feel like an upgrade (or at least it did for me). But now that I've seen a taste of the future I'm pretty sure WebGPU is dead on arrival, it just had horrendous timing, took too long to develop, and now it's backed into a corner. And in the same vein, I don't think extending Vulkan is the way forward, it feels like a pretty big shift is happening right now and IMO that really should involve overhauls at the software/library level too. I don't have experience with DX12 or Metal but I wouldn't be surprised if all 3 go bye bye soon and get replaced with something new that is way simpler to develop with and reflects the current state of hardware and driver capabilities.
You get to design a good developer experience, while the plugin system takes care of the optimal API and configuration for each platform.
And the new shading language is so annoying to write it basically has to be generated. Weird shader compilation stuff was already one of the biggest headaches in graphics. Feels like it'll be decades before it'll all be stable.
Hence why NVIDIA's Slang offer was welcomed with open arms.
I think this is a tad unfair. You're basically describing a semi-robust renderer at that point. IMO to make implementing such a renderer truly "intuitive" (I don't know what this word means to you, so I'm taking it to mean offloading these features onto the API itself) would require railroading the developer some, which appears to go against the design of modern graphics APIs.
I think Unity/Unreal/Godot/Bevy make more sense if you're trying to quickly iterate such features. But even then, you may have to hand write the shader code yourself.
> Bindless resources are an absolute necessity for the modern style of rendering with nanite and raytracing.
Yeah, for real. Looking at the November 2024 post "What's next for WebGPU" and HN comments, bindless is pretty high up there! There's a high level field survey & very basic proposal (in the hackmd link), and wgpu seems to be filling in the many gaps and seemingly quite far along in implementation. Not seeing any signs yet that the broader WebGPU implementors/spec folks are involved or following along, but at least wgpu is very cross platform & well regarded.
https://developer.chrome.com/blog/next-for-webgpu
https://news.ycombinator.com/item?id=42209272
https://hackmd.io/PCwnjLyVSqmLfTRSqH0viA
https://hackmd.io/@cwfitzgerald/wgpu-bindless
https://github.com/gfx-rs/wgpu/issues/3637
https://github.com/gpuweb/gpuweb/issues/380
> Would you like to render high quality lines or points? Sorry, it's not been a priority to make that simple. Better go study up on SDFs and beziers.
I realize lines and font rendering are insanely complex fields, and that OpenGL offering at least lines while Vulkan doesn't feels like a slap in the face. The work being done by groups like https://linebender.org/ is intense. Overall though that intensity makes me question the logic of trying to include it, and makes me wonder whether fighting to specify something we clearly don't have full mastery over makes sense: even the very best folks are still improving the craft. We could specify an API without specifying an exact implementation, without conformance tests, perhaps, but that feels like a different risk. Maybe having to reach for a library that does the work reflects where we are, and causes the iteration & development we sort of need?
> actually producing a lit, shaded and occlusioned image with PBR, indirect lighting, antialiasing and postfx
I admit to envying the ambition to make this simple, to have such deep knowledge as Steve and to think such hard things possible.
I really really am so thankful and hope funding can continue for the incredibly hard work of developing the WebGPU specs & implementations, and wgpu. As @animats chimes in on the HN submission, bindless in particular is quite a crisis: it will either be solved and enable the web to go forward, or remain a lasting barrier to the web's growth. Really seems to be the tension of Steve's opening position:
> WebGPU is both years too late, and just a bit early. Whereas WebGL was OpenGL circa 2005, WebGPU is native graphics circa 2015.
WebGPU does have line (and point) primitives since they are a direct GPU feature.
It just doesn't bother to 'emulate' lines or points that are wider than 1 pixel, since this is not commonly supported in modern native 3D APIs. Drawing thick lines and points is better done by a high level vector drawing API.
As for true portability of those low-level APIs, you've basically got Apple to blame (and game console manufacturers, but I don't think anyone expected them to cooperate).
Yeah, that's the thing that really irks me. WebGPU could have been just a light wrapper over Vulkan like WebGL is (or was, it's complicated now) for OpenGL. But Apple has been on a dumb war with Khronos for the last decade, which has made everything more difficult.
So now we have n+1 low level standards for GPU programming not because we needed them, but because 1 major player is obstinate.
Being simpler is an advantage. It means that 3rd party GPU drivers can more simply implement the interface correctly.
However, as discussed in other comments, that doesn't change the driver quality mess of the platform.
How is Apple solely to blame when there are multiple parties involved? They went to Khronos to turn AMD's Mantle into a true unified next gen API. Khronos and NVIDIA shot them down to further AZDO OpenGL. Therefore Metal came to be, and then DX12 followed, and then Vulkan once Khronos realized they had to move that way.
But even if you exclude Metal, what about Microsoft and D3D? Also similarly non-portable. Yet it’s the primary API in use for non-console graphics. You rarely see people complaining about the portability of DX for some reason…
And then in an extremely distant last place is Vulkan. Very few graphics apps actually use Vulkan directly.
Have you tried writing any of the graphics APIs?
Basically, people are mad that you need to buy Apple hardware, use Apple software (macOS), Apple tooling (Xcode), just to develop graphics code for iOS and macOS. At least you don't also need to use Apple language (Swift) to use Metal, though I don't have any first-hand experience with their C++ bindings so I can't judge if it's a painful experience or not.
It's definitely more convenient than Mac because they are provided by the driver, so you can almost always guarantee they exist, but Microsoft themselves do not provide them. On Mac, for Vulkan you can use MoltenVK, which is also third party, and bundle it in the app, though that's definitely less convenient and less fully featured.
Regarding Xbox, that's a bit of an odd point because you might as well include iOS as a platform at that point which is a bigger gaming platform than Xbox. At least iOS uses the same Metal as Mac, while Xbox does vary in some ways from Windows. Granted, iOS gaming is much more casual oriented but there are some AAA games as well.
Regarding Swift, Metal has always been ObjC first, not Swift first. The C++ bindings are just for convenience, but you've never been bound to Swift even before they existed. Regarding Xcode, that's only to get the toolchain or if you need instrumentation. You don't need to use Xcode to actually develop things; this is no more a burden than needing Visual Studio on Windows.
Operating systems do not implement graphics APIs for GPUs. These are created by the GPU manufacturer themselves (AMD, Nvidia, etc.). This includes DirectX drivers, both user-space and kernel-space drivers.
Graphics APIs like DirectX and Vulkan are better thought of as (1) a formal specification for GPU behavior, combined with (2) a small runtime. The actual DX/VK drivers are thin shims around a GPU manufacturer's own driver API.
For AMD, the DirectX / Vulkan / OpenGL graphics drivers share a common layer called "PAL" which AMD has open sourced: <https://github.com/GPUOpen-Drivers/pal>
Apple really isn't that different here: they leave the graphics manufacturers to implement their own drivers. Unfortunately, Apple is the sole graphics manufacturer for their OS, and they've chosen to only implement Metal drivers for their GPUs (and a legacy OpenGL driver too).
It's not that big of a deal though, because Vulkan is supported on macOS through the MoltenVK project, which wraps the Vulkan API around the Metal API. And projects like vkd3d wrap the DirectX 12 API around the Vulkan API, which is then wrapped around the Metal API. This is how you're able to run Windows games on Mac via the Game Porting Toolkit or CrossOver, btw.
And you don't install Khronos stuff on console devkits.
Although as usual, NVidia tends to be the exception.
FWIW, the OpenGL 'driver' on macOS and iOS has just been a layer on top of Metal for many years now - e.g. the same thing as ANGLE, DXVK or MoltenVK, just maintained directly by Apple.
The important difference of both D3D and Metal compared to Vulkan is that Vulkan lacks steering and vision (i.e. it's not a technical problem at all, but a cultural/organizational one).
Vulkan development looks like GPU vendors just come up with random Vulkan extensions which then from time to time are blessed by Khronos and elevated into the core API.
This "throwing shit at the wall and see what sticks" approach was already the biggest problem in GL (and my naive younger self thought that this obvious problem would be fixed with Vulkan - alas it turned out that this was the one thing that Khronos didn't change). This is really not how a 3D API should be designed.
Mesa has entered the chat. Granted that with the amount of functionality that's stuffed into opaque firmware blobs these days you can make a reasonable argument that a nontrivial portion of any API is ultimately implemented by the firmware authors.
It has a pluggable driver system, leftover from the Windows NT/OpenGL 1.1 days, called ICD, that driver vendors use to add their OpenGL and Vulkan drivers.
https://learn.microsoft.com/en-us/windows-hardware/drivers/d...
In some subsystems like UWP, or Windows on ARM, ICDs aren't supported, and OpenGL/Vulkan have to be mapped on top of DirectX.
https://devblogs.microsoft.com/directx/announcing-the-opencl...
GLon12 is still used for OpenGL, however.
> Along with a kernel-mode display driver, graphics hardware vendors must also write a user-mode display driver (UMD) for their display adapters. The UMD is a dynamic-link library (DLL) that the Direct3D runtime loads.
https://learn.microsoft.com/en-us/windows-hardware/drivers/d...
You say this requires reinvention, but really the end work is "translate OpenGL to something the hardware can actually understand" in both scenarios. The difference with the OpenGL era is that you did not have the option to avoid using the wrapper, not that no wrapper existed. Targeting the best of each possible hardware type individually, without baking in assumptions about the hardware, has proven to not be very practical - but that only matters if you're building an "easy translation layer" rather than using one, or trying to target specific types of hardware very directly (in which case you don't want something super generic or simple, you want something which exposes the hardware as directly as is reasonable for that hardware type).
Apart from that, D3D11 and Metalv1 are probably the sweet spot between ease-of-use and performance (especially D3D11's performance is hard to beat even in Vulkan and D3D12).
If only the windows team could get out of a tailspin because almost everything else MS produces on the Windows side gets worse and worse every year.
Nintendo, after graduating to devkits where C and C++ could be used, like the N64, had OpenGL-inspired APIs, which isn't really the same. Although there was some GLSL-like shader support.
They only started supporting Khronos APIs with the Switch, and even then, if you want the full power of the Switch, NVN is the way to go.
Playstation always had proprietary APIs; they did a small stint with OpenGL ES 1.0 + Cg, which had very little to no uptake among developers, and they dropped it from the devkits.
Sega only had proprietary APIs, and there was a small collaboration with Microsoft for DirectX, which only a few studios took advantage of.
XBox naturally has always been about DirectX.
Go watch the GDC Vault programming track to see how many developers you will find complaining about writing middleware for their game engines, if any at all, versus how many talks there are about taking advantage of every little low level detail of the hardware architecture.
OpenGL didn't match the hardware well except on SGI hardware or carryover descendants like 3dfx.
Vulkan works approximately everywhere (except Apple, but that's entirely self inflicted and there's a compatibility layer so it's NotMyProblem). OpenGL is more portable than ever thanks to software implementations that yield far more consistent behavior between platforms than was available historically. WebGPU is actually fairly nice to work with, has a well maintained native implementation for two major systems languages, and both of those implementations have (AFAIK) fully functional WASM support. If it happens to gain a native Mesa implementation once everything stabilizes that will merely be icing on the cake. OpenCL has multiple competing implementations, including PoCL which is an adapter providing decently broad support on top of other backends.
And if you don't want to fiddle with native APIs (which no offense intended but you very clearly sound like you don't) there's quite a few choices available to abstract all the low level details away with cross platform cross API middleware which are FOSS and actively maintained.
There are no adults, no leaders with an eye on things leading us away from further mistakes, and we keep going deeper.
But even when it existed in the form of OpenGL, or now WebGPU, people complain about the performance overhead. So you end up back here.
And there are so many pointless things that are no longer relevant, or should at best be optional so that devs can get things done before optimizing.
Yes they’re abstractions, because nobody really wants anyone to be writing directly against the ISA either since the vendors need the ability to change things over time.
Again, to my point, it’s about balancing portability and power/perf.
Personally, I'll sit this generation out and wait for whatever comes after. I ended up switching to doing software rasterization in CUDA because that's easier than drawing a triangle in Vulkan. CUDA has shown me how insane Vulkan is. Like, why even have descriptor sets, bindings, etc.? In CUDA you simply call a kernel and provide the data (e.g. vertex or storage buffer) as a pointer argument.
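For comparison, a sketch of the whole flow the parent describes via the CUDA driver API (assuming a CUfunction was already loaded from a module; error checking omitted):

```cpp
#include <cuda.h>
#include <cstddef>

// Allocate GPU memory, copy vertex data in, then launch a kernel that
// receives the buffer as a plain pointer - no descriptor sets, no bindings.
void drawWithKernel(CUfunction kernel, const float* vertices, size_t count) {
    size_t bytes = count * sizeof(float);

    CUdeviceptr buf;
    cuMemAlloc(&buf, bytes);
    cuMemcpyHtoD(buf, vertices, bytes);

    void* args[] = { &buf, &count };   // kernel signature: (float*, size_t)
    cuLaunchKernel(kernel,
                   (unsigned)((count + 255) / 256), 1, 1,   // grid dims
                   256, 1, 1,                               // block dims
                   0, nullptr, args, nullptr);              // shmem, stream
}
```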
If I am playing with something on native applications, definitely an engine, with rendering plugins.
OTOY does their rendering with CUDA by the way.
So who is the graphics hardware built for? Again, not the consumer and not the game developer.
It is in the interests of these hardware manufacturers to make performance as easy as possible, but none of them do. They write their own drivers which implement DirectX 12 or Vulkan or Metal or OpenGL.
So now as a game developer, if I want my game to perform on all platforms, I have to write my shaders natively for Metal, Vulkan, and DirectX 12, at least. Cross-compilers exist but they don't do their job as well as a human can, so they're simply not options for some.
All of this is harder for no good reason. And no one cares. No one wants to see things improve. They just make excuses for the hardware manufacturers and kill conversations which explain how things currently suck for a lot of people.
They are a specialized API intended for tool writers.
I'll use it for web since there is no alternative, but for desktop I'll stick with an OpenGL+CUDA interop framework until a sane, modern graphics API shows up. I.e., a graphics API that gets rid of render passes, static pipelines, mandatory explicit syncing, bindings and descriptor sets (simply use buffers and pointers), and all the other nonsense.
If allocating and populating a buffer takes more effort than a simple cuMemAlloc and cuMemcpy, and calling a shader with arguments takes more than simply passing the shader pointers to the data, then I'm out.
They'd do well to follow the D3D model (major breaking versions, while guaranteeing backward compatibility for older versions) - e.g. WebGPU2, WebGPU3, WebGPU4 each being a mostly new API without having to compromise for backward compatibility.
I think that's the price to pay for trying to cover a wide range of hardware. You can't just make all those shitty Android phones disappear. At least for each WebGPU limit, there's usually a Github ticket which explains why exactly this limit exists.
Both WebGL2 and WebGPU are probably the most 'watertight' specced and tested 3D API ever built, and especially WebGPU has gone to great lengths to eliminate UB present in native APIs (even at the cost of usability).
We only need to open chrome://gpu and see how many workarounds are implemented.
Those that happen to own a device where workarounds are yet to be implemented, have quite interesting experiences, depending on the root cause.
And it is an ever-growing list across Chrome releases.
Let's see how it works out there with Firefox and Safari, the latter still not fully WebGL 2.0 compliant.
So much for the watertightness.
What I really would like to see is browser vendors finally providing WebGL and WebGPU debugging tools.
I think a decade has been more than enough for that.
Then again, no one is paying for browsers, so I guess I should not complain.
My company is working to bring Unreal to the browser, and we've built out a custom WebGPU RHI for Unreal Engine 5.
Here are demos of the tech in action, for anyone interested:
(Will only work on Chromium-based browsers on desktop, and on some Android phones)
Cropout: https://play-dev.simplystream.com/?token=aa91857c-ab14-4c24-...
Car configurator: https://garage.cjponyparts.com/
This post is about WebGPU in Firefox. Do you plan to test and/or release a Firefox-compatible version?
Cropout: After being stuck at 0% for a long while and 1200 network requests, it loads to a menu with a black background and will start a game but only UI elements show up. Seems to have a lot of errors parsing shaders, as well as a few other miscellaneous errors.
Car configurator: Several errors while at 0% (never loads), the first among them being `[223402304]: MessageBox type 0 Caption Message Text Game files required to initialize the global shader and cooked content are most likely missing. Refer to Engine log for details.`
I would concur with others that you should at least test this in Firefox before advertising it here.
To be good on the web requires designing your game to start immediately with a minimal amount downloaded. Maybe stream some stuff in the background, but be playable immediately. AFAICT neither Unreal nor Unity do that by default. You can maybe coerce them into it, but most devs don't. As such they get these bad experiences when they try to put their creation on the web.
If it does crash, you'll be able to see why. I'd be interested in seeing any bug reports if you do find some; we're always squashing bugs over here!
Are we supposed to try them out on the same kind of high end gamer desktop setup requirements for the native version?
I just installed the Mac nightly from https://www.mozilla.org/en-US/firefox/channel/desktop/ and now this demo works: https://huggingface.co/spaces/reach-vb/github-issue-generato...
It runs the SmolLM2 model compiled to WebAssembly for structured data extraction. I previously thought that demo only worked in Chrome.
(If I try it in regular Firefox for Mac I get "Error: WebGPU is not supported in your current environment, but it is necessary to run the WebLLM engine.")
> Although Firefox 141 enables WebGPU only on Windows, we plan to ship WebGPU on Mac and Linux in the coming months, and finally on Android.
Sounds good. I'm not really thrilled about it as of now. Whatever the reason, it's not been supported on Linux for any browser as of yet. My guess is it's too hard to expose without creating terrible attack surfaces.
This seems to support my view that web standards are too overgrown for how users actually use the web. It's obviously too late to do anything about it now but all the issues of monoculture and funding we are worried about today stem from the complexity of making a web browser due to decisions tracing all the way back to the days of Netscape.
However it kind of proves the point on how relevant browser vendors see GNU/Linux for this kind of workloads.
Gaussian splatting training and rendering using webgpu
1. https://boat-demo.cds.unity3d.com/
It also works without WebGPU, just very slowly.
I was feeling a bit dirty playing around with WebGPU with only Chrome in the game thus far; even Safari has enabled their preview quite recently.
It is available on Android/Linux, WebOS/Linux and ChromeOS/Linux.
Which tells where they see the ..../Linux value for WebGPU.
As I also depend on the wgpu-native bindings it's slow for updates to reach. Like we just got to v25 last week and v26 dropped a couple days prior to that.
- Visualize other scan data such as gaussian splat data sets, or triangle meshes from photogrammetry
- Things like google earth, Cesium, or other 3D globe viewers.
It's a pretty big thing in geospatial sciences and industry.
For gaussian splatting, WebGPU is great since it allows implementing sorting via compute shaders. WebGL-based implementations sort on the CPU, which means "correct" front-to-back blending lags behind for a few frames.
But yeah, when you ask like that, it would have been much better if they had simply added compute shaders to WebGL, because other than that there really is no point in WebGPU.
https://registry.khronos.org/webgl/specs/latest/2.0-compute/
https://github.com/9ballsyndrome/WebGL_Compute_shader/issues...
While I would have designed a few things differently in WebGPU (especially around the binding model), it's still a much better API than WebGL2 from every angle.
The limited feature set of WebGPU is mostly to blame on Vulkan 1.0 drivers on Android devices I guess, but there's no realistic way to design a web 3D API and ignore shitty Android phones unfortunately.
The one thing that WebGPU is doing better is that it does implicit syncing by default. The problem is, it provides no options for explicit syncing.
I mainly software-rasterize everything in CUDA nowadays, which makes the complexity of graphics APIs appear insane. CUDA allows you to get things done simply and easily, but it still has all the functionality to make things fast and powerful. The important part is that the latter is optional, so you can get things done quickly, and still make them fast.
In CUDA, allocating a buffer and filling it with data is a simple cuMemAlloc and cuMemcpy. When calling a shader/kernel, I don't need bindings and descriptors, I simply pass a pointer to the data. Why would I need that anyway? The shader/kernel knows all about the data, the host doesn't need to know.
AFAIK Vulkan only eliminated pre-baked render pass objects (which were indeed pointless), and now simply copied Metal's design of transient render passes, e.g. there's still 'render pass boundaries' between vkCmdBeginRendering() and vkCmdEndRendering(), and the VkRenderingInfo struct that's passed into the vkCmdBeginRendering() function (https://registry.khronos.org/vulkan/specs/latest/man/html/Vk...) is equivalent to Metal's MTLRenderPassDescriptor (https://developer.apple.com/documentation/metal/mtlrenderpas...).
E.g. even modern Vulkan still has render passes, they just didn't want to call those new functions 'Begin/EndRenderPass' for some reason ;) AFAIK the idea of render pass boundaries is quite essential for tiler GPUs.
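To make the comparison concrete, here's a sketch of such a transient render pass with Vulkan 1.3 dynamic rendering (view, extent and clear color are placeholders); it is essentially an MTLRenderPassDescriptor filled out in C:

```cpp
#include <vulkan/vulkan.h>

void recordPass(VkCommandBuffer cmd, VkImageView colorView,
                uint32_t width, uint32_t height) {
    VkRenderingAttachmentInfo color{VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO};
    color.imageView   = colorView;
    color.imageLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    color.loadOp      = VK_ATTACHMENT_LOAD_OP_CLEAR;
    color.storeOp     = VK_ATTACHMENT_STORE_OP_STORE;
    color.clearValue.color = {{0.0f, 0.0f, 0.0f, 1.0f}};

    VkRenderingInfo info{VK_STRUCTURE_TYPE_RENDERING_INFO};
    info.renderArea           = {{0, 0}, {width, height}};
    info.layerCount           = 1;
    info.colorAttachmentCount = 1;
    info.pColorAttachments    = &color;

    // The 'render pass boundary' is now just a begin/end pair on the
    // command buffer - no pre-baked VkRenderPass/VkFramebuffer objects.
    vkCmdBeginRendering(cmd, &info);
    // ... draw calls ...
    vkCmdEndRendering(cmd);
}
```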
WebGPU pretty much tries to copy Metal's render pass approach as much as possible (e.g. it doesn't have pre-baked pass objects like Vulkan 1.0).
> The one thing that WebGPU is doing better is that it does implicit syncing by default.
AFAIK also mostly thanks to the 'transient render pass model'.
> Why would I need that anyway, the shader/kernel knows all about the data, the host doesnt need to know.
Because old GPUs are a thing and those usually don't have such a flexible hardware design to make rasterizing (or even vertex pulling) in compute shaders performant enough to compete with the traditional render pipeline.
> Similarly static binding groups are entirely pointless
I agree, but AFAIK Vulkan's 1.0 descriptor model is mostly to blame for the inflexible BindGroups design.
> but that's also made needlessly cumbersome in WebGPU due to the requirement to use staging buffers
Most modern 3D APIs also switched to staging buffers though, and I guess there's not much choice if you don't have unified memory.
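For reference, a sketch of that staging-buffer dance through the native webgpu.h C API (as shipped by Dawn and wgpu-native; names are placeholders):

```cpp
#include <webgpu/webgpu.h>
#include <cstring>

// Upload via an intermediate staging buffer: create it pre-mapped, memcpy,
// unmap, then record a GPU-side copy into the device-local destination.
void upload(WGPUDevice device, WGPUCommandEncoder encoder,
            WGPUBuffer deviceBuffer, const void* data, uint64_t size) {
    WGPUBufferDescriptor desc = {};
    desc.size             = size;
    desc.usage            = WGPUBufferUsage_MapWrite | WGPUBufferUsage_CopySrc;
    desc.mappedAtCreation = true;   // skips the async map for the first write

    WGPUBuffer staging = wgpuDeviceCreateBuffer(device, &desc);
    memcpy(wgpuBufferGetMappedRange(staging, 0, size), data, size);
    wgpuBufferUnmap(staging);

    wgpuCommandEncoderCopyBufferToBuffer(encoder, staging, 0,
                                         deviceBuffer, 0, size);
}
```

(The queue-level wgpuQueueWriteBuffer convenience call hides this same dance internally.)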
I've been told by a driver dev of a tiler GPU that they are, in fact, not essential. They pick that info up by themselves by analyzing the command buffer.
Well, I wouldn't know since I switched to using CUDA as a graphics API. It's mostly nonsense-free, faster than the hardware pipeline for points, and about as fast for splats. Seeing how Nanite also software-rasterizes as a performance improvement, CUDA may even be great for triangles. I've only implemented a rudimentary triangle rasterizer so far, which can draw 10 million small textured triangles per millisecond. Still working on the larger ones, but that's low-priority since I focus on point clouds.
In any case, I won't touch graphics APIs anymore until they make a clean break to remove the legacy nonsense. Allocating buffers should be a single line, providing data to shaders should be as simple as passing pointers, etc..
> Are we sure sites are not just going to use it to mine bitcoins using their users' hardware?
Some almost certainly will but like all similar issues the game of cat and mouse will continue.
https://en.wikipedia.org/wiki/Infinity_Blade
Game demo, https://www.youtube.com/watch?v=_w2CXudqc6c
The only thing I like in Web 3D APIs, is that outside middleware engines, they are the only mainstream 3D APIs designed with managed languages in mind, instead of after the fact bindings.
Still waiting for something like RenderDoc on the respective browser developer tools, we never got anything better than SpectorJS.
It isn't even printf debugging, rather pixel colour debugging.
Hence one of the reasons why it never took off as Flash replacement, and indies rather focused on native mobile games.
It is hard to sell an experience, when there is zero control over the hardware acceleration.
And the SWF format had insane compatibility, literally unmatched by any other technology imo, we didn't even think about OS's, it really was "write once run anywhere" (pre-smartphone ofc). On the web, even basic CSS doesn't work the same from OS to OS, and WebGL apps still crash on 10% of devices randomly. It'll probably be 5 years before WebGPU is even remotely stable.
Not even to mention the fully integrated editor environment.
Or I guess maybe you're saying someone should build something like Flash targeting WebGPU? Probably the closest there is to that right now is Figma? But it feels weak too imo, and was already possible with WebGL. Maybe Unreal Engine is the bet.
Consequently much of the JS 3D community has become obsessed with gaussian splatting, and AR more generally.
[1] And I would extend this to what's going on here: people prefer complaining about how missing features in APIs prevent their genius idea from being possible, when in truth there's simply no demand from users for this stuff at all. You could absolutely have done web Minecraft years ago, and it's very revealing such a thing is not wildly popular. I personally wasted too long on WebGL ( https://www.luduxia.com/ ), and what I learned is the moment it all works people just assume it was nothing and move on.
Minecraft started as a java applet in the browser, that's part of the reason it was able to gain such a rapid following.
Driver and OS blacklisting means that game developers aren't aware of the user experience, nor can they control it, as in native games or server-side rendering with streaming.
No proper debugging tools other than printf/pixel debugging.
The amount of loading screens that would be needed, given memory constraints of browser sessions.
This alone means there is hardly that much ROI for 3D webgames, and most uses end up being in ecommerce, or Google Maps kind of applications.
There is a tracking issue[1], although I am not sure how much of that makes it to the browser.
This might still be a semi-legitimate thing, i.e. maybe they kept around a WebGL implementation for a while as a fallback but moved the main implementation to WebGPU and don't want to maintain the fallback. It certainly fits well into their strategy of making sure that the web really only works properly with Chrome.