> The GPU API in SDL3, by comparison, has chosen the only sane option and just asks you to pass whatever the native shader format is.
I get why they don't do this, to be fair. That would mean they would need a proper parser for half a dozen shader models, or at least a promise to build and maintain the others soon.
It avoids this entirely: if you're on a system whose GPU driver only speaks DXBC and you hand it SPIR-V, that's simply an error, and that's what SDL GPU does. The SDL team also conveniently made a small library that can cross-compile SPIR-V to the other major bytecode formats, which you can integrate into your build pipeline.
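As a rough sketch of what that looks like in practice (function and struct names as I recall them from the SDL3 GPU headers, and the blob parameters are hypothetical stand-ins for whatever your offline cross-compile step produces):

    #include <SDL3/SDL.h>

    /* Sketch: create a fragment shader from whichever pre-compiled blob the
     * backend actually accepts. The blob arguments are hypothetical; in
     * practice they come out of your offline cross-compile step. */
    static SDL_GPUShader *create_fragment_shader(SDL_GPUDevice *device,
                                                 const Uint8 *spirv, size_t spirv_len,
                                                 const Uint8 *dxil,  size_t dxil_len)
    {
        SDL_GPUShaderCreateInfo info = {0};
        SDL_GPUShaderFormat supported = SDL_GetGPUShaderFormats(device);

        if (supported & SDL_GPU_SHADERFORMAT_SPIRV) {
            info.code = spirv;  info.code_size = spirv_len;
            info.format = SDL_GPU_SHADERFORMAT_SPIRV;
        } else if (supported & SDL_GPU_SHADERFORMAT_DXIL) {
            info.code = dxil;   info.code_size = dxil_len;
            info.format = SDL_GPU_SHADERFORMAT_DXIL;
        } else {
            return NULL;  /* we didn't ship a format this driver speaks */
        }
        info.entrypoint = "main";
        info.stage = SDL_GPU_SHADERSTAGE_FRAGMENT;

        /* SDL does no translation here: an unsupported format is simply an error. */
        return SDL_CreateGPUShader(device, &info);
    }

The point is that SDL only picks between blobs you already shipped; nothing gets translated at runtime.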
So WebGL and WebGPU filter and check everything that passes between the webpage and the real hardware.
But admittedly this is not the only major vector for fingerprinting. I would also say that User-Agent shouldn't be a header but an autofillable form input, and that cookies should be a transparently manageable tab in the address bar (and should be renamed to something more comprehensible to the average person, like "tokens" or "tickets").
You end up with a strange situation where a company like Apple doesn't want to support OpenGL or provide a translation layer in their OS, but they effectively end up doing so in their browser anyway.
But the downside of GLSL, I think, is that you make the web less "open", because GLSL (or whatever SL) isn't immediately transparent to the user. It's like how we usually expect to open up a webpage and inspect the JavaScript (which by convention is typically not minified), whereas the introduction of WASM will require a decompiler to do the same.
The web so far has been a kind of strange bastion of freedom, with adblockers and other types of plugins able to easily modify webpages. In the future this will be more difficult with web apps, as it would amount to decompiling and patching a portable executable (Flutter, etc.).
Yes, it's true! https://devblogs.microsoft.com/directx/directx-adopting-spir...
They are also slowly but surely porting their DXC compiler (forked from Clang 3.6 I think) to upstream Clang.
Given that Microsoft has also thrown in with SPIR-V and Apple still isn't shipping WebGPU, the next version of WebGPU should tell Apple to fuck off, switch to SPIR-V, and pick up Windows, Xbox, and Linux at a stroke.
Tell Apple to fuck off and roll it out--designers will flock to Teh Shiny(tm). When enough designers can't run their glitzy web thing, Apple will cave.
It's also not like advancement has been a priority for tech this decade so far.
To the point that most studios would rather use something like streaming, where at least they enjoy the convenience of tooling like RenderDoc, PIX, Instruments, Nsight, ...
I think the major advantage of WebGPU over WebGL2/OpenGL ES 3 is that you can write GPGPU shaders much more easily, compared to OpenGL's Transform Feedback system, which is very clunky. But this comes at the cost of compatibility for the time being.
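For a sense of how clunky that is, here's roughly what the transform-feedback route looks like in GL ES 3 / WebGL2-style C; the particle-update program and varying names are made up for illustration. In WebGPU the same job is a compute shader plus a single dispatch:

    #include <GLES3/gl3.h>

    /* Sketch: run a vertex shader purely for its side effects, GL ES 3 style.
     * 'program' is assumed to be a compiled-but-not-yet-linked particle update shader. */
    static void update_particles(GLuint program, GLuint src_vao,
                                 GLuint dst_buffer, GLsizei particle_count)
    {
        /* The outputs to capture must be declared *before* linking. */
        const char *varyings[] = { "out_position", "out_velocity" };
        glTransformFeedbackVaryings(program, 2, varyings, GL_INTERLEAVED_ATTRIBS);
        glLinkProgram(program);
        glUseProgram(program);

        /* Redirect vertex shader outputs into a buffer and skip rasterization. */
        glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, dst_buffer);
        glEnable(GL_RASTERIZER_DISCARD);

        glBindVertexArray(src_vao);
        glBeginTransformFeedback(GL_POINTS);
        glDrawArrays(GL_POINTS, 0, particle_count);
        glEndTransformFeedback();

        glDisable(GL_RASTERIZER_DISCARD);
    }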
But in the Rust ecosystem at least, WebGPU has taken on the role of OpenGL ES, with libraries like wgpu becoming dominant.
All around change for the sake of change.
More like change for the sake of politics: Apple didn't want to use any Khronos IP, so the WebGPU committee had to work backwards to justify inventing something new from scratch, despite the feedback from potential users being overwhelmingly against doing that.
Then after sending the spec on a multi-year sidequest to develop a shader language from scratch, Apple still hasn't actually shipped WebGPU in Safari, despite Google managing to ship it across multiple platforms over a year ago. Apple only needs to support Metal.
Do you mean to allege "[the Apple delegates to] the WebGPU committee"? Because the committee as a whole has a ton of public minutes that show how strident the opposition to this was. (Probably filed under "This is not a place of honor" :)) I don't even want to re-read what I said at the time. No one involved, literally no one, is happy about that chapter, believe me. We are happy to be shipping something, though.
This also sadly means that most tools to help navigate these are probably trapped in studio codebases. I remember the promises around Vulkan 1.0 that you could just wait until others made some boilerplate abstraction, so people could learn graphics programming before diving deep into every nitty-gritty detail. I haven't looked extensively for that, but nothing came up on my radar while working through learning Vulkan.
DX12/Vulkan means you do 2000 lines of boilerplate to get anywhere.
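For a taste of that boilerplate, here's a sketch of just the very first step in Vulkan, creating the instance, before you even have a physical device, queue, swapchain, or pipeline:

    #include <vulkan/vulkan.h>

    /* Sketch: the very first step of a Vulkan app; device selection, queues,
     * swapchain, render passes, and pipelines all still lie ahead. */
    static VkInstance create_instance(void)
    {
        VkApplicationInfo app = {
            .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
            .pApplicationName = "demo",          /* hypothetical name */
            .apiVersion = VK_API_VERSION_1_0,
        };
        VkInstanceCreateInfo ci = {
            .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
            .pApplicationInfo = &app,
            /* window-system extensions and validation layers would go here */
        };
        VkInstance instance = VK_NULL_HANDLE;
        if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS) {
            return VK_NULL_HANDLE;
        }
        return instance;
    }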
WebGPU is actually a nice step back towards OpenGL. You can get off the ground much faster, while still being modern in the way the APIs work.
It's not true anymore, because you have all sorts of secondary rendering (e.g. shadow maps or pre-passes) as well as temporal accumulation, and these all need their own unique shaders. With meshlets and/or Nanite, culling becomes a cross-object issue. With deferred rendering, separate materials require careful setup.
So now the idea that a dev can just bring their own shaders to plug into an existing pipeline kind of falls apart. You need a whole layer of infrastructure on top, be it node graphs, shader closures, etc. And dispatch glue to go along with it.
This is all true even with WebGPU where you don't have to deal with synchronization and mutexes. Just a shit show all around tbh. Rendering APIs have not kept up with rendering techniques. The driver devs just threw up their hands and said "look, it's a nightmare to keep up the facade of old-school GL, so why don't you do it instead".
This has not been true since deferred shading became popular around 2008. Shadow maps were around much earlier than that even.
There's a reason the 1:1 draw:object API has fallen out of popularity - it doesn't scale well, be it CPU overhead, lighting, culling and geometry processing, etc.
That said, you of course still can do this if you want to. Draw calls and vertex buffers haven't gone away by any means.
> So now the idea that a dev can just bring their own shaders to plug into an existing pipeline kind of falls apart. You need a whole layer of infrastructure on top, be it node graphs, shader closures, etc. And dispatch glue to go along with it.
That's the job of rendering engines, not graphics APIs. If you want to work at that layer, then you use a rendering/game engine that provides the tooling for technical artists. If you _are_ the rendering/game engine, then you're thankful for the increased level of control modern graphics APIs give you, because it lets you build better-looking, higher-performing (more stuff is possible), and more flexible tools for your tech artists.
> This is all true even with WebGPU where you don't have to deal with synchronization and mutexes. Just a shit show all around tbh. Rendering APIs have not kept up with rendering techniques. The driver devs just threw up their hands and said "look, it's a nightmare to keep up the facade of old-school GL, so why don't you do it instead".
Users of the drivers got fed up with them being buggy, slow, and limited. The industry's response was to move as much code as possible out of the driver and into user space, exposing more control and low-level detail to applications. That way, you would never be bottlenecked by the driver, whether by performance or by bugs. The industry has realized time and time again that hardware companies are often bad at software, and that it's better to let third parties handle that aspect.
The real failure of the graphics industry, imo, was Vulkan 1.0 trying to cater to old mobile devices and modern desktop devices simultaneously, and, much worse, never starting a large community project to communally develop a higher-level graphics API until WebGPU (which itself is underfunded). Even then, its higher-level nature is largely a byproduct of wanting to enforce safety on untrusted web apps.
But yes, even WebGPU is still more complicated than OpenGL 2. If you find graphics APIs too much work, you're not their target audience and you should be using a higher level API.
That's a pretty sad state of affairs given the "audience" is shrinking by the day. And then later those graphics programmers leave or get laid off from Unity/Epic/an AAA studio with a custom engine, and those same studios wonder why they can't find any DX12/Vulkan engineers to their satisfaction.
For the lifeblood of the industry, tools need to also be learnable by hobbyists. At least, if you don't want to spend 6-12 months training your graphics programmers yourself. The courses I peeked at at my Alma mater (when Vulkan was still brand new) are still using OpenGL 3 to teach, so it doesn't sound like Universities are picking up the slack.
> That's a pretty sad state of affairs given the "audience" is shrinking by the day. And then later those graphics programmers leave or get laid off from Unity/Epic/an AAA studio with a custom engine, and those same studios wonder why they can't find any DX12/Vulkan engineers to their satisfaction.
That's more a symptom of how garbage working in the game development industry is, and less about any underlying technology. There's a reason I work on a game engine for fun, as my hobby, and not professionally despite having the option to do so. Everyone I spoke to in the industry talks about how terrible the working conditions are.
A professional graphics developer I recently talked to summed it up well - everyone needs a game engine, but no one wants to pay people to make and maintain one.
I did see that post. It is commendable, but we should also note that the author has 15 years of experience in tech and was already a solo developer as a hobbyist.
It can be easy to forget that there's a lot of cruft and API surface to grok for these things, potentially out of scope for students and juniors who haven't had to navigate codebases with millions of LoC in various states of disarray. That speaks more to our ability to tolerate the chaos than to the learnability of the API.
>I don't think it really makes sense for them to teach how to use Vulkan well or how to make a fast renderer, the details of that often change quickly year by year anyways
From a learner's POV I agree. From the industry's point of view, they want someone who can jump into the fray with minimal training. And we both hopefully understand that theory doesn't necessarily translate to real-world experience. So there's a critical bridge missing somewhere, and as of now the industry just expects potential programmers to cross it in their free time somehow.
Which in and of itself still isn't a trivial matter, because so much of this knowledge is tribal wisdom carried inside the industry. So you can see where the issues add up. You'll find breadcrumbs scattered here and there across the net, but that only adds more obstacles for people trying to hit that bar.
>That's more a symptom of how garbage working in the game development industry is, and less about any underlying technology. There's a reason I work on a game engine for fun, as my hobby, and not professionally despite having the option to do so. Everyone I spoke to in the industry talks about how terrible the working conditions are.
I can concur with that as someone in the industry. But there aren't really that many places you can go to work professionally if you're not in games:
- animation renderers (Pixar, DreamWorks, Illumination, maybe Laika), but the reputation in that industry isn't much better
- various research firms that look more for PhDs if anything, maybe some Master's students. So you're basically in academia land (which is known for its lack of pay, even compared to games).
- and of course, the GPU companies: Nvidia, Intel, and AMD, among a few others.
It's a very niche field that requires very specialized knowledge. If no one's offering training, or even above-average pay, what are you going to do? Left unchecked, these kinds of fields will be the first to suffer brain drain as the pioneers start to retire or die off.
>A professional graphics developer I recently talked to summed it up well - everyone needs a game engine, but no one wants to pay people to make and maintain one.
I'd say that's the 2020s in general, yes. Everyone wants a senior+ level workload at the pay of a junior. Meanwhile, efficiency is going up, and they instead try to pack on more work than ever to "compensate". Something's got to give.
Writing a render graph system (or any reasonably complex renderer that does at least several passes) in Vulkan means juggling manual synchronization, but writing one in WebGPU means you pay a little performance to not have to do that. If you want to graduate your renderer from WebGPU to Vulkan/DX12 later, you can do that pretty easily, I imagine. So it front-loads the fun and lets you postpone the boring boilerplate somewhat.
Obviously rendering is always going to be about managing dozens of descriptor sets and pipelines and controlling which resources are written and copied when and where. But WebGPU strikes a pretty good balance for complexity I think.
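To illustrate the manual synchronization in question: in Vulkan, moving, say, a shadow map from "being written as a depth attachment" to "being sampled in a later pass" means spelling out a barrier like this sketch by hand for every such edge in your graph, which is roughly the bookkeeping WebGPU handles for you:

    #include <vulkan/vulkan.h>

    /* Sketch: hand-written barrier between a shadow/depth pass and the pass
     * that samples its output. A render graph has to emit one of these for
     * every resource transition it schedules. */
    static void shadow_map_to_sampled(VkCommandBuffer cmd, VkImage shadow_map)
    {
        VkImageMemoryBarrier barrier = {
            .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
            .srcAccessMask = VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT,
            .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
            .oldLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
            .newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
            .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
            .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
            .image = shadow_map,
            .subresourceRange = {
                .aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT,
                .baseMipLevel = 0, .levelCount = 1,
                .baseArrayLayer = 0, .layerCount = 1,
            },
        };
        vkCmdPipelineBarrier(cmd,
            VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT, /* after depth writes finish */
            VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,     /* before the sampling shader runs */
            0, 0, NULL, 0, NULL, 1, &barrier);
    }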
It isn't for fun that most Web 3D frameworks like Three.js, Babylon.js, and PlayCanvas provide their own shading abstractions; there are three shading languages to target now.
Although it appears to be the next managed 3D API for Android userspace, as communicated at SIGGRAPH; then again, that's better than being stuck with GL ES 3.x as it is now.
So a matter of perspective I guess.