40 points by rossant 4 days ago | 6 comments
  • davexunit3 days ago
    WGSL is easily the worst part of WebGPU, which I like overall. The world did not need another high-level shading language. They could have just used SPIR-V bytecode if it wasn't for Apple sabotage. The GPU API in SDL3, by comparison, has chosen the only sane option and just asks you to pass whatever the native shader format is.
    • johnnyanmac3 days ago
      Why am I not surprised that it's Apple that continues to make the world of graphics programming even more difficult than it needs to be.

      > The GPU API in SDL3, by comparison, has chosen the only sane option and just asks you to pass whatever the native shader format is.

      I get why they don't do this, to be fair. That would mean they would need to have a proper parser for a half dozen shader models. Or at least promise to make and maintain the others soon.

      • davexunit3 days ago
        > That would mean they would need to have a proper parser for a half dozen shader models.

        It avoids this entirely. If you're on a system whose GPU driver only speaks DXBC and you hand it SPIR-V, that would be an error. This is what SDL GPU does. The SDL team conveniently made a small library that can cross-compile SPIR-V to the other major bytecode formats that you can integrate into your build pipeline.

        • LinAGKar3 days ago
          That obviously wouldn't work for the web though, since it would make webpages OS-specific
          • vetinari3 days ago
            Not only that; passthrough to real GPU hardware on the web is a quick way to get 0wned. The GPU drivers - and a bunch of hardware too - are not robust enough to be exposed this way.

            So WebGL and WebGPU filter and check everything that passes between the webpage and the real hardware.

            • beeflet3 days ago
              WebGL and WebGPU are still major vectors for device fingerprinting. In an ideal world, GPU access would result in a browser popup like location services and notifications currently do.

              But admittedly this is not the only major vector for fingerprinting. I would also say that User-Agent shouldn't be a header but an autofillable form input, and that cookies should be some transparently manageable tab in the address bar (and should be renamed to something more comprehensible to the average person, like "tokens" or "tickets").

          • beeflet3 days ago
            WebGL sort of makes webpages API-specific already by pretty much forcing them to implement OpenGL ES. I think if anything SPIR-V is less imposing on the OS, because you don't have to implement the whole GLSL front end of the compiler and can just deal with the intermediate representation.

            You end up with a strange situation where a company like apple doesn't want to support OpenGL or provide a translation layer in their OS, but they effectively end up doing so in their browser anyways.

            But the downside of SPIR-V, I think, is that you make the web less "open", because a bytecode (unlike GLSL or whatever SL) isn't immediately transparent to the user. In the same way, we usually expect to open up a webpage and inspect the JavaScript (because it is typically not minified by convention), whereas the introduction of WASM will require a decompiler to do the same.

            The web so far has been a kind of strange bastion of freedom, with adblockers and other types of plugins being able to easily modify webpages. In the future this will be more difficult with web apps, as it would amount to decompiling and patching a portable executable (Flutter, etc).

          • davexunit3 days ago
            If only there were a shader IR that was made with portability in mind!
      • mwkaufma3 days ago
        SDL3 offers a SPIRV-Cross reference impl as an optional auxiliary library.
        • modeless3 days ago
          Also DirectX is adopting SPIR-V, so that will help in the future as well.

          Yes, it's true! https://devblogs.microsoft.com/directx/directx-adopting-spir...

          • NekkoDroid3 days ago
              DXC can already compile to SPIR-V (with some missing features/limitations IIRC); this just effectively replaces DXIL with SPIR-V in future shader models and makes it THE language the compiler speaks.

            They are also slowly but surely porting their DXC compiler (forked from Clang 3.6 I think) to upstream Clang.

            • vrighter2 days ago
              It's not a question of whether it can compile to spir-v, but a question of whether dx12 can consume spir-v
      • dan-robertson3 days ago
        I think apple contributed lots of things that make wgpu a nicer api for people. I’m not saying this particular contribution was good, just that it’s not so black and white.
        • davexunit3 days ago
          Apple and all the big players bring a lot of engineering knowledge to the table, but they also indulge in corporate sabotage on a regular basis.
    • bsder3 days ago
      WebGPU was a total surrender to Apple, and Apple still didn't implement it.

      Given that Microsoft has also thrown in with SPIR-V and Apple still isn't shipping WebGPU, the next version of WebGPU should tell Apple to fuck off, switch to SPIR-V, and pick up Windows, XBox, and Linux at a stroke.

      • modeless3 days ago
        Apple is implementing it. They are just slow. There's no point in a web standard that Apple won't implement as long as they hold a monopoly on iOS browser engines.
        • bsder3 days ago
          Just like Microsoft was slow about web standards in IE6. <rolls eyes>

          Tell Apple to fuck off and roll it out--designers will flock to Teh Shiny(tm). When enough designers can't run their glitzy web thing, Apple will cave.

          • vetinari3 days ago
            Google and Mozilla are also pretty slow. Google still doesn't support it on more than some SoCs on their own Android, leaving the bulk of the market unsupported, never mind Linux. Mozilla also got lost somewhere.
            • jdashg3 days ago
              It is not easy to write a safe, robust intermediate graphics driver from scratch that is secure and runs everywhere, especially while the spec continues to change in response to implementation experience.
            • johnnyanmac3 days ago
              Yeah, they weren't much faster at adopting WebGL or Vulkan or OpenGL ES either (even if it did indeed suck). Something tells me the business incentive isn't there as much for high performance on the web compared to desktop or even mobile (which is starting to increase demand as Asia makes more AAA-esque games for mobile).

              It's also not like advancement has been a priority for tech this decade so far.

              • pjmlp3 days ago
                Let's put it this way: a decade after WebGL 1.0, there still aren't any usable developer tools in browsers to sanely debug 3D rendering.

                To the point that most studios would rather use something like streaming, where at least they enjoy the convenience of tooling like RenderDoc, PIX, Instruments, Nsight, ...

    • Rusky3 days ago
      They could not "just" have used SPIR-V bytecode. WebGL already had to do a bunch of work to restrict GLSL's semantics to work in the web sandbox; whatever WebGPU chose would have had the same problem.
      • modeless3 days ago
        In comparison to inventing a new language and three independent interoperable implementations from scratch, I think "just" is appropriate.
      • mwkaufma3 days ago
        WGSL is defined as a "bijection" to a "subset" of spir-v -- they could have simply specced that subset.
        • MindSpunk3 days ago
          I do not believe this is true anymore; WGSL is a "real language" now that actually requires a "real language frontend" to compile. The original spec didn't even have while loops; they expected you to effectively write the SSA graph by hand (because SPIR-V is an SSA IR). Then the design drifted as they kept adding pieces until they accidentally made an actual language again.
        • pjmlp3 days ago
          That was the original plan; then they created their Rust-inspired language instead.
          • mwkaufma2 days ago
            Yeah, exactly. That's why refusing to simply spec the bytecode at the time was precisely the sabotage that everyone called it out as.
  • alkonaut3 days ago
    This is from 2021 and the main issue the author has with wgsl has long since been fixed. There is a lot less <f32> needed in ~2014~. Edit: in 2024
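    For illustration, a minimal sketch (mine, not the article's) of what the fix looks like; the shader is shown as the TypeScript string you would hand to createShaderModule:

        // Hypothetical WGSL, as it would be passed to device.createShaderModule({ code }).
        const wgsl = /* wgsl */ `
          // 2021-era spelling:
          //   var color: vec4<f32> = vec4<f32>(1.0, 0.0, 0.0, 1.0);
          // Current WGSL infers the element type from the literals
          // (vec4f also exists as a predeclared alias for vec4<f32>):
          fn red() -> vec4<f32> {
            var color = vec4(1.0, 0.0, 0.0, 1.0);
            return color;
          }
        `;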
    • kookamamie3 days ago
      But, on the other hand, more time-travel required?
      • dwattttt3 days ago
        Not anymore. The time travel issue was fixed with time travel.
  • beeflet3 days ago
    WGSL seems to have a more Rust-like syntax, versus GLSL, which is similar to C.

    I think the major advantage of WebGPU over WebGL2/OpenGL ES 3 is that you can write GPGPU shaders much more easily than with OpenGL's transform feedback system, which is very clunky. But this comes at the cost of compatibility for the time being.
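    To make that concrete, here is a minimal compute sketch against the standard WebGPU JS API (TypeScript with @webgpu/types; the doubling kernel and names are my own illustration, and readback/error handling are omitted):

        // Doubles every element of `input` on the GPU; assumes `device` came from
        // navigator.gpu.requestAdapter() / adapter.requestDevice().
        function doubleOnGpu(device: GPUDevice, input: Float32Array): void {
          const module = device.createShaderModule({
            code: /* wgsl */ `
              @group(0) @binding(0) var<storage, read_write> data: array<f32>;
              @compute @workgroup_size(64)
              fn main(@builtin(global_invocation_id) id: vec3<u32>) {
                if (id.x < arrayLength(&data)) {
                  data[id.x] = data[id.x] * 2.0;
                }
              }`,
          });
          const pipeline = device.createComputePipeline({
            layout: "auto",
            compute: { module, entryPoint: "main" },
          });
          const buffer = device.createBuffer({
            size: input.byteLength,
            usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST | GPUBufferUsage.COPY_SRC,
          });
          device.queue.writeBuffer(buffer, 0, input);
          const bindGroup = device.createBindGroup({
            layout: pipeline.getBindGroupLayout(0),
            entries: [{ binding: 0, resource: { buffer } }],
          });
          const encoder = device.createCommandEncoder();
          const pass = encoder.beginComputePass();
          pass.setPipeline(pipeline);
          pass.setBindGroup(0, bindGroup);
          pass.dispatchWorkgroups(Math.ceil(input.length / 64));
          pass.end();
          device.queue.submit([encoder.finish()]);
          // Reading results back needs a MAP_READ staging buffer + copyBufferToBuffer;
          // still no transform-feedback contortions.
        }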

    But in the Rust ecosystem at least, WebGPU has taken over the role of OpenGL ES, with libraries like wgpu becoming dominant.

    • pjmlp3 days ago
      Which is kind of bonkers: settling on an API designed for sandboxed browser environments that isn't able to expose the actual capabilities of any graphics card designed after 2016.
  • spookie3 days ago
    The built-ins are named inconsistently, they aren't visually any different from other parts of the code, and renaming (descriptor) sets to (bind) groups makes no sense when workgroups are also a thing.

    All around change for the sake of change.

    • jsheard3 days ago
      > All around change for the sake of change.

      More like change for the sake of politics: Apple didn't want to use any Khronos IP, so the WebGPU committee had to work backwards to justify inventing something new from scratch, despite feedback from potential users being overwhelmingly against doing that.

      Then after sending the spec on a multi-year sidequest to develop a shader language from scratch, Apple still hasn't actually shipped WebGPU in Safari, despite Google managing to ship it across multiple platforms over a year ago. Apple only needs to support Metal.

      • davexunit3 days ago
        Apple has been extremely slow about getting important features into Safari. They're about a year behind Chrome and Firefox on some WebAssembly things, too.
        • jsheard3 days ago
          Safari's limitations are baffling sometimes, like it's the only browser that doesn't support SVG favicons aside from a non-standard monochrome-only variant. Their engine supports SVG, just not in that context, and you'd think they would be all over resolution independent icons given their fixation on extremely high DPI displays.
          • pjmlp3 days ago
            On the other hand, they are the only wall left standing between the old Web and ChromeOS everywhere.
      • jdashg3 days ago
        > the WebGPU committee had to work backwards to justify

        Do you mean to allege "[the Apple delegates to] the WebGPU committee"? Because the committee as a whole has a ton of public minutes that show how strident the opposition to this was. (Probably filed under "This is not a place of honor" :)) I don't even want to re-read what I said at the time. No one involved, literally no one, is happy about that chapter, believe me. We are happy to be shipping something, though.

      • rahkiin3 days ago
        I’ve read that this is because they have been in a legal conflict with Khronos for a while already.
      • flykespice3 days ago
        Apple is the epitome of Not Invented Here syndrome.
      • pjmlp3 days ago
        Blaming Apple is cool and all, yet Vulkan also uses GLSL, which the WebGPU committee could have kept using for WebGPU as an evolution from WebGL, just like what happened in the OpenGL-to-Vulkan transition.
  • andrewmcwatters3 days ago
    Man, since the departure from OpenGL/OpenGL ES, graphics programming is such a pain in the butt. It's totally unfun and ridiculous.
    • johnnyanmac3 days ago
      That's more or less how graphics programming has evolved over the last 20 years: give up conveniences for more performance, taking as much control as GPU vendors will allow. It's basically inaccessible unless you're studying it in academia or employed in one of the few domains that require that power.

      This also sadly means that most tools to help navigate all this are probably trapped in studio codebases. I remember the promises around Vulkan 1.0 that you could just wait until others built some boilerplate abstraction, so people could learn graphics programming before diving into every nitty-gritty detail. I haven't looked extensively, but nothing came up on my radar while learning Vulkan.

    • davexunit3 days ago
      The fragmentation has been really frustrating, but if things like WebGPU and SDL GPU become well supported, it will make modern graphics programming mostly pleasant. I love/hate OpenGL and will miss it in a strange way.
    • 01HNNWZ0MV43FF 3 days ago
      I finally switched to WebGL 2, I think, to get nicer shadow maps. I'll ride that as far as I can. Personally I liked GLES2 a lot. Everything ran it.
    • alkonaut3 days ago
      Not at all. OpenGL had that get-something-done quickly feel but it was riddled with limitations and unnecessary complexity.

      DX12/Vulkan means you do 2000 lines of boilerplate to get anywhere.

      WebGPU is actually a nice step back towards OpenGL. You can get off the ground much faster, while still being modern in the way the APIs work.
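      For what it's worth, the whole browser-side bring-up is roughly this (a sketch against the standard JS API in TypeScript; the canvas id is my own and error handling is omitted):

          // Minimal WebGPU bring-up (assumes @webgpu/types and a <canvas id="gfx"> element,
          // in a module so top-level await works).
          const adapter = await navigator.gpu.requestAdapter();
          if (!adapter) throw new Error("WebGPU not available");
          const device = await adapter.requestDevice();

          const canvas = document.getElementById("gfx") as HTMLCanvasElement;
          const context = canvas.getContext("webgpu") as GPUCanvasContext;
          context.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });

          // From here it's createShaderModule + createRenderPipeline + one render pass,
          // versus the instance/surface/swapchain/sync layers Vulkan asks for up front.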

    • jms55 3 days ago
      What part do you dislike? If it's the complexity of newer APIs (Vulkan is 8 years old at this point, DirectX 12 is 9 years), then you might like WebGPU or any of the other userspace graphics APIs such as blade or SDL3 that have been invented over the past few years.
      • unconed3 days ago
        Not OP, but IMO the real issue is pretending graphics is still about executing individual draw calls of meshes which map 1-to-1 to visible objects.

        It's not true anymore, because you have all sorts of secondary rendering (e.g. shadow maps or pre-passes), as well as temporal accumulation. These all need their own unique shaders. With meshlets and/or Nanite, culling becomes a cross-object issue. With deferred rendering, separate materials require careful setup.

        So now the idea that a dev can just bring their own shaders to plug into an existing pipeline kind of falls apart. You need a whole layer of infrastructure on top, be it node graphs, shader closures, etc. And dispatch glue to go along with it.

        This is all true even with WebGPU where you don't have to deal with synchronization and mutexes. Just a shit show all around tbh. Rendering APIs have not kept up with rendering techniques. The driver devs just threw up their hands and said "look, it's a nightmare to keep up the facade of old-school GL, so why don't you do it instead".

        • jms55 3 days ago
          > executing individual draw calls of meshes which map 1-to-1 to visible objects.

          This has not been true since deferred shading became popular around 2008. Shadow maps were around much earlier than that even.

          There's a reason the 1:1 draw:object API has fallen out of popularity - it doesn't scale well, be it CPU overhead, lighting, culling and geometry processing, etc.

          That said, you of course still can do this if you want to. Draw calls and vertex buffers haven't gone away by any means.

          > So now the idea that a dev can just bring their own shaders to plug into an existing pipeline kind of falls apart. You need a whole layer of infrastructure on top, be it node graphs, shader closures, etc. And dispatch glue to go along with it.

          That's the job of rendering engines, not graphics APIs. If you want to work at that layer, then you use a rendering/game engine that provides the tooling for technical artists. If you _are_ the one writing the rendering/game engine, then you're thankful for the increased control modern graphics APIs give you, since it lets you build better-looking, higher-performing (more stuff is possible), and more flexible tools for your tech artists.

          > This is all true even with WebGPU where you don't have to deal with synchronization and mutexes. Just a shit show all around tbh. Rendering APIs have not kept up with rendering techniques. The driver devs just threw up their hands and said "look, it's a nightmare to keep up the facade of old-school GL, so why don't you do it instead".

          Users of the drivers got fed up with them being buggy, slow, and limited. The industry's response was to move as much code as possible out of the driver and into user space, exposing more control and low-level details to userspace. That way, you would never be bottlenecked by the driver, be it performance or bugs. The industry has realized time and time again that hardware companies are often bad at software, and it would be better to let third parties handle that aspect.

          The real failure of the graphics industry imo was Vulkan 1.0 trying to cater to old mobile devices and modern desktop devices simultaneously, and, much worse, never starting a large community project to communally develop a higher-level graphics API until WebGPU (which itself is underfunded). Even then its higher-level nature is largely a byproduct of wanting to enforce safety on untrusted webapps.

          But yes, even WebGPU is still more complicated than OpenGL 2. If you find graphics APIs too much work, you're not their target audience and you should be using a higher level API.

          • johnnyanmac3 days ago
            >If you find graphics APIs too much work, you're not their target audience and you should be using a higher level API.

            That's a pretty sad state of affairs given the "audience" is shrinking by the day. And then later those graphics programmers leave/get laid off by Unity/Epic/AAA Studio with a custom engine and they wonder why they can't find any DX12/Vulkan engineers to their satisfaction.

            For the lifeblood of the industry, tools also need to be learnable by hobbyists, at least if you don't want to spend 6-12 months training your graphics programmers yourself. The courses I peeked at at my alma mater (when Vulkan was still brand new) are still using OpenGL 3 to teach, so it doesn't sound like universities are picking up the slack.

            • jms55 3 days ago
              Vulkan/DX12 _are_ learnable by hobbyists. There was a pretty popular post here on HN 4 months ago from someone learning Vulkan and building an engine with it: https://news.ycombinator.com/item?id=40595741. Universities usually teach theory, oftentimes in the form of a raytracer on the CPU, or, like you said, a simple OpenGL renderer using some prebuilt abstractions. I don't think it really makes sense for them to teach how to use Vulkan well or how to make a fast renderer, the details of that often change quickly year by year anyways.

              > That's a pretty sad state of affairs given the "audience" is shrinking by the day. And then later those graphics programmers leave/get laid off by Unity/Epic/AAA Studio with a custom engine and they wonder why they can't find any DX12/Vulkan engineers to their satisfaction.

              That's more a symptom of how garbage working in the game development industry is, and less about any underlying technology. There's a reason I work on a game engine for fun, as my hobby, and not professionally despite having the option to do so. Everyone I spoke to in the industry talks about how terrible the working conditions are.

              A professional graphics developer I recently talked to summed it up well - everyone needs a game engine, but no one wants to pay people to make and maintain one.

              • johnnyanmac3 days ago
                >Vulkan/DX12 _are_ learnable by hobbyists.

                I did see that post. It is commendable, but we should also note that that author has 15 years of experience in tech and was already a solo developer as a hobbyist.

                It can be easy to forget that there's a lot of cruft and API to grok through for these things, things potentially out of the scope of students and juniors who haven't had to navigate codebases with millions of LoC in various states of disarray. That speaks more to our ability to tolerate the chaos than the learnability of the API.

                >I don't think it really makes sense for them to teach how to use Vulkan well or how to make a fast renderer, the details of that often change quickly year by year anyways

                From a learner's POV I agree. From the industry's point of view, they want someone who can jump into the fray with minimal training. And we both hopefully understand that theory doesn't necessarily correlate with real-world experience. So there's a critical bridge missing somewhere, one that industry as of now just expects potential programmers to build in their free time somehow.

                Which in and of itself isn't a trivial matter, because so much of this knowledge is tribal wisdom carried inside the industry. So you see where the issues add up. You'll find breadcrumbs scattered here and there across the net, but that only adds more obstacles for people trying to hit that bar.

                >That's more a symptom of how garbage working in the game development industry is, and less about any underlying technology. There's a reason I work on a game engine for fun, as my hobby, and not professionally despite having the option to do so. Everyone I spoke to in the industry talks about how terrible the working conditions are.

                I can concur with that as someone in the industry. But there aren't really that many places you can go to work professionally if you're not in games:

                - animation renderers (Pixar, DreamWorks, Illumination, maybe Laika), but the reputation in that industry isn't much better

                - various research firms that look more for PhDs if anything, maybe some master's students. So you're basically in academia land (which is known for its lack of pay, even compared to games).

                - and of course the GPU companies: Nvidia, Intel, and AMD, among a few others.

                It's a very niche field that requires very specialized knowledge. If no one's offering training or even above-average pay for it, what are you going to do? If left unchecked, these kinds of fields will be the first to suffer brain drain as the pioneers start to retire or die off.

                >A professional graphics developer I recently talked to summed it up well - everyone needs a game engine, but no one wants to pay people to make and maintain one.

                I'd say that's the 2020s in general, yes. Everyone wants a senior+ level workload with the pay of a junior. Meanwhile efficiency is going up, and they instead try to pack on more work than ever to "compensate". Something's got to give.

                • jms55 3 days ago
                  Yeah I don't disagree with anything you said.
        • alkonaut3 days ago
          Yes, but that’s where WebGPU really shines, isn’t it?

          Writing a render graph system (or any reasonably complex renderer that at least does several passes) in Vulkan means juggling manual synchronization, but writing one in WebGPU means you pay a little performance to not have to do that. If you want to graduate your renderer from WebGPU to Vulkan/DX12 later, you can pretty easily do that, I imagine. So it front-loads the fun and lets you postpone the boring boilerplate somewhat.
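          As a sketch of what that buys you (TypeScript against the standard API; the HDR texture, pipelines and bind group are hypothetical and assumed to be created elsewhere):

              // Two dependent passes on one encoder. In Vulkan the image barrier/layout
              // transition between them is your job; a WebGPU implementation tracks the
              // hazard on hdrTexture and inserts it for you.
              function encodeFrame(
                device: GPUDevice,
                context: GPUCanvasContext,
                hdrTexture: GPUTexture,            // written by pass 1, sampled by pass 2
                scenePipeline: GPURenderPipeline,
                postPipeline: GPURenderPipeline,
                postBindGroup: GPUBindGroup,       // binds hdrTexture's view + a sampler
              ): void {
                const encoder = device.createCommandEncoder();

                const scene = encoder.beginRenderPass({
                  colorAttachments: [{
                    view: hdrTexture.createView(),
                    loadOp: "clear",
                    storeOp: "store",
                  }],
                });
                scene.setPipeline(scenePipeline);
                // ...draw calls for the scene...
                scene.end();

                const post = encoder.beginRenderPass({
                  colorAttachments: [{
                    view: context.getCurrentTexture().createView(),
                    loadOp: "clear",
                    storeOp: "store",
                  }],
                });
                post.setPipeline(postPipeline);
                post.setBindGroup(0, postBindGroup); // reads hdrTexture, no explicit barrier
                post.draw(3);                        // fullscreen triangle
                post.end();

                device.queue.submit([encoder.finish()]);
              }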

          Obviously rendering is always going to be about managing dozens of descriptor sets and pipelines and controlling which resources are written and copied when and where. But WebGPU strikes a pretty good balance for complexity I think.

          • pjmlp3 days ago
            No you can't, because you will need to rewrite all your shaders anyway, unless you're already using something else to generate the WGSL.

            It isn't for fun that most Web 3D frameworks like Three.js, Babylon.js and PlayCanvas provide their own shading abstractions; there are three shading languages to target now.

            • alkonaut3 days ago
              If you write a Rust+wgpu renderer now, your options are already Rust, GLSL, WGSL (and probably more). You could do SPIR-V too on WebGPU, so long as you stick to desktop. I'm sure we'll see people load GLSL on WebGPU on the web too. Any reasonably complex renderer will include shader generation/reflection/translation and so on; just having a hole where you can plug in vanilla HLSL/GLSL shaders seems almost impossible.
              • pjmlp3 days ago
                I'd rather use WebGPU on the Web; using it for native is always going to be playing catch-up with middleware engines that don't have to worry about targeting a 2016 hardware design as the minimum viable product across all major native APIs, with browser sandboxing in mind.

                Although it appears to be the next managed 3D API for Android userspace, as communicated at SIGGRAPH; then again, it is better than being stuck with GL ES 3.x as it is now.

                So a matter of perspective I guess.

        • andrewmcwatters3 days ago
          Yep... You nailed it. It really bums me out. There's a lot you can do with simple 90s era graphics programming while still using the newer APIs, but you'll hit bottlenecks very quickly, or run into architectural issues as soon as you want to implement modern rendering techniques.
  • jesse__3 days ago
    Fuck me .. as if we needed another shading language.