I'm sure Vulkan is fun and wonderful for people who really want low level control of the graphic stack, but I found it completely miserable to use. I still haven't really found a graphics API that works at the level I want that I enjoyed using; I would like to get more into graphics programming since I do think it would be fun to build a game engine, but I will admit that even getting started with the low level Vulkan stuff is still scary to me.
I think what I want is something like how SDL does 2D graphics, but for 3D. My understanding is that for 3D in SDL you just drop into OpenGL or something, which isn't quite what I want.
Maybe WebGPU would be something I could have fun working on.
Although after writing an entire engine with it, I ended up wanting more control, more perf, and to not be limited by the lowest common denominator limits of the various backends, and just ended up switching back to a Vulkan-based engine.
However, I took a lot of learnings from the SDL GPU code, such as their approach to synchronization, which was a pattern that solved a lot of problems for me in my Vulkan engine, and made things a lot easier/nicer to work with.
Vulkan was meant to succeed OpenGL, and despite my annoyances with the API, I still think that it's nice to have an open standard for these things, but now there isn't any graphics API that works on everything.
I just want OpenGL, it was the perfect level of abstraction. I still use it today, both at work and for personal projects.
If only that were true for the resource binding model ;) WebGPU BindGroups are a 1:1 mapping to the Vulkan 1.0 binding model, and it's also WebGPU's biggest design wart. Even Vulkan is moving away from that overly rigid model, so we'll probably be stuck with a WebGPU that's more restrictive than required by any of its backend APIs :/
There’s another wrapper abstraction we all love and use called BGFX that is nice to work with. Slightly higher level than Vulkan or Metal but lower than OpenGL. Works on everything, consoles, fridges, phones, cars, desktops, digital signage.
My own engines have jumped back and forth between WebGPU and BGFX for the last few years.
WebGPU is a standard, not necessarily for the web alone.
At no point does a browser ever enter the picture.
However, throw a bunch of engineers in a room…
When wgpu got mature enough, they needed a way to expose the rust API for other needs. The C wrapper came. Then for testing and other needs, wgpu-native. I’m not a member of either team so I can’t say why for sure but because of those decisions, we have this powerful abstraction available pretty much on anything that can draw a web page. And since it’s just exposing the buffers and things that Vulkan, Metal, etc are already based on, it’s damned fast.
The added benefit is you get WGSL as your shading language which can translate into any and all the others.
The downsides are it provides NO WINDOW support as that needs to be provided by the platform, i.e. you. Good news is the tests and stuff use glfw and it’s the same setup to get Vulkan working as it is to get WebGPU working. Make window, probe it, make surface/swap chain, start your threads.
It's true that you can use Dawn and wgpu from native code, but that's all outside the spec.
https://eliemichel.github.io/LearnWebGPU/introduction.html
> Yeah, why in the world would I use a web API to develop a desktop application?
> Glad you asked, the short answer is:
> - Reasonable level of abstraction
> - Good performance
> - Cross-platform
> - Standard enough
> - Future-proof

The intent and the application are never squarely joined. Yes, it's made for the web. However, it's an API for graphics. If you need graphics, and you want to run anywhere that a web page could run, it's a great choice.
If you want to roll your own abstraction over Vulkan, Metal, DX12, Legacy OpenGL, Legacy DX11, Mesa - be my guest.
You don't know it yet, but what you really want is DirectX 9/10/11.
The reason you don't know it yet is that it does a fair amount of bookkeeping for you at runtime, only supports a single, general-purpose queue per device, and has several other limitations that only matter when you want to max out the capabilities of the hardware.
Vulkan is miserable, but several things are improved by using a few extensions supported by almost all relevant vendors. The misery mostly pays off, but there are a couple of cases where the API asks you for a lot of detail which all major drivers then happily go ahead and ignore completely.
The last one has profound effects on concurrency, because it means you don't have to serialize texture reads between SAMPLED and STORAGE.
If I was a beginner looking to get a basic understanding of graphics and wanted to play around, I shouldn’t have to know or care what a “shader” is or what a vertex buffer and index buffer are and why you’d use them. These low level concepts are just unnecessary “learning cliffs” that are only useful to existing experts in the field.
Maybe unpopular opinion: only a relative handful of developers working on actually making game engines need the detailed control Vulkan gives you. They are willing to put up with the minutiae and boilerplate needed to work at that low level because they need it. Everyone else would be better off with OpenGL.
OpenGL still works. You can set up an old-school glBegin()-glEnd() pipeline in as few as 10 lines of code, set up a camera and vertex transform, link in GLUT for some windowing, and you have the basic triangle/strip of triangles.
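The "camera and vertex transform" part needs no API at all. Here's a minimal sketch in Python of a gluPerspective-style projection matrix and perspective divide (assuming the usual right-handed eye space looking down -z and column-vector convention; the function names are mine):

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective matrix (right-handed, column vectors)."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(m, p):
    """Map an eye-space point (x, y, z) to normalized device coordinates."""
    x, y, z, w = (sum(m[r][c] * (p + (1.0,))[c] for c in range(4)) for r in range(4))
    return (x / w, y / w, z / w)
```

Points on the near plane land at NDC z = -1 and points on the far plane at z = +1, which is exactly what the fixed-function pipeline did for you behind glFrustum.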
OpenGL is a fantastic way to introduce people to basic graphics programming. The really annoying part is textures, which can be gently abstracted over. However, at some point the abstractions will start to be either insufficient in terms of descriptive power, or inefficient, or leaky, and that's when advanced courses can go into Vulkan, CPU and then GPU-accelerated ray tracing, and more.
With that said, we decided to focus on DX12 eventually because it just made sense. I've written our platform layers targeting OpenGL, DX12, Vulkan and Metal, and once you've internalized all of these I really don't think the horribleness of the lower level APIs is as bad as people make it out to be. They're very debuggable, very clear and well supported.
BTW: If anyone says OpenGL is "deprecated", laugh in their face.
There is no "technical" solution to this, no even-better API that would make them support it, as it's a business decision as much as anything else.
I know it is on Apple, but let's just assume I don't care about Apple specifically.
Honestly, starting out with OpenGL and moving to DX12 (which gets translated to Vulkan on Linux very reliably) is not a bad plan overall; DX12 is IMO a nicer and better API than Vulkan while still retaining the qualities that makes it an appropriate one once you actually want control.
Edit:
I would like to say that I really think one ought to use DSA (Direct State Access) and generally as modern an OpenGL style as one can, though. It's easy to get bamboozled into using older APIs because a lot of tutorials do, but you should translate those things into modern OpenGL instead; trust me, it's worth it.
Actual modern OpenGL is not as overtly about global state as the older API so at the very least you're removing large clusters of bugs by using DSA.
On top of that you can just use a much better shading language (HLSL) with DX12 by default without jumping through hoops. I did set up HLSL usage in Vulkan as well but I'm not in love with the idea of having to add decorators everywhere and using a 2nd class citizen (sort of) language to do things. The mapping from HLSL to Vulkan was also good enough but still just a mapping; it didn't always feel super straight forward.
(Edit: To spell it out properly, I initially used GLSL because I'm used to it from OpenGL and had previously written some Vulkan shaders, but the reason I didn't end up using GLSL is because it's just very, very bad in comparison to HLSL. I would maybe use some other language if everything else didn't seem so overwrought.)
I don't hate Vulkan, mind you, I just wouldn't recommend it over DX12 and I certainly just prefer using DX12. In the interest of having less translation going on for future applications/games I might switch to Vulkan, though, but still just write for Win32.
If you make a game instead of a game engine, you can use one of the existing engines.
The other big push would be Epic cutting royalties until you're earning a significant amount, which discourages studios from hiring for, or allocating as many resources to, in-house engines.
In fact the "engine" part itself is quite small compared to the editor, and the hardest things can be handled with third-party solutions, many of them open source: physics, rendering, audio, ECS, controls, asset loading, shader conversion.
The reason people gravitate towards Unity/Unreal is because of the low barrier to entry. This caused the monoculture among hobbyists.
The reason studios are gravitating to those engines is that there is plenty of cheap labour available.
Definitely recommend starting with a more "batteries included" framework, then trying your hand at OpenGL; then Vulkan will at least make a bit more sense. SDL is a decent place to start.
A lot of the friction is due to the tooling and debugging, so learning how to do that earlier rather than later will be quite beneficial.
I'm just going to dump some links really quick, which should get anyone started.
Getting a framebuffer on screen: https://github.com/zserge/fenster
I would recommend something like SDL if you want a more complete platform abstraction, it even supports software rendering as a context mode.
Filling solid rectangles is the obvious first step.
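That first step is tiny once you have a framebuffer in memory. A minimal sketch in Python, assuming a flat list of packed 0xRRGGBB ints as the framebuffer (names and layout are my choices, not from any of the linked pages):

```python
WIDTH, HEIGHT = 320, 200
fb = [0x000000] * (WIDTH * HEIGHT)  # flat framebuffer, one packed int per pixel

def fill_rect(fb, x, y, w, h, color):
    # No clipping yet: the caller must keep the rect inside the framebuffer.
    for row in range(y, y + h):
        base = row * WIDTH
        for col in range(x, x + w):
            fb[base + col] = color

fill_rect(fb, 10, 10, 50, 30, 0xFF0000)
```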
Loading images and copying pixels onto parts of the screen is another. I recommend just not drawing things that intersect the screen boundaries to get started. Clipping complicates things a bunch but is essential.
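When you do get to clipping, it's mostly a matter of shrinking the copy rectangle once, up front, rather than testing every pixel. A sketch (assuming flat list-of-ints framebuffers; the names are mine):

```python
def blit(dst, dst_w, dst_h, src, src_w, src_h, x, y):
    # Clip the copy rectangle against the destination bounds before the
    # loop, so partially off-screen sprites are safe without per-pixel tests.
    x0, y0 = max(x, 0), max(y, 0)
    x1, y1 = min(x + src_w, dst_w), min(y + src_h, dst_h)
    for dy in range(y0, y1):
        for dx in range(x0, x1):
            dst[dy * dst_w + dx] = src[(dy - y) * src_w + (dx - x)]
```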
Next up: ghetto text blitting https://github.com/dhepper/font8x8 I dislike how basically every rendering tutorial just skips over drawing text on screen, which is super useful for debugging.
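Monospace bitmap fonts make debug text nearly free once you can poke pixels. A sketch of drawing one glyph in the font8x8 layout (one byte per row, least-significant bit = leftmost column); note the glyph bytes below are hand-made placeholder data, not copied from the real font8x8 tables:

```python
# Hand-drawn '+' glyph; placeholder data in the font8x8 row-byte layout.
GLYPH_PLUS = [0x00, 0x18, 0x18, 0x7E, 0x7E, 0x18, 0x18, 0x00]

def draw_glyph(fb, fb_w, glyph, x, y, color):
    # Each byte is one row; bit i set means column i is lit.
    for row in range(8):
        bits = glyph[row]
        for col in range(8):
            if (bits >> col) & 1:
                fb[(y + row) * fb_w + (x + col)] = color
```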
For drawing single pixel lines, this page has everything on Bresenham:
http://members.chello.at/easyfilter/bresenham.html
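The all-quadrant integer version from that page fits in a dozen lines; here is a Python transcription (my variable names, returning the pixel list instead of plotting directly):

```python
def bresenham(x0, y0, x1, y1):
    """Walk the integer line from (x0, y0) to (x1, y1), inclusive."""
    pts = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # combined error term covers all octants
    while True:
        pts.append((x0, y0))
        if x0 == x1 and y0 == y1:
            return pts
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
```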
For 2d rasterization, here's an example of 3 common approaches: https://www.mathematik.uni-marburg.de/~thormae/lectures/grap...
Scanline rasterization taught me a lot about traversing polygons; I recommend trying it even if you end up preferring a different method. Sean Barrett has a good overview: https://nothings.org/gamedev/rasterize/
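A minimal even-odd scanline fill is worth writing at least once. A Python sketch (pixel centers sampled at y + 0.5; names are mine, and degenerate cases like horizontal edges are handled only implicitly by the crossing test):

```python
import math

def scanline_fill(poly, set_pixel):
    """Even-odd fill of a polygon given as a list of (x, y) vertices."""
    ys = [y for _, y in poly]
    for y in range(math.floor(min(ys)), math.ceil(max(ys))):
        sy = y + 0.5  # sample at the pixel-row center
        xs = []
        for i in range(len(poly)):
            (ax, ay), (bx, by) = poly[i], poly[(i + 1) % len(poly)]
            if (ay <= sy) != (by <= sy):  # edge crosses this scanline
                xs.append(ax + (sy - ay) * (bx - ax) / (by - ay))
        xs.sort()
        for x0, x1 in zip(xs[0::2], xs[1::2]):  # fill between crossing pairs
            for x in range(math.ceil(x0 - 0.5), math.ceil(x1 - 0.5)):
                set_pixel(x, y)
```

The pairing of sorted crossings is what generalizes this beyond triangles to arbitrary (even self-intersecting) polygons.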
Side note: analytical antialiasing is fast, but you should be careful with treating alpha as coverage; the analytic approaches tell you how much of a pixel is covered, not which parts are.
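A concrete way to see that pitfall: two shapes that butt up exactly against each other, each covering half of the same pixel. "Over" compositing assumes the two coverage values are uncorrelated, so the seam shows:

```python
def over(src_a, dst_a):
    # Porter-Duff "over": treats the two coverage values as independent.
    return src_a + dst_a * (1.0 - src_a)

# A shared edge splits one pixel: the left shape covers 50% and the right
# shape covers the other 50%, so true combined coverage is 100%.
blended = over(0.5, 0.5)
# Alpha-as-coverage reports 0.75, so the background bleeds through the seam.
```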
For 3d rasterization Scratchapixel is good: https://www.scratchapixel.com/lessons/3d-basic-rendering/ras...
Someone mentioned the Pikuma course which is also great, though it skips over some of the finer details such as fixed point rasterizing.
For good measure here's some classic demoscene effects for fun: https://seancode.com/demofx/
Anyway, this is just scratching the surface, being progressively able to draw more and more types of primitives is a lot of fun.
OpenGL was designed as a way to more or less do that and it turned complicated fast.
Sadly 1) Apple only, 2) soft deprecated.
I imagine it will still be around for a long time because Apple and a lot of large third party apps use it for simple 3D experiences. (E.g. the badges in the Apple Fitness app).
Apple wants devs to move to RealityKit, which does support non-AR 3D, but it is still pretty far from feature parity with SceneKit. Also RealityKit still has too many APIs that are either visionOS only or are available on every platform but visionOS.
Microrant: I absolutely loathe when I am told "move to new thing. Old thing is deprecated/unsupported" and the new thing is incredibly far from feature parity and usually never reaches parity, let alone exceeds it. This is not just an Apple problem.
In general, it suffered from the problem of even Apple not knowing what it was made for, and what it even is. For a 3D API, it has less features than OpenGL 2. For a game engine, it… also has way less features than the competition, which shouldn’t surprise anyone - game engines are hard, and the market leaders have been developed for _decades_. But that’s what it looks like the most - a game engine. (It even has physics.)
Customizing the rendering pipeline in SceneKit is absolutely horrible. The user is given a choice between two equally bad options: either adding SCNTechniques, which are configurable through .plists and provide no feedback on what goes wrong with their configuration (as if 3D rendering weren't hard enough already), or using "shader modifiers" - placing chunks of Metal code into one of 4 places in SceneKit's default shader, which the end users _don't even have the source code of_ without hacking into the debug build! Or pulling it from GitHub from people who already did that [_].
If you just need something that can display 3d data, SceneKit is still fine, but once there’s a requirement to make that look good, it’s better to throw everything away and hook up Unity instead.
[_] https://gist.github.com/warrenm/794e459e429daa8c75b5f17c0006...
I find SDL3 more fun and interesting, but it’s a ton of work to get going.
I personally have just been building off of tutorials. But notwithstanding all of the boilerplate code, the enjoyability of a code base can be vastly different.
The most fun I’ve ever had coding, and still do at times, is with WebGL. I just based it off of the Mozilla tutorial and went from there. WebGLFundamentals has good articles…but to be honest I do not love their code
I wonder if I can get it working with F# in Linux…
Frank Luna’s D3D11 bible is probably the closest thing we’ll get to a spaced-repetition learning curriculum for 3D graphics, at a level where you can do an assload with the knowledge.
No, it won’t teach you to derive things. Take Calculus I and II.
No, it won’t teach you about how light works. Take an advanced electrical engineering course on electromagnetism.
But it will teach you the nuts and bolts in an approachable way using what is by far an excellent graphics API, Direct3D 11. Even John Carmack approves.
From there on, all the Vulkan and D3D12 shit is just memory fences, buffers, and queue management. Absolute trash that you shouldn’t use unless you have to.
Plenty of people make minecraft-like games as their first engine. As far as voxel engines go, a minecraft clone is "hello, world."
I remember reading NeHe OpenGL tutorials about 23 years ago. I still believe it was one of the best tutorial series about anything in the way they were structured and how each tutorial built over knowledge acquired in previous ones.
Tbh, OpenGL sucks just as much as Vulkan, just in different ways. It's time to admit that Khronos is simply terrible at designing 3D APIs ;) (probably because there are too many cooks involved)
I don't like Vulkan. I keep thinking, did nobody look at this and think "there must be a better way"? But it's what we've got, and mostly it's just: learn it and write the code once.
To fix this AMD developed Mantle in 2013. This inspired others: Apple released Metal in 2014, Microsoft released DX12 in 2015, and Khronos released Vulkan in 2016 based on Mantle. They're all kind of similar (some APIs better than others IMO).
OpenGL did get some extensions to improve it too but in the end all the big engines just use the other 3.
Direct3D (and Mantle) had been offering lower level access for years, Vulkan was absolutely necessary.
It’s like assembly. Most of us don’t have to bother.
The commonalities to both are:
- Instances and devices
- Shaders and programs
- Pipelines
- Bind groups (in WebGPU) and descriptor sets (in Vulkan)
- GPU memory (textures, texture views, and buffers)
- Command buffers
Once I was comfortable with WebGPU, I eventually felt restrained by its limited feature set. The restrictions of WebGPU gave me the motivation to go back to Vulkan. Now, I'm learning Vulkan again, and this time, the high-level concepts are familiar to me from WebGPU.
Some limitations of WebGPU are its lack of push constants, and the "pipeline explosion" problem (which Vulkan tries to solve with the pipeline library, dynamic state, and shader object extensions). Meanwhile, Vulkan requires you to manage synchronization explicitly with fences and semaphores, which required an additional learning curve for me, coming from WebGPU. Vulkan also does not provide an allocator (most people use the VMA library).
SDL_GPU is another API at a similar abstraction level to WebGPU, and could be an easier choice than Vulkan for getting started. So if you're still interested in learning graphics programming, WebGPU or SDL_GPU could be good to check out.
The question you need to ask is: "Do I need my graphics to be multithreaded?"
If the answer is "No"--don't use Vulkan/DX12! You wind up with all the complexity and absolutely zero of the benefits.
If performance isn't a problem, use anything else--OpenGL, DirectX 11, game engines, etc.
Once performance becomes the problem, then you can think about Vulkan/DX12.
Programmers should absolutely not be using DX12/Vulkan unless they understand exactly why they should be using it.
I really hope SDL3 or wgpu could be the abstraction layer that settles all these down. I personally bet on SDL3 just because they have support from Valve, a company that has reasons to care about cross platform gaming. But I would look into wgpu too (...if I were better at rust, sigh)
With Vulkan this is borderline impossible and it becomes messy quite quickly. It's very low level. Unlike OpenGL, one really needs an abstraction layer on top, so you either gotta use a library or write your own in the end.