I've been working on a WebGL viewer to render SVRaster voxel scenes in the browser, since the paper only ships with a CUDA-based renderer. I decided to publish the code under the MIT license. Here's the repository: https://github.com/samuelm2/svraster-webgl/
I think SVRaster voxel rendering has an interesting set of benefits and drawbacks compared to Gaussian Splatting, and it's worth more people exploring.
I'm also hosting it on vid2scene.com/voxel so you can try it out without having to clone the repository. (Note: the voxel PLY file it downloads is about 50MB so you'll probably have to be on good WiFi).
Right now, there are still a lot more optimizations that would make it faster; I've only done the lowest-hanging-fruit ones. I get about 60 FPS on my laptop 3080 GPU at 2K resolution, and about 10-15 FPS on my iPhone 13 Pro Max.
On the GitHub README, there are more details about how to create your own voxel scenes that are compatible with this viewer. Since the original SVRaster code doesn't export PLY, there's an extra step to convert those voxel scenes to the PLY format the WebGL viewer reads.
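For what it's worth, the viewer side of that is mostly generic binary PLY reading. Here's a rough sketch of the header-parsing step (illustrative only, not the actual loader from the repo; the exact per-voxel properties depend on what the converter writes):

```typescript
// Rough sketch of binary PLY header parsing -- illustrative only,
// not the actual loader from the repo.
interface PlyHeader {
  vertexCount: number;
  properties: string[]; // property names in declaration order
  dataOffset: number;   // byte offset where the binary payload starts
}

function parsePlyHeader(buffer: ArrayBuffer): PlyHeader {
  const bytes = new Uint8Array(buffer);
  // Header is ASCII; decode a generous prefix and find its end marker.
  const text = new TextDecoder('ascii').decode(bytes.slice(0, 4096));
  const endTag = 'end_header\n';
  const end = text.indexOf(endTag);
  if (end < 0) throw new Error('Not a valid PLY header');

  let vertexCount = 0;
  const properties: string[] = [];
  for (const line of text.slice(0, end).split('\n')) {
    const parts = line.trim().split(/\s+/);
    if (parts[0] === 'element' && parts[1] === 'vertex') {
      vertexCount = parseInt(parts[2], 10);
    } else if (parts[0] === 'property') {
      properties.push(parts[parts.length - 1]); // e.g. x, y, z, ...
    }
  }
  return { vertexCount, properties, dataOffset: end + endTag.length };
}
```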
If there's enough interest, I'm also considering doing a BabylonJS version of this.
Also, this project was made with heavy use of AI assistance ("vibe coded"). I wanted to see how it would go for something graphics related. My brief thoughts: it is super good for the boilerplate (defining/binding buffers, uniforms, etc.). I was able to get simple voxel rendering within minutes to hours. But when it comes to solving the harder graphics bugs, the benefits are a lot lower. There were multiple times where it went in the completely wrong direction and I had to rewrite portions manually. But overall, I think it is definitely a net positive for smaller projects like this one. In a more complex graphics engine / production environment, the benefits might be less clear for now. I'm interested in what others think.
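To give a concrete sense of the boilerplate I mean, here's a rough sketch of the kind of buffer/uniform setup AI tools churn out reliably (illustrative only, not the viewer's actual code; the attribute and uniform names are made up):

```typescript
// Illustrative WebGL2 boilerplate sketch -- not the actual viewer code.
// Assumes an existing WebGL2 context `gl` and a compiled/linked `program`;
// the attribute/uniform names (a_position, u_viewProj) are placeholders.
function setupVoxelBuffers(
  gl: WebGL2RenderingContext,
  program: WebGLProgram,
  positions: Float32Array, // per-voxel centers, xyz interleaved
  viewProj: Float32Array   // 4x4 view-projection matrix
): void {
  // Create and fill a vertex buffer with voxel positions.
  const positionBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

  // Point the shader's position attribute at that buffer.
  const posLoc = gl.getAttribLocation(program, 'a_position');
  gl.enableVertexAttribArray(posLoc);
  gl.vertexAttribPointer(posLoc, 3, gl.FLOAT, false, 0, 0);

  // Upload the camera matrix as a uniform.
  gl.useProgram(program);
  const vpLoc = gl.getUniformLocation(program, 'u_viewProj');
  gl.uniformMatrix4fv(vpLoc, false, viewProj);
}
```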
Regarding AI coding assistance... Yeah, I imagine it would be great for someone who wants to start hundreds of small projects, because it's pretty cool what you can do in a few minutes, but it really doesn't take many days before you're much better off doing most things yourself. I guess this state of affairs is relatively good for our job security, but a real bummer from the "I want to harness the power of hundreds of digital junior devs to do my bidding" perspective. Still something I'd consider solidly useful and not just a load of hype though.
BTW, it seems a bit senseless to draw frames when there is no interaction since the geometry is static. Maybe it wouldn't be a bad idea to stop drawing when there are no active interactions.
And I agree re: constantly drawing frames vs only drawing on change, that's a good optimization to make.
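A minimal dirty-flag sketch looks something like this (render() and the event wiring are placeholders, not this project's actual API):

```typescript
// Minimal render-on-demand sketch: redraw only when something changed.
// `render()` stands in for the viewer's existing draw call.
declare function render(): void;

const canvas = document.querySelector('canvas')!;
let needsRedraw = true;

function requestRedraw(): void {
  needsRedraw = true;
}

function frame(): void {
  if (needsRedraw) {
    needsRedraw = false;
    render(); // the geometry is static, so only the camera state matters
  }
  requestAnimationFrame(frame);
}

// Any interaction that can move the camera marks the frame dirty.
canvas.addEventListener('pointermove', requestRedraw);
canvas.addEventListener('wheel', requestRedraw);
window.addEventListener('resize', requestRedraw);

requestAnimationFrame(frame);
```

Since the geometry is static, the only things that invalidate a frame are camera motion and resizes, so the GPU can sit idle the rest of the time.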
In your observation, is this technique restricted to static geometry? Or, is there a clear path to rendering a skinned animated character using SVRaster?
If, hypothetically, someone figured out some magical algorithm to do ray tracing at RTX levels of performance on a CPU, it would severely hurt NVIDIA, right? (Hurt them in terms of the GPU market; probably a non-issue now that AI is most of their market.)
Intel and AMD are both big players with a vested interest in promoting the capabilities of CPUs and of non-NVIDIA GPUs, since they sell both. They're big and well-capitalized, so if they wanted to, they could be running big graphics research teams (whether they are is unclear to me, and it's obviously not a 'snap your fingers and you have a big graphics research team' situation, but they have the resources).
In some cases if you see CUDA being used for a demo or a research project it's just because the alternative stacks kinda suck, not because only NVIDIA hardware is capable of doing the thing. Researchers aren't necessarily concerned with shipping on 99% of consumer machines, so they can reach for the most convenient approach even if it's vendor-locked.
I wouldn't be surprised if we see some researchers start targeting Apple's platform-locked Metal API in the future, since Apple has compelling unified-memory offerings with capacities exceeding everybody else's.
https://vid2scene.com/voxel/?url=https://huggingface.co/samu...
I'd try to go higher, but I start running out of VRAM when training a scene with too many voxels. Maybe I'll spin up a RunPod instance to train a higher-voxel-count scene on a beefier GPU.