47 points by samuelm2 9 days ago | 4 comments
  • samuelm2 9 days ago
    Hi all! Several weeks ago, Nvidia released a voxel-based radiance field rendering technique called SVRaster. I thought it was an interesting alternative to Gaussian Splatting, so I wanted to experiment with it and learn more about it.

    I've been working on a WebGL viewer to render the SVRaster Voxel scenes from the web, since the paper only comes with a CUDA-based renderer. I decided to publish the code under the MIT license. Here's the repository: https://github.com/samuelm2/svraster-webgl/

    I think SVRaster Voxel rendering has an interesting set of benefits and drawbacks compared to Gaussian Splatting, and I think it is worth more people exploring.

    I'm also hosting it on vid2scene.com/voxel so you can try it out without having to clone the repository. (Note: the voxel PLY file it downloads is about 50MB so you'll probably have to be on good WiFi).

    Right now, there are still a lot of optimizations that would make it faster; I've only done the lowest-hanging-fruit ones. I get about 60 FPS on my laptop 3080 GPU at 2K resolution, and about 10-15 FPS on my iPhone 13 Pro Max.

    On the GitHub README, there are more details about how to create your own voxel scenes that are compatible with this viewer. Since the original SVRaster code doesn't export PLY, there's an extra step to convert those voxel scenes to the PLY format that's readable by the WebGL viewer.
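
    For reference, a PLY file starts with a small ASCII header that declares element counts and per-vertex properties, followed (in the binary variant) by packed data. Here's a minimal sketch of reading the header in the browser, just to illustrate the format; the exact per-voxel attributes the viewer expects are documented in the README:

      async function readPlyHeader(url: string): Promise<{ vertexCount: number; bodyOffset: number }> {
        const buf = await (await fetch(url)).arrayBuffer();
        // The header is plain ASCII, terminated by the line "end_header".
        const head = new TextDecoder().decode(buf.slice(0, 4096));
        const end = head.indexOf('end_header\n');
        if (end < 0) throw new Error('not a PLY file (no end_header)');
        const vertex = head.match(/element vertex (\d+)/);
        if (!vertex) throw new Error('no vertex element in header');
        // The binary voxel payload starts right after the header.
        return { vertexCount: parseInt(vertex[1], 10), bodyOffset: end + 'end_header\n'.length };
      }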

    If there's enough interest, I'm also considering doing a BabylonJS version of this.

    Also, this project was made with heavy use of AI assistance ("vibe coded"). I wanted to see how it would go for something graphics related. My brief thoughts: it is super good for the boilerplate (defining/binding buffers, uniforms, etc.). I was able to get simple voxel rendering working within minutes to hours. But when it comes to solving the harder graphics bugs, the benefits are a lot lower. There were multiple times where it would go in the completely wrong direction and I would have to rewrite portions manually. But overall, I think it is definitely a net positive for smaller projects like this one. In a more complex graphics engine / production environment, the benefits might be less clear for now. I'm interested in what others think.
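
    To make the boilerplate point concrete, this is the kind of setup code it one-shots reliably (a generic WebGL2 sketch with made-up names, not the actual viewer code):

      function createProgram(gl: WebGL2RenderingContext, vsSrc: string, fsSrc: string): WebGLProgram {
        const compile = (type: number, src: string): WebGLShader => {
          const shader = gl.createShader(type)!;
          gl.shaderSource(shader, src);
          gl.compileShader(shader);
          if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
            throw new Error(gl.getShaderInfoLog(shader) ?? 'shader compile failed');
          }
          return shader;
        };
        const program = gl.createProgram()!;
        gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSrc));
        gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSrc));
        gl.linkProgram(program);
        if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
          throw new Error(gl.getProgramInfoLog(program) ?? 'program link failed');
        }
        return program;
      }

      // Typical buffer/uniform binding (hypothetical attribute/uniform names):
      function uploadAndBind(gl: WebGL2RenderingContext, program: WebGLProgram,
                             voxelPositions: Float32Array, viewProj: Float32Array): void {
        const vbo = gl.createBuffer()!;
        gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
        gl.bufferData(gl.ARRAY_BUFFER, voxelPositions, gl.STATIC_DRAW);
        const loc = gl.getAttribLocation(program, 'a_position');
        gl.enableVertexAttribArray(loc);
        gl.vertexAttribPointer(loc, 3, gl.FLOAT, false, 0, 0);
        gl.useProgram(program);
        gl.uniformMatrix4fv(gl.getUniformLocation(program, 'u_viewProj'), false, viewProj);
      }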

    • jchw 7 days ago
      Very interesting. I was surprised to get a relatively decent 40-60 FPS screwing around on the demo for a few minutes on my Pixel 9 in Fennec F-Droid. I am even more surprised that it performed better than an iPhone in any conditions, but who knows what exactly is playing into that.

      Regarding AI coding assistance... Yeah, I imagine it would be great for someone who wants to start hundreds of small projects, because it's pretty cool what you can do in a few minutes, but it really doesn't take many days before you're much better off doing most things yourself. I guess this state of affairs is relatively good for our job security, but a real bummer from the "I want to harness the power of hundreds of digital junior devs to do my bidding" perspective. Still something I'd consider solidly useful and not just a load of hype though.

      BTW, it seems a bit senseless to draw frames when there's no interaction, since the geometry is static. Maybe it wouldn't be a bad idea to stop drawing when there are no active interactions.
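
      Something like this on-demand pattern would do it (just a sketch; drawScene stands in for whatever the viewer currently calls each frame):

        const canvas = document.querySelector('canvas')!;
        let framePending = false;

        function invalidate(): void {
          // Coalesce redraw requests: at most one draw per display refresh,
          // and nothing at all while the camera is idle.
          if (framePending) return;
          framePending = true;
          requestAnimationFrame(() => {
            framePending = false;
            drawScene(); // hypothetical: the viewer's existing per-frame draw
          });
        }

        // Only events that can change the image schedule a frame.
        canvas.addEventListener('pointermove', invalidate);
        canvas.addEventListener('wheel', invalidate);
        window.addEventListener('resize', invalidate);
        invalidate(); // draw the first frame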

      • samuelm2 7 days ago
        I was also surprised that my iPhone didn't do too well with it. Regular Gaussian Splatting runs pretty well on my iPhone, almost as well as my laptop (of course, my phone screen is a lot smaller, so it's rendering a lot fewer pixels). The fragment shader for this is very heavy compared to Gaussian Splatting, so maybe it has to do with that.

        And I agree re: constantly drawing frames vs only drawing on change, that's a good optimization to make.

    • corysama 9 days ago
      Great work! They'd love to see this over in https://old.reddit.com/r/GaussianSplatting/ :)

      In your observation, is this technique restricted to static geometry? Or, is there a clear path to rendering a skinned animated character using SVRaster?

      • samuelm2 8 days ago
        Right now, it's pretty restricted to relatively static geometry. You can do simple transforms like scaling, rotating, etc. to groups of voxels, so maybe down the line you'd be able to animate them!
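
        For example, a rigid transform of a group is just a 4x4 matrix applied to the voxel centers before projection; a sketch, assuming centers are stored as a flat Float32Array (made-up layout, not the actual viewer code):

          // m is a column-major 4x4 matrix; centers = [x0, y0, z0, x1, y1, z1, ...]
          function transformGroup(centers: Float32Array, m: Float32Array): void {
            for (let i = 0; i < centers.length; i += 3) {
              const [x, y, z] = [centers[i], centers[i + 1], centers[i + 2]];
              centers[i]     = m[0] * x + m[4] * y + m[8]  * z + m[12];
              centers[i + 1] = m[1] * x + m[5] * y + m[9]  * z + m[13];
              centers[i + 2] = m[2] * x + m[6] * y + m[10] * z + m[14];
            }
          }
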
  • lawlessone 7 days ago
    Bit of a related shower thought here... but solutions that require Nvidia's high-end hardware are always going to be the solutions Nvidia promotes, right?

    If, hypothetically, someone figured out some magical algorithm to do ray tracing at RTX levels of performance on a CPU, it would severely hurt them, right? (Hurt them in terms of the GPU market; that's probably a non-issue now that AI is most of their market.)

    • kevingadd 7 days ago
      There are plenty of graphics researchers not in NVIDIA's pocket cooking up stuff that doesn't require vendor-specific features, so I'm not worried that graphics research is being suppressed, if that's your theory.

      Intel and AMD are both big players with a vested interest in promoting the capability of CPUs and of non-NVIDIA GPUs, since they sell both. They're big and well-capitalized, so if they wanted to, they could be operating big graphics research teams. (Whether they are is unclear to me, and it's obviously not a 'snap your fingers and you have a big graphics research team' situation, but they have the resources.)

      In some cases if you see CUDA being used for a demo or a research project it's just because the alternative stacks kinda suck, not because only NVIDIA hardware is capable of doing the thing. Researchers aren't necessarily concerned with shipping on 99% of consumer machines, so they can reach for the most convenient approach even if it's vendor-locked.

      I won't be surprised if we see some researchers start targeting Apple's platform-locked Metal API in the future since they have compelling unified memory offerings with capacities exceeding everybody else's.

    • LegionMammal978 7 days ago
      Easy, games will just push for higher resolutions/framerates/SFX layers until it can no longer be done on a CPU. What Andy giveth, Bill taketh away.
    • thfuran 7 days ago
      Maybe then the GPUs could finally reach the ray counts needed to avoid all the temporal smearing and AI interpolation hacks.
  • carlosdp 7 days ago
    Awesome! I'm super impressed with SVRaster, glad to see others already playing with it too.
  • Reubend 7 days ago
    Cool project! Is there a test scene we can see with much higher resolution? The current one looks super artificial due to the rather chunky voxels.