4 points by matroid 8 hours ago | 2 comments
  • vunderba 7 hours ago
    Great stuff. I especially like the pumpkin example, where you can basically place the light inside the pumpkin and have it illuminate the interior. It’s a really nice effect, and very well done.

    I didn't see an example but do you have any idea how well this approach would work with pixel-art / video game imagery?

    I’d love to see it applied to something like a Myst-style game / point and click adventure game to use the dynamic light to illuminate different parts of the world.

  • matroid 7 hours ago
      That's a good question!

      We made the training data for this using a physically based renderer (i.e., Cycles in Blender).

      For non-photorealistic art, there are far too many ways in which artists communicate lighting. Even in sketches (which are different again from pixel-art / video game imagery), there are at least two conventions: (a) solid shading and (b) hatching.

      I'm not sure how to extend our methodology (generate synthetic data in 3D modeling software, then train a model on it) to these settings.
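
      For concreteness, the scene-randomization step of such a pipeline could be sketched like this (a hypothetical, stripped-down illustration: every name, range, and parameter here is an assumption, and the actual Cycles render call is stubbed out with a placeholder):

      ```python
      import math
      import random

      def sample_light_config(rng, radius=4.0):
          """Sample one random point-light placement on a sphere around
          the subject, plus a random intensity. (Illustrative only; a
          real pipeline would randomize many more scene parameters.)"""
          # Uniformly random direction on the unit sphere
          theta = rng.uniform(0.0, 2.0 * math.pi)
          z = rng.uniform(-1.0, 1.0)
          r = math.sqrt(1.0 - z * z)
          position = (radius * r * math.cos(theta),
                      radius * r * math.sin(theta),
                      radius * z)
          # Arbitrary power range in watts (an assumption, not from the post)
          power = rng.uniform(100.0, 1000.0)
          return {"position": position, "power": power}

      def make_dataset(n, seed=0):
          """Generate n randomized lighting configs; in a real pipeline
          each config would be handed to Blender/Cycles to render a
          (scene, lighting) training pair."""
          rng = random.Random(seed)
          return [sample_light_config(rng) for _ in range(n)]
      ```

      The model then only ever sees rendered image pairs, which is exactly why a style (pixel art, hatching) absent from the renders is hard to cover.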

      Ooh, I didn't know about Myst-style games. I'll definitely check them out!

      • vunderba 7 hours ago
        That makes sense. One way to do it might be to use generative AI with a ControlNet to create a more photorealistic 3D version of each pixel-art environment, run those images through your approach, and then somehow separate out the lighting information and mask it back over the original pixel art?

        But I’m not sure if that’s actually feasible or how it would work technically. :)

        • matroid 5 hours ago
          It would be a good experiment these days to see how well Nano Banana or OpenAI's latest image-to-image models would do at transferring lighting while preserving the original style domain, like you suggest.

          Thanks for the thoughts :)
