63 points by bntr a day ago | 12 comments
  • meta-meta a day ago
    Nice! My partner predicted this in some album art she did for a friend. https://badbraids.bandcamp.com/album/supreme-parallel
  • talkingtab a day ago
    While the demo is great, and the 4D stuff is very cool, for me the amazing thing is the code to do this. Three.js opens a door to using WebGL & WebGPU, and shaders open yet more doors.
  • Very cool. Have you tried applying it to a cube sphere/are the results contiguous? I'd be interested in incorporating it into a hybrid planetary science/storymapping project I'm working on.
    • bntr a day ago
      I'm not entirely sure I understand the question. I doubt that any kind of sphere other than the abstract mathematical one (X²+Y²+...= 1) would be suitable for transformations like stereographic projection.
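      To make that concrete, here's a minimal sketch (Python, illustrative names, not the project's actual shader code) of the stereographic projection between the unit 3-sphere X² + Y² + Z² + W² = 1 and ordinary 3D space:

```python
def stereographic(x, y, z, w, eps=1e-12):
    """Project a point of the unit 3-sphere (x²+y²+z²+w² = 1)
    from the pole (0, 0, 0, 1) onto the 3D hyperplane w = 0."""
    if abs(1.0 - w) < eps:
        raise ValueError("projection is undefined at the pole itself")
    s = 1.0 / (1.0 - w)
    return (x * s, y * s, z * s)

def inverse_stereographic(X, Y, Z):
    """Lift a 3D point back onto the unit 3-sphere."""
    d = X * X + Y * Y + Z * Z
    s = 1.0 / (d + 1.0)
    return (2 * X * s, 2 * Y * s, 2 * Z * s, (d - 1.0) * s)

# Round trip: lift a 3D point to the sphere, then project it back.
p = (0.3, -0.5, 2.0)
q = inverse_stereographic(*p)
assert abs(sum(c * c for c in q) - 1.0) < 1e-12   # q lies on the sphere
assert all(abs(a - b) < 1e-9 for a, b in zip(p, stereographic(*q)))
```

      The two maps being mutual inverses is what makes the detour through 4D lossless.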
  • jimmySixDOF a day ago
    For anyone interested in exploring: the Godot dimensions team is working on a JSON spec for 4D shapes for rendering and physics -- it's called G4MF (Good 4D Model Format), loosely based on Khronos glTF. Still a work in progress, but there is playground editor support for x/y/z/w.

    https://github.com/godot-dimensions/g4mf

    • bntr 18 hours ago
      Thanks! I didn’t know about G4MF — looks cool. What I’ve missed more often, though, is 5×5 matrices for real 4D transformations.
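      For context on the 5×5 point: just as 3D affine transforms are 4×4 matrices acting on (x, y, z, 1), a 4D translation needs a 5×5 homogeneous matrix acting on (x, y, z, w, 1). A toy sketch (illustrative, not any library's actual API):

```python
def translate4(v, t):
    """Apply a 4D translation via a 5x5 homogeneous matrix."""
    # 5x5 identity with the translation vector in the last column.
    M = [[1.0 if i == j else 0.0 for j in range(5)] for i in range(5)]
    for i in range(4):
        M[i][4] = t[i]
    h = list(v) + [1.0]                       # homogeneous 5-vector
    out = [sum(M[i][j] * h[j] for j in range(5)) for i in range(5)]
    return tuple(out[:4])

assert translate4((1, 2, 3, 4), (10, 0, 0, -1)) == (11.0, 2.0, 3.0, 3.0)
```

      The same 5×5 form also composes 4D rotations (the upper-left 4×4 block) with translations, which plain 4×4 matrices can't do in 4D.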
  • tasoeur a day ago
    I wonder if there’s something visually interesting in exploring this shader immersively (VR). Could be worth prototyping on my little app :-) (https://shader.vision).
    • bntr a day ago
      VR has come up a couple of times in response to my experiments - maybe it’s time I give it a try.

      I once tried a cross-eye 4D view: https://github.com/bntre/40-js

  • ivanjermakov a day ago
    Because the transformation happens in the vertex shader, curvature would not show up on low-poly objects. For this reason, camera distortion is usually implemented in clip space (only after the undistorted frame is ready).
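    A quick numeric illustration of that point (a toy radial distortion in Python, not an actual shader): the rasterizer interpolates linearly between an edge's two transformed endpoints, which doesn't match transforming the edge's midpoint directly.

```python
def distort(p, k=0.5):
    """Toy barrel-style radial distortion, applied per vertex."""
    x, y = p
    s = 1.0 + k * (x * x + y * y)
    return (x * s, y * s)

a, b = (-1.0, 1.0), (1.0, 1.0)            # one straight low-poly edge
mid_true = distort((0.0, 1.0))            # distort the true midpoint
da, db = distort(a), distort(b)
mid_interp = ((da[0] + db[0]) / 2, (da[1] + db[1]) / 2)

assert mid_true == (0.0, 1.5)       # where the curved edge should pass
assert mid_interp == (0.0, 2.0)     # where linear interpolation puts it
```

    The 0.5 gap between the two midpoints is exactly the curvature a low-poly mesh loses.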
    • bntr a day ago
      Do you mean applying geometric distortion in the fragment shader? I'm not quite sure how that would work (I'm not so familiar with shaders at that level).

      I've heard of true 3D bump mapping being done in fragment shaders (not just lighting), but I can't really imagine how more radical geometric distortion could be implemented there.

      • ivanjermakov a day ago
        Fragment shader distortion suffers from another issue: heavy distortions require higher resolutions and (depending on the distortion type) a higher field of view. Even more radical distortions would require cubemaps of undistorted frames to handle fragments from behind the camera.

        This answer suggests some other ideas on implementing lens distortion: https://stackoverflow.com/a/44492971

        • bntr 16 hours ago
          Thanks! The cube mapping idea is really interesting — I didn’t know about that approach. However, I doubt it would help in my case, where the distortion is strong enough to flip the depth order of objects.

          Maybe these methods could be combined somehow, but it seems simpler to use subdivision (as also mentioned in that thread) — perhaps selectively, for objects near the periphery where distortion is strongest.
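          One way to make the selective-subdivision idea concrete (a sketch under assumptions, not the project's code): recursively split an edge only where linearly interpolating its distorted endpoints strays too far from distorting its true midpoint, so extra vertices go where distortion is strongest.

```python
def subdivide(a, b, distort, tol, depth=0, max_depth=8):
    """Return a polyline approximating distort() over segment a-b,
    splitting only where linear interpolation is too inaccurate."""
    da, db = distort(a), distort(b)
    m = tuple((p + q) / 2 for p, q in zip(a, b))     # true midpoint
    dm = distort(m)
    lin = tuple((p + q) / 2 for p, q in zip(da, db)) # interpolated one
    err = sum((p - q) ** 2 for p, q in zip(dm, lin)) ** 0.5
    if err < tol or depth >= max_depth:
        return [da, db]
    left = subdivide(a, m, distort, tol, depth + 1, max_depth)
    right = subdivide(m, b, distort, tol, depth + 1, max_depth)
    return left + right[1:]

# Toy radial distortion: strong near the periphery, mild at the center.
d = lambda p: tuple(c * (1 + 0.5 * sum(x * x for x in p)) for c in p)

central = subdivide((-0.1, 0.1), (0.1, 0.1), d, tol=1e-3)
peripheral = subdivide((-1.0, 1.0), (1.0, 1.0), d, tol=1e-3)
assert len(peripheral) > len(central)   # splits go where distortion is strong
```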

  • tetris11 a day ago
    I like it as a curiosity, but it only makes sense to me if I think of it as a 2D scene mapped to a 3D sphere.

    Is a 4D sphere the upper limit of this method, or can you project, say, a 3D scene onto a 5D sphere? (e.g. a 1D line onto a 3D sphere, by analogy)

    • bntr a day ago
      The 4D sphere makes sense here because its surface is 3-dimensional. That means I can project the model from the 4D sphere back to 3D in a bijective (one-to-one) way.

      You could project from 5D down to 3D, but the dimensional mismatch breaks the bijection - you'd lose information or overlap points. However, a 4D → 5D → 4D projection would preserve structure, though it gets harder to visualize.

      I chose 3D ↔ 4D specifically because curved 3D space is much more intuitive and has direct physical meaning - it corresponds to positively curved space (see e.g. https://en.wikipedia.org/wiki/Shape_of_the_universe#Universe... )

  • fallinditch a day ago
    Good job, a lovely idea! It reminds me of AI morphing animation, I wonder if these techniques can be combined...
  • Sourabhsss1 a day ago
    This is interesting...
  • Duanemclemore a day ago
    This is rad. The game is especially cool. Congrats, OP!

    This is the same math as this old program called Jenn3d[0] which I played around with almost twenty years ago. (Amazingly, the site is still online!) The crazies who built it also built it to play Go in 3d projective space. I was never able to play Go with it, but I've been into projective geometries since.

    OP - if you want to try something else cool with 4d to 3d projective geometries, here's an idea I ran across working with 3d to 2d.

    I make a tool for generating continuous groupings of repetitive objects in architectural computation. [1] When faced with trying to view the inside of lattices containing sets of solids which tile space continuously, I tried a few different methods (one unsuccessful but cool-looking one here [2]).

    So when I created the sphere upon which to project the objects in the lattice, rather than just project the edges I made concentric spherical section planes and projected the intersection of those with the objects. [3] By using objects parallel to the projection plane to cut sections I was able to generate spacings between the final generated section lines that mapped how oblique the surface being cut was from the ray projecting from the centerpoint of the sphere to its surface.

    Sorry OP, that's a long description. TL;DR - instead of projecting 3d mesh edges to a 4d sphere then back down to 3d space, what if you tried describing the meshes as the intersection of their 3d geometry with 4d hyperspheres parallel to the projection hypersphere? It would look more abstract, but I bet it would look cool as heck, especially navigating in 3d projective space!

    [0] https://jenn3d.org/ [1] https://www.food4rhino.com/en/app/horta [2] https://vimeo.com/698774461 [3] https://vimeo.com/698774461

    p.s. Also, if any actual geometers are reading this - I'd love to co-author a math paper that more rigorously considers what I explored / demonstrated with the drawings above. I have a whole set of them methodically stepping through the process, and could generate more at will. I also have a paper about it I can send on request (or if you can hunt down the Design Communication Association Conference Proceedings 2022).

    • bntr a day ago
      Thanks for the kind words and for sharing your thoughts! I actually remember Jenn3d as well — the animations always reminded me of some kind of shimmering foam.

      Unfortunately, I couldn’t quite grasp the method you’re describing — perhaps I’m missing some illustrations. (By the way, links [2] and [3] seem to point to the same video, and I’m not sure they match your description.)

      It sounds like you’re suggesting a way to slice objects into almost-repetitive sections, so the brain can reconstruct a fuller picture — a bit like how compound eyes work in insects.

      • Duanemclemore 19 hours ago
        That's so strange. For some reason it gave me the link for a completely different video...

        Anyway - here's

        [2] https://vimeo.com/757057720

        and [3] https://vimeo.com/757062988

        Yeah, jenn was really rad. It's red meat to me when anyone's working on these kinds of projections.

        Since without the proper explanation the whole "concentric spherical section planes" thing is unclear (and actually, they wouldn't be section "planes" in the first place), here's the paper I was referencing:

        https://www.academia.edu/129490488/Visualizing_Space_Group_H...

        (see pg. 3 for a visual explanation that I hope helps.)

        I intersected the objects in the lattice with spheres to create lines, then projected those to the outer sphere and down to the 2d plane. In the same way, you could use concentric hyperspheres to intersect a 3d object serially, then project those intersections back to 3d space...
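        If it helps anyone picture the spacing effect, here's a toy version with a plane standing in for the sectioned object (Python, illustrative, not the actual tool): a plane at distance d from the center meets a concentric sphere of radius r in a circle whose angular position, seen from the center, is arccos(d/r), and the gaps between consecutive section circles shrink as the cut grows more oblique to the central ray.

```python
import math

def section_angle(d, r):
    """Angle (from the plane's normal, seen from the center) of the
    circle where a plane at distance d cuts a sphere of radius r."""
    return math.acos(d / r)

# Section a plane at distance 1 with concentric spheres r = 1.1 .. 2.0.
radii = [1.0 + 0.1 * i for i in range(1, 11)]
angles = [section_angle(1.0, r) for r in radii]

# The spacing between consecutive section circles encodes obliquity:
# equal steps in r give steadily shrinking angular gaps.
gaps = [b - a for a, b in zip(angles, angles[1:])]
assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))
```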

        • bntr 17 hours ago
          Thanks — your method makes more sense now. I’m not very familiar with architectural design problems, so I didn’t fully grasp how this technique helps build a more complete understanding of the internal structure of composed objects. The final image reminds me of a kind of holographic source.

          When I think in that direction, it seems more appropriate not to add spatial dimensions (like 4D), but to add animation to your method (shifting or rotating the original composed object). That might help an untrained viewer better understand the usefulness of the final projection.

  • bobsmooth a day ago
    God I wish I could understand 4D geometry.
  • qwertox a day ago
    These projections, how do they make sense?

    I can project a 3D item onto a 2D plane, but only observe it because I'm outside of that 2D plane. This is like expecting the 2D plane to see itself and deduce 3D-dimensionality from what it sees. Like a stickman. It would only be able to raycast from its eye in a circle. It could do so from multiple points on the plane, but still, how would it know that it is looking at the projection of a sphere?

    • bntr a day ago
      The surface of a 4D sphere (a 3-sphere) is itself 3-dimensional (just like the surface of an ordinary 3D ball is 2D). So when I use the hypersphere in intermediate computations, I’m not actually adding an extra dimension to the world.

      What this transformation does give me is a way to imagine a closed, finite 3D space, where any path you follow eventually loops back to where you started (like a stickman walking on the surface of a globe). Whether or not that space “really” needs a 4th spatial dimension is less important than the intuition it gives: this curved embedding helps us visualize what a positively curved 3D universe might feel like from the inside.
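      That "loops back" property is easy to check numerically (a sketch, not the demo's code): walking at unit speed along a great circle of the unit 3-sphere brings you back to your starting point after distance 2π, passing the antipodal point at π.

```python
import math

def walk(start, direction, distance):
    """Move along the great circle through `start` in `direction`
    (both unit 4-vectors, orthogonal to each other)."""
    c, s = math.cos(distance), math.sin(distance)
    return tuple(c * p + s * d for p, d in zip(start, direction))

start = (1.0, 0.0, 0.0, 0.0)
direction = (0.0, 1.0, 0.0, 0.0)

halfway = walk(start, direction, math.pi)      # the antipodal point
full = walk(start, direction, 2 * math.pi)     # back home
assert abs(halfway[0] + 1.0) < 1e-9
assert all(abs(a - b) < 1e-9 for a, b in zip(full, start))
```

      No path ever reaches an edge: the space is finite yet has no boundary, which is the intuition the demo tries to convey from the inside.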