Possibly the subdivisions along the edges are done uniformly in the target space and mapped to uniform subdivisions in the source space, but that's not correct: under perspective, equally spaced steps on screen correspond to unequally spaced steps in the texture.
Edit: comparison of the article's and the correct perspective transform:
Completely backwards. Math is much more difficult than programming, and LLMs still can't consistently add numbers correctly, last I checked. What a strange attitude to take.
It’s fascinating how plausible it looks at a glance while being so glaringly wrong once you look at it more closely.
Edit: I think there’s a lot of confusion because the cube’s edges (the black lines) do not incorporate the perspective transform along their length. The texture is likely correct given the focal length; it’s the misleadingly straight black edges that threw me off, since they are not rendered the same way as the texture. My bad, the technique is valid.
I opened an issue ticket on the repository with a simple suggested fix and a comparison image.
More reading: https://retrocomputing.stackexchange.com/questions/5019/why-...
This is because perspective-correct texture mapping requires a divide per pixel, which was too costly on the CPUs of the time, so they skipped it to get acceptable performance.
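To make the cost concrete, here is a minimal sketch (all names are my own, not from the article) of perspective-correct interpolation across one scanline span. The screen-linear quantities are u/z, v/z, and 1/z, so recovering (u, v) takes a divide per pixel:

```ts
interface SpanEnd { u: number; v: number; z: number; }

// Hypothetical span rasterizer: only the interpolation math matters here.
function drawSpan(left: SpanEnd, right: SpanEnd, width: number): void {
  for (let x = 0; x < width; x++) {
    const t = width > 1 ? x / (width - 1) : 0;
    // These three quantities are linear in screen space...
    const invZ   = (1 - t) / left.z + t / right.z;
    const uOverZ = (1 - t) * (left.u / left.z) + t * (right.u / right.z);
    const vOverZ = (1 - t) * (left.v / left.z) + t * (right.v / right.z);
    // ...but u and v themselves are not: hence the per-pixel divide
    // that affine texture mapping skips.
    const u = uOverZ / invZ;
    const v = vOverZ / invZ;
    // a real rasterizer would sample the texture at (u, v) here
  }
}
```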
It's funny that, in today's CPUs, floating point divide is so much faster than integer divide.
A friend was writing a flight simulator from scratch (using Foley and van Dam as reference for all the math involved). A classic perspective problem might be a runway.
Imagine a regularly spaced dashed line down the runway. If you get your 3D renderer to the stage that you can texture quads with a bitmap, it might seem like a simple thing to have a large rectangle for the runway, a bitmap with a dashed line down the center for the texture.
But the texture mapping will not be perspective-correct (well, not without a lot of complicated math).
Foley and van Dam say: break the runway into a dozen or so "short" runways laid end to end (subdivide). The bitmap texture for each is just a single short stripe. Now, because you have a bunch of these quads end to end, it is as if there is one longer runway with a series of dashes. And while each individual piece of the runway (with its single stripe) is not in itself truly perspective-correct, each quad nonetheless accounts for perspective as it gets farther from you: it is smaller and more foreshortened.
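Roughly, as a sketch (the projection and all names are mine, not lifted from Foley and van Dam): subdivide in world space, project each segment's corners, and affine-texture each piece:

```ts
type Vec3 = { x: number; y: number; z: number };
type Vec2 = { x: number; y: number };

// simple pinhole projection; f is an assumed focal length
function project(p: Vec3, f = 300): Vec2 {
  return { x: (f * p.x) / p.z, y: (f * p.y) / p.z };
}

// Split the runway lengthwise in *world* space; each short quad is only an
// affine approximation, but the pieces shrink correctly with distance.
function subdivideRunway(near: number, far: number, halfWidth: number, n: number): Vec2[][] {
  const quads: Vec2[][] = [];
  for (let i = 0; i < n; i++) {
    const z0 = near + ((far - near) * i) / n;
    const z1 = near + ((far - near) * (i + 1)) / n;
    // four corners of one short runway segment, on the ground plane y = -1
    quads.push([
      project({ x: -halfWidth, y: -1, z: z0 }),
      project({ x:  halfWidth, y: -1, z: z0 }),
      project({ x:  halfWidth, y: -1, z: z1 }),
      project({ x: -halfWidth, y: -1, z: z1 }),
    ]);
  }
  return quads; // each quad gets the same one-stripe texture, affine-mapped
}
```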
It was avoided in the Foley and van Dam days because it requires a division per rasterized pixel, which was very slow in the late 80s.
Meanwhile... drawing 512 subdivisions for a single textured quad.
It's a cute trick, certainly, but ask this thing to draw anything more than a couple thousand elements and I bet it's going to roll over very quickly.
Just use WebGL, where perspective-correct texture mapping is built into the hardware.
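For illustration, a minimal GLSL ES 3.0 shader pair (assuming the usual WebGL2 boilerplate of context creation, compilation, and buffers around it); the point is that varyings like v_texcoord are interpolated perspective-correctly by the GPU, which does the per-fragment divide for you:

```ts
const vs = `#version 300 es
in vec4 a_position;   // clip-space position; w carries the depth
in vec2 a_texcoord;
out vec2 v_texcoord;  // interpolated perspective-correctly by the hardware
void main() {
  gl_Position = a_position;
  v_texcoord = a_texcoord;
}`;

const fs = `#version 300 es
precision highp float;
in vec2 v_texcoord;
uniform sampler2D u_texture;
out vec4 outColor;
void main() {
  // the hardware already did the per-fragment divide for v_texcoord
  outColor = texture(u_texture, v_texcoord);
}`;
```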
I was on the original SVG team at Adobe back in '00 and built some of the first public demos that used the technology. This kind of 3d work was some of the first stuff I tried to do, and I found it similarly limited because of the restricted set of available transforms. I had some workarounds of my own.
One demo had a 3d stack of floors in a building for a map. It used an isometric perspective (one where parallel lines never converge) and worked pretty well. That is pretty easy and can be accomplished with rotation and scaling transforms.
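Something like the following, as a hypothetical sketch (the element id is made up); since parallel lines never converge, a single affine transform is enough:

```ts
// classic 2:1 "isometric" look: rotate 45°, then squash vertically by half
const floor = document.getElementById("floor")!; // hypothetical floor group
floor.setAttribute("transform", "scale(1 0.5) rotate(45)");
```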
The other was a 3d molecule viewer where you could click and drag around to view the structure. This one basically used SVG as a canvas with x and y coordinates for drawing. All of the 3d movement was done in JavaScript, computing x and y coordinates and updating shapes in the SVG DOM. Styles were used to handle single / double / triple bonds, and separate groups were used to layer everything for legibility.
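A hedged sketch of that pattern (all names are mine): rotate the atom positions in JavaScript, drop the z coordinate, and write the 2D coordinates back into existing SVG circles:

```ts
type Atom = { x: number; y: number; z: number; node: SVGCircleElement };

function renderMolecule(atoms: Atom[], yaw: number, pitch: number): void {
  const cy = Math.cos(yaw),   sy = Math.sin(yaw);
  const cp = Math.cos(pitch), sp = Math.sin(pitch);
  for (const a of atoms) {
    // rotate around the Y axis, then the X axis
    const x1 = cy * a.x + sy * a.z;
    const z1 = -sy * a.x + cy * a.z;
    const y1 = cp * a.y - sp * z1;
    // orthographic "SVG as canvas": just set the 2D coordinates
    a.node.setAttribute("cx", String(200 + x1));
    a.node.setAttribute("cy", String(200 - y1));
  }
}
```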
"Everything else" would be a pluggable execution runtime that are distributed as browser plugins: [WASM Engine, JVM engine, SPIR-V Engine, BEAM Engine, etc] with SVG as the only display tech. The last thing we'd define is an interrupt and event model for system and user interactions.
I later optimized the hot path and it is significantly faster (still miles behind WebGL/WebGPU, obviously). You can try it yourself if you scroll alll the way to the veeeerrrry bottom here [3].
[1]: https://github.com/calmm-js/partial.lenses
[2]: https://ramdajs.com/
[3]: https://static.laszlokorte.de/svatom/
One possibly uncalled-for piece of feedback: is that USB-C connection finished, and does it comply with the various detection-resistor requirements for the CCx pins? It seemed very bare and empty; I was expecting some Rd network so the upstream host can identify the device. Sorry if I'm missing the obvious, I'm not an electronics engineer.
See [1] for instance.
[1]: https://medium.com/@leung.benson/how-to-design-a-proper-usb-...
[2]: https://webgl2fundamentals.org/webgl/lessons/resources/webgl...
For a 400x400 SVG with 6 surfaces and 64 subdivisions, your file size is only 10x smaller than an uncompressed bitmap. Your SVG should scale linearly with the number of objects and stay constant with resolution, while an image would scale with the resolution (quite favorably if compressed) and stay constant with the number of objects. I'd be interested to know the size of the example at the top of the article.
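(For scale, assuming an uncompressed 32-bit RGBA bitmap: 400 × 400 × 4 bytes is 640,000 bytes, so "only 10x smaller" would put the SVG around 64 kB.)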
Also you already have the math to transform points!
> I don’t have to rasterize my SVGs that represent the top of my board.
Ahhhhhh. This clears it all up!
SVG elements are DOM elements, after all, and too many DOM elements will cause browser performance issues. I learned this the hard way: after adding a few hundred SVG <path> elements plus a few hundred <div> elements in a React-based interactive web application, I ended up needing to move to a canvas solution instead, which works amazingly well.
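For anyone hitting the same wall, a rough sketch of the kind of canvas replacement I mean (names are mine): the win is immediate-mode drawing instead of hundreds of retained DOM nodes, and Path2D even accepts SVG path data to ease the migration:

```ts
// One <canvas> and one draw call per shape; no per-shape DOM node
// survives the frame for the browser to manage.
function drawShapes(canvas: HTMLCanvasElement, paths: Path2D[]): void {
  const ctx = canvas.getContext("2d");
  if (!ctx) return;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  for (const p of paths) {
    ctx.stroke(p);
  }
}

// Path2D takes SVG path data directly, easing the move from <path d="...">:
const paths = ["M10 10 H 90 V 90 H 10 Z"].map((d) => new Path2D(d));
```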
I really hope you have all that figured out, because I don't think it's going to work well using SVG to render complex circuit boards. But maybe your product is only working with very simple circuit boards?
(Firefox’s implementation does still suffer from one long-standing bug, which means you want to make sure your viewBox unit is larger than one device pixel; that’s normally not hard to achieve. https://oreillymedia.github.io/Using_SVG/extras/ch11-3d.html... shows what it’s about. I don’t really understand why that problem isn’t fixed yet; what I presume is the underlying issue affects some HTML constructs too when you scale things up, and surely it’s not that rare? I know I found one such problem a decade ago (and, being in HTML, it couldn’t be worked around the way it can in SVG). They’ve improved things a bit, but not entirely.)
Sadly, no one else seemed all that interested in making 3D transformations work properly in SVG content.
Why did you feel you had to do this with SVG?