> If you are in possession of any of:
> NVIDIA RIVA 128 Programmers’ Reference Manual
> NVIDIA RIVA 128 Customer Evaluation Kit (we have the NV1 CEK version 1.22)
> NVIDIA RIVA 128 Turnkey Manufacturing Package
> Source code (drivers, VBIOS, etc) related to the NVIDIA RIVA 128
> Any similar documents, excluding the well-known datasheet, with technical information about a GPU going by the name “NV3”, “STG-3000”, “RIVA 128”, “NV3T”, “RIVA 128 Turbo” (an early name for the ZX) or “RIVA 128 ZX”
> Any document, code, or materials relating to a graphics card by NVIDIA, in association with Sega, Helios Semiconductor or SGS-Thomson (now STMicroelectronics) codenamed “Mutara”, “Mutara V08”, or “NV2”, or relating to a cancelled Sega console codenamed “V08”
> Any documentation relating to RIVA TNT
> Any NVIDIA SDK version that is not 0.81 or 0.83
I feel this. A lot of information has been lost.
Or it was never available in the first place. I had an Nvidia TNT2 M64. It had terrible Linux support because of missing documentation.
"Weitek Oral History Panel " https://archive.computerhistory.org/resources/access/text/20...
>Roach: ... we did make some efforts to get a VGA core. We actually licensed it from Unisys, who had it internally, so they didn't mind licensing it to a merchant chip vendor, and we did that, and I think Barry deserves credit for doing the best he could to make something out of all of this, and he gave it kind of one more generation, one and a half generations, but the fundamental approach was still, coming from a high cost core, trying to go down, whereas the other guys had it much better figured out from low cost to coming up.
The whole panel is depressing to read. They had two strong products, the x86 copro and the SUN design-win copro. The rest was losing money or not selling well. As soon as SUN moved on, the company started having serious problems. They were convinced Weitek was in high end graphics, but every Weitek P9000/9100 benchmark I can find puts them in the middle of the common PC vendor pack, losing to much cheaper chips/cards. The 5186/5286 VGA cores performed pretty terribly (8-bit video RAM access when combined with the P9000?), and so did the one integrated into the P9100, delivering half the performance of a low-end Trident PCI card (not liking Doom's 8-bit write pattern?). Stupid marketing gimmicks, feng shui, wasting money on bad calls like speech recognition, and chasing high-margin products in a market racing to commoditize everything. It looks like they might have lost their best people early on, with Edmund Sun starting C-Cube to go into video acceleration (sold for >$2 billion in 2000), and Chi-Shin Wang's 8x8 doing conferencing and still holding on with ~2000 employees and almost $1B in revenue.
I was very convinced at that time that 3dfx didn't have a good roadmap and Nvidia would prevail based on their professionalism and superior ability to design silicon.
Funny to read that in 2025 when all new PC GPUs still provide some form of VGA backwards compatibility with no plans to remove it.
Not sure if this is the same thing I had, but on my RIVA 128 the alpha blending wasn't properly implemented. I distinctly recall playing Unreal Tournament, and when I fired the rocket launcher there were big black squares with a smoke texture on them slowly rotating :D I couldn't see where I was shooting :D
It simply means that each newly rendered polygon's RGB values are added to the pixel values already in the frame buffer. It's good for lighting effects (although not a very realistic simulation of light's behavior unless your frame buffer is in linear light rather than gamma corrected, but that effectively requires floating-point RGB, which wasn't available on gaming cards until 2003).
Accumulating into an 8-bpc buffer will quickly artifact, so whether it's acceptable to do only the blend operation in linear light (while still accumulating into a gamma-corrected frame buffer) depends on how many passes you're rendering.
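To make the two options concrete, here's a minimal sketch of an additive blend for one 8-bpc channel: once directly in gamma space, and once by decoding to linear light, adding, and re-encoding back into the gamma-corrected buffer. The gamma of 2.2 and the helper names are assumptions for illustration, not anything from actual RIVA 128 drivers:

    #include <math.h>
    #include <stdint.h>

    /* Naive additive blend in gamma space: just add the stored and new
     * channel values and saturate.  Cheap, but physically wrong. */
    static uint8_t add_gamma(uint8_t dst, uint8_t src)
    {
        unsigned sum = (unsigned)dst + (unsigned)src;
        return (uint8_t)(sum > 255 ? 255 : sum);
    }

    /* Blend in linear light but keep the frame buffer gamma-encoded:
     * decode both values, add, re-encode.  Gamma 2.2 is an assumption. */
    static uint8_t add_linear(uint8_t dst, uint8_t src)
    {
        double d = pow(dst / 255.0, 2.2);
        double s = pow(src / 255.0, 2.2);
        double sum = d + s;
        if (sum > 1.0) sum = 1.0;
        return (uint8_t)(pow(sum, 1.0 / 2.2) * 255.0 + 0.5);
    }

With only 8 bits per channel, each decode/re-encode round trip in the second version quantizes the dark end, which is why the number of passes matters.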
It really would be cool if someone could get a sitdown with Jensen to reminisce about the Riva 128 period.
Who else bought NVDA back in '99?
Which makes it, IMHO, a weird target to try to emulate. NV2 was a real product and sold some units, but it's otherwise more or less forgotten. Like, if you were deciding on a system from the early 70's to research/emulate, would you pick the Data General Nova or the PDP-11?
In 1997, as long as a game started at all and you could more or less see what was going on, it was considered OK and probably not a scam. It was a time of "accelerators" like Matrox with no texturing support and S3 running slower than pure software rendering, with most vendors missing crucial blending modes and filtering.
vlaskcz has a great YouTube series called "Worst Game Graphics Cards", and it's pretty much every single vendor that isn't 3dfx up to 1998. https://www.youtube.com/watch?v=A0ljjj4LTDc&list=PLOeoPVvEK8...
Again, a year later the TNT changed things for NVIDIA (and the GeForce 256 a year after that changed everything). But the 128 was forgettable in hindsight.
I think this choice of terminology reflects both the era in which it was chosen (OOP was a huge trend back then), and the mindset of those who worked on the architecture (software-oriented). In contrast, Intel calls them commands/instructions/opcodes, as did the old 8514/A, arguably the one that started it all.
A specialized hardware accelerator for the manner by which Windows 95’s GDI (and its DIB Engine?) renders text.
Drawing text (from bitmap font data) is a very common 2D accelerator feature.
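As a rough sketch of what such an accelerator replaces, this is the software inner loop it speeds up: expanding a 1-bpp glyph bitmap into foreground/background pixels. The 8x8 glyph format and function name are made up for illustration, not Windows 95's actual GDI interface:

    #include <stdint.h>

    /* Expand an 8x8, 1-bit-per-pixel glyph into a 32-bpp frame buffer:
     * for each glyph bit, write either the foreground or the background
     * colour.  This is exactly the kind of loop a "text blit" engine
     * performs in hardware. */
    static void draw_glyph_8x8(uint32_t *fb, int pitch_px, int x, int y,
                               const uint8_t glyph[8], uint32_t fg, uint32_t bg)
    {
        for (int row = 0; row < 8; row++) {
            uint32_t *dst = fb + (y + row) * pitch_px + x;
            uint8_t bits = glyph[row];
            for (int col = 0; col < 8; col++)
                dst[col] = (bits & (0x80 >> col)) ? fg : bg;
        }
    }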
The Sega Saturn was released in November 1994, with one of the most mind-bogglingly bad hardware designs ever committed to a console. The thing had two CPUs and, unlike every console or 3d rendering machine to come later, actually rendered quads rather than tris. This is because you can more easily render lots of sprites for 2d games with quads (!!!). It was allegedly so difficult to program for that its complexity stymied emulation for years after its release. I also read that Sega (which is actually a US company) had some sort of weird dynamic with its Japanese division such that the Japanese side of the company would design and ship hardware without consultation from the American side. Allegedly, the creator of Sonic the Hedgehog (Yuji Naka; who is currently in prison for securities fraud) would not pass the 3d engine used to build "Sonic Team's" first 3d game to the American team that was supposed to develop the main 3d Sonic game for the Saturn, and the main programmer for the Sonic 3d game engine in the US (Ofer Alon, who went on to found the company behind the 3d modeling software ZBrush) could not get a 3d Sonic game to run on the Saturn because he tried writing the engine in the "slow" language C, rather than ol' fashioned assembly like Naka's team.
Whelp, that was my knowledge dump on 90s Sega!
Quads vs. triangles was kind of an open question as of 1993, when work on the NV1 started. Triangles are simpler, but quads allow for a neat trick where you can do forwards texture mapping and get a much better approximation of perspective-correct texturing than you can with triangles.
Nvidia went all in on this approach. Not only did the NV1 support 4-control-point quads, it could render 9-control-point quadratic patches. These quadratic patches not only provide a really good approximation of a perspective-correct textured quad, they can represent textured curved surfaces in 3d space.
Quad-based approximations are much cheaper to implement in hardware than proper perspective-correct texturing, which requires an expensive division operation per pixel. And the forwards texturing approach has the additional benefit of optimal memory access patterns for texture reads. The approach seemed like a win in that era of limited hardware.
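For anyone who hasn't run into it, here's a minimal software sketch of the forward-mapping idea (names and structure are mine, and real hardware also has to deal with holes and overdraw, which is skipped here): walk the texture in order and bilinearly blend the quad's four screen-space corners to decide where each texel lands, with no per-pixel divide and perfectly sequential texture reads:

    #include <stdint.h>

    typedef struct { float x, y; } vec2;

    /* Forward texture mapping of a quad: iterate over the source texels,
     * not the destination pixels.  Screen position is a bilinear blend of
     * the four corner positions, so texture reads are sequential and no
     * per-pixel divide is needed.  Hole filling / overdraw handling when
     * the quad is bigger or smaller than the texture is omitted. */
    static void forward_map_quad(uint32_t *fb, int fb_w, int fb_h,
                                 const uint32_t *tex, int tw, int th,
                                 vec2 p00, vec2 p10, vec2 p01, vec2 p11)
    {
        for (int v = 0; v < th; v++) {
            float fv = th > 1 ? (float)v / (th - 1) : 0.0f;
            for (int u = 0; u < tw; u++) {
                float fu = tw > 1 ? (float)u / (tw - 1) : 0.0f;
                float x = (1-fu)*(1-fv)*p00.x + fu*(1-fv)*p10.x
                        + (1-fu)*fv*p01.x     + fu*fv*p11.x;
                float y = (1-fu)*(1-fv)*p00.y + fu*(1-fv)*p10.y
                        + (1-fu)*fv*p01.y     + fu*fv*p11.y;
                int xi = (int)(x + 0.5f), yi = (int)(y + 0.5f);
                if (xi >= 0 && xi < fb_w && yi >= 0 && yi < fb_h)
                    fb[yi * fb_w + xi] = tex[v * tw + u];
            }
        }
    }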
The problem is that forwards texture mapping sucks for 3D artists. Artists are fine with quads, they still use them today, but forwards texture mapping is very inflexible. Inverse texture mapping lets you simply drape a texture across a model with UV coordinates. Forwards texture mapping requires careful planning to get good results; you essentially need to draw the texture first and build your model out of textured quads. Many Sega Saturn games rely on automated conversion from inverse-textured models.
By 1996, you could just add a divider to your hardware and get proper perspective-corrected inverse texturing, and there was no longer any reason to do proper quad support; just split quads into two triangles.
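For contrast, a sketch of the inverse (UV) approach that won out: interpolate u/w, v/w and 1/w linearly in screen space and recover (u, v) with one divide per pixel. This is the per-pixel division mentioned above; the names and setup are illustrative only:

    #include <stdint.h>

    /* Perspective-correct inverse texturing along one scanline.  The
     * caller has already computed texel-scaled u/w, v/w and 1/w at the
     * span start plus their per-pixel steps (these are affine in screen
     * space); the divide per pixel recovers the texture coordinate. */
    static void textured_span(uint32_t *dst, int count,
                              const uint32_t *tex, int tw, int th,
                              float u_over_w, float v_over_w, float one_over_w,
                              float du, float dv, float dw)
    {
        for (int i = 0; i < count; i++) {
            float w = 1.0f / one_over_w;   /* the expensive per-pixel divide */
            int u = (int)(u_over_w * w) % tw;
            int v = (int)(v_over_w * w) % th;
            if (u < 0) u += tw;
            if (v < 0) v += th;
            dst[i] = tex[v * tw + u];
            u_over_w += du;
            v_over_w += dv;
            one_over_w += dw;
        }
    }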
†I know that normalized device coordinates are still 3D, so "converting 3D to 2D" is technically wrong, but it conveys the right intuition.
I suspect that per-polygon mipmapping is actually calculated on the CPU. That would mean the actual hardware doesn't really implement mipmapping; it just implements switching between texture LODs on a per-triangle basis (probably that "M" coord in the 0x17 object).
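If that's the case, the driver's per-triangle LOD pick would amount to something like the sketch below: compare the triangle's area in texel space with its area in screen space and take half the log2 of the ratio (area scales with the square of the linear footprint). This is a guess at how such a driver might do it, not anything lifted from NVIDIA's code:

    #include <math.h>

    /* One plausible per-triangle mip level estimate, from the ratio of
     * texel-space area to screen-space area. */
    static int triangle_lod(float sx0, float sy0, float sx1, float sy1,
                            float sx2, float sy2,            /* screen coords */
                            float u0, float v0, float u1, float v1,
                            float u2, float v2,              /* texel coords  */
                            int max_lod)
    {
        float screen_area = fabsf((sx1 - sx0) * (sy2 - sy0) -
                                  (sx2 - sx0) * (sy1 - sy0)) * 0.5f;
        float texel_area  = fabsf((u1 - u0) * (v2 - v0) -
                                  (u2 - u0) * (v1 - v0)) * 0.5f;
        if (screen_area <= 0.0f || texel_area <= 0.0f)
            return 0;
        int lod = (int)floorf(0.5f * log2f(texel_area / screen_area));
        if (lod < 0) lod = 0;
        if (lod > max_lod) lod = max_lod;
        return lod;
    }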
Apparently later drivers from 1999 did actually implement per-pixel mipmapping, but I have a horrible feeling that's achieved by tessellating triangles along the change in LOD boundary, which must take quite a bit of CPU time.
In the modern era the 'compute shaders' part of that has become more dominant and lots of fixed function parts of the graphics pipeline have moved to software.
I'd love to see a more modern take on this if someone has run across it.
There's also https://envytools.readthedocs.io/en/latest/hw/intro.html
I remember some earlier titles that were locked to specific cards such as the Matrox ones and didn't support any other accelerators.