
Nvidia, the RTX 20 series, and using customers as testers

Nvidia has long been one of the few major innovators in consumer-grade, hardware-accelerated real-time graphics, which is traditionally and still mainly used for video games, but increasingly also for other (mostly number-crunching) applications, such as video rendering, AI training, cryptomining, and so on.

Nvidia's flagship line of products has for quite a long time been the GeForce GTX series of cards. However, the development of this line has been somewhat stale for a while now. Arguably the last major innovation in the line was the GTX 600 series, introduced in 2012, which brought a significant number of new hardware features (such as G-Sync, hardware support for video capture, and 4K resolution support, among others). Since then, the improvements have mostly consisted of making the cards faster and giving them more RAM.

Now Nvidia is trying to take the next huge leap forward with their new RTX 20 series of cards, and actually introduce a completely new feature that can potentially benefit video games significantly: hardware support for bona fide real-time raytracing.

Raytracing itself, as a rendering technique, has existed for a very long time (it's almost as old as computers capable of calculating and displaying images). Raytracing can produce stunningly photorealistic images, with extremely accurate reflections (including recursive reflections between surfaces), refractions and shadows, things that are very hard, if not outright impossible, to do with any sort of accuracy using scanline rendering techniques (which is what video games use for real-time rendering). Its problem has always been that it's way too slow for real-time rendering (meaning at least 30 or so images per second). Real-time raytracing has always been the holy grail of computer graphics. (CPUs are nowadays fast enough to raytrace relatively simple scenes in real time, but those scenes are way too simple for video games.)
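
To illustrate where the cost comes from, here is a minimal sketch of the core raytracing loop (purely illustrative Python; the scene, shading model, and all names are invented for the example, not taken from any real renderer): one primary ray per pixel, an intersection test against every object, and recursive secondary rays for reflections.

```python
import math

WIDTH, HEIGHT, MAX_BOUNCES = 160, 120, 3

# Toy scene: list of (center, radius, color, reflectivity).
SCENE = [
    ((0.0, -0.5, 3.0), 0.5, (1.0, 0.2, 0.2), 0.5),
    ((1.0,  0.0, 4.0), 1.0, (0.2, 0.2, 1.0), 0.3),
]
LIGHT_DIR = (-0.5, 1.0, -0.5)  # direction toward the light

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect(origin, direction, center, radius):
    """Distance along the ray to the sphere, or None if it misses."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def trace(origin, direction, depth=0):
    """Follow one ray through the scene; recurse for reflections."""
    hit = None
    for center, radius, color, refl in SCENE:
        t = intersect(origin, direction, center, radius)
        if t is not None and (hit is None or t < hit[0]):
            hit = (t, center, color, refl)
    if hit is None:
        return (0.1, 0.1, 0.15)  # background color
    t, center, color, refl = hit
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = normalize(tuple(p - c for p, c in zip(point, center)))
    # Simple diffuse shading.
    diffuse = max(0.0, dot(normal, normalize(LIGHT_DIR)))
    shaded = tuple(c * diffuse for c in color)
    # Recursive reflection: the part scanline rendering can only approximate.
    if depth < MAX_BOUNCES and refl > 0:
        refl_dir = tuple(d - 2.0 * dot(direction, normal) * n
                         for d, n in zip(direction, normal))
        reflected = trace(point, normalize(refl_dir), depth + 1)
        shaded = tuple(s * (1 - refl) + r * refl
                       for s, r in zip(shaded, reflected))
    return shaded

# One primary ray per pixel, each possibly spawning several bounce rays.
image = []
for y in range(HEIGHT):
    for x in range(WIDTH):
        direction = normalize((x / WIDTH - 0.5, 0.5 - y / HEIGHT, 1.0))
        image.append(trace((0.0, 0.0, 0.0), direction))
```

Even this toy version performs tens of thousands of intersection tests for a tiny image with two spheres; scale it to full HD resolution, complex geometry and multiple bounces per pixel, and the cost of doing it 30+ times per second becomes apparent.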

Nvidia is now trying to solve that problem with their new line of cards, which have dedicated hardware support for raytracing that can ostensibly calculate rays orders of magnitude faster than any CPU, or than any previous graphics card.

It's still not fast enough to raytrace an entire screenful (e.g. 1920x1080 pixels) of a complex scene with tons of geometry, reflections and refractions, but it can already be used to render parts of the scene (such as the reflections on the surfaces of objects) in real time. I'm certain that game and game engine developers will find ways to squeeze the maximum effect out of this hardware.

But, the thing is, it appears that the new RTX 20 series of cards is not significantly faster at rendering existing games than the previous generation, i.e. the GTX 10 series. It seems that Nvidia has concentrated all their efforts on the raytracing and tensor calculation features, neglecting the regular scanline rendering capabilities.

Pixel and texture fill rates are not the only things that affect rendering speed (there's a myriad of other things as well), but they can be used as a rough estimate, until actual benchmarks are performed.

The jump in these fill rates between the GTX 900 and the GTX 10 series is quite significant, which is to be expected. However, the increase from the GTX 10 to the RTX 20 series in these numbers is very moderate in comparison. These are the pixel fill rates (gigapixels/s) and texture fill rates (gigatexels/s) of equivalent cards in each of the three series:

GTX 970: 55/109, GTX 1070: 96/181, RTX 2070: 90/203
GTX 980: 72/144, GTX 1080: 103/257, RTX 2080: 97/279
GTX 980 Ti: 96/176, GTX 1080 Ti: 130/331, RTX 2080 Ti: 119/367

Interestingly, pixel fill rates are even lower on the RTX cards than on the previous generation, and texture fill rates are only moderately higher (proportionally nowhere near the jump from the GTX 900 series to the GTX 10 series).
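
To make that concrete, here is a quick throwaway sketch (plain arithmetic on the figures quoted above, nothing measured) that computes the generation-over-generation changes: the GTX 900 to GTX 10 jump works out to roughly +35% to +90%, while the GTX 10 to RTX 20 change is roughly -8% to +12%.

```python
# Throwaway arithmetic on the fill-rate figures quoted above
# (pixel fill rate in gigapixels/s, texture fill rate in gigatexels/s).
cards = {
    "x70 tier": [("GTX 970", 55, 109), ("GTX 1070", 96, 181), ("RTX 2070", 90, 203)],
    "x80 tier": [("GTX 980", 72, 144), ("GTX 1080", 103, 257), ("RTX 2080", 97, 279)],
    "x80 Ti tier": [("GTX 980 Ti", 96, 176), ("GTX 1080 Ti", 130, 331), ("RTX 2080 Ti", 119, 367)],
}

def pct(old, new):
    """Percentage change from old to new."""
    return 100.0 * (new - old) / old

for tier, gens in cards.items():
    for (n0, p0, t0), (n1, p1, t1) in zip(gens, gens[1:]):
        print(f"{tier}: {n0} -> {n1}: "
              f"pixel {pct(p0, p1):+.0f}%, texture {pct(t0, t1):+.0f}%")
```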

Some early reports suggest that, indeed, the RTX cards are only very mildly faster than the equivalent GTX 10 series cards with existing games.

The problem is that the RTX 20 cards launched at higher prices than the equivalent GTX 10 cards did at their launch, in some cases quite significantly higher. Many people have criticized this, because anyone buying these new RTX cards will get nowhere near a proportionate benefit in existing (and many upcoming) games at that price point. Meanwhile, GTX 10 cards have become cheaper and offer much better value for the money on that front.

It appears to me that Nvidia is testing the waters with the RTX cards. It's still completely up in the air whether hardware-accelerated raytracing will become an actual thing in video games, or whether it will be relegated to a temporary curiosity. Personally, I predict it will most probably become a thing. However, these cards are more or less an experiment: an experiment to see whether the technique catches on or not. A quite expensive experiment from the perspective of the customers, and one that's essentially made at the expense of traditional rendering speed for existing and many upcoming games. (Not in the sense that those games will become slower, but in the sense that progress in raw rendering speed has effectively been stalled for years to come.)

I predict that Nvidia will in the near future create some kind of GTX 2050, or perhaps even a GTX 2060 card, which uses all the same technologies as the RTX cards but without the raytracing support, and that these cards will be much cheaper (in the mid-to-low range), especially if the actual RTX cards end up not selling that well.

(Another possibility is that, if the RTX 20 cards turn out to be a commercial failure, they will relatively quickly move to either an RTX 30 or GTX 30 series, having learned the lessons from this experiment.)
