
Is the 3DMark RTX benchmark actually useful?

Unless you have been living under a rock, you probably know that Nvidia's "next-gen" line of graphics cards, the RTX 20xx series, has hardware support for raytracing. This gets us one step closer to the holy grail of computer graphics that has seemed almost insurmountable since the 1970s: real-time raytracing used in actual video games for more accurate reflections, refractions and shadows (as well as, possibly, some other graphical effects, like better real-time ambient occlusion and global illumination).

The new cards have had a slightly rocky start: even after being available for a couple of months, only one game so far supports raytracing, and only about half a dozen upcoming games have announced possible support. Moreover, even in that one game (i.e. Battlefield 5), turning raytracing on made framerates plummet, from something like 80-100 frames per second down to less than 40, and even 30 at worst, with all other settings maxed out on an RTX 2080 Ti.

On a positive note, however, a couple of quick optimizations to both the Nvidia display drivers and the game itself have improved the situation considerably, with framerates now sitting much more comfortably above 60 frames per second even with full-quality raytracing, and without any obvious loss of visual quality. So after a few initial stumbling blocks, it turns out there may be some hope for real-time hardware raytracing after all.

And this is, in fact, a segue to the actual point of this blog post.

3DMark has for many years been the de facto standard, and by far the most popular, synthetic benchmark for gaming PCs. Not surprisingly, its developers have rushed to add a raytracing test to the collection of benchmarks offered by the software (which, according to announcements, will be available some time next month as of this writing).

And that's the key factor: Rushed.

As we saw with the case of Battlefield 5, hardware raytracing technology is still in its infancy, with new software optimizations constantly being made to the graphics drivers and to the games themselves. The techniques for using hardware raytracing have yet to fully mature, and both Nvidia and game developers are still researching the best ways to take advantage of the raytracing hardware for optimal performance and image quality. This is software technology that's changing and evolving by the month.

What I'm wondering is, given how rushed the RTX benchmark in 3DMark is, how optimal it actually is, and how reflective (hah!) it will be of actual future video games. It's almost certain that games will develop new techniques to use the raytracing cores to their maximum; techniques that the 3DMark benchmark most probably isn't using.

Therefore I think that if you use 3DMark to benchmark RTX cards, the result may not be very indicative of actual performance in future video games. It may even be that the way raytracing is used in the benchmark is rather inefficient and doesn't scale (e.g. to future cards with more raytracing cores) in the same way that raytracing in future games might. For example, if a future RTX card has double the raytracing cores, the benchmark might speed up by something like 75%, while actual games might speed up by 90%, or something along those lines. The benchmark, thus, wouldn't be very indicative of the actual benefit of the new card.
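To make that scaling concern a bit more concrete, here's a toy Amdahl's-law-style sketch in Python. The fractions in it are made-up numbers chosen only to reproduce the hypothetical 75% vs. 90% figures above; they are not measurements of 3DMark or of any game.

    # Toy Amdahl's-law-style model: only the raytraced part of a frame
    # is assumed to scale with the number of RT cores; the rest is fixed.
    # The fractions below are purely illustrative, not measured values.

    def overall_speedup(rt_fraction, rt_core_multiplier):
        fixed = 1.0 - rt_fraction                     # work unaffected by extra RT cores
        raytraced = rt_fraction / rt_core_multiplier  # RT work, spread across more cores
        return 1.0 / (fixed + raytraced)

    # Hypothetical benchmark: 85% of frame time is in RT work that scales.
    print(overall_speedup(0.85, 2.0))  # ~1.74x, i.e. roughly the "75% faster" case

    # Hypothetical well-optimized future game: 95% of frame time scales.
    print(overall_speedup(0.95, 2.0))  # ~1.90x, i.e. roughly the "90% faster" case

The point being that two workloads can both be "raytracing benchmarks" and still scale very differently on the same new hardware, depending on how much of the frame actually leans on the raytracing cores.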

And I don't think they can just go and change the benchmark to incorporate future optimizations, because that would invalidate existing scores.

(Although, I suppose, the correct answer to this is that if significant improvements are discovered in the near future, they will simply create a new separate raytracing benchmark using those, and leave this first one as it is, as a relic.)
