
The truth about high resolutions, DLSS, and variable rate shading

Quizzical · Member, Legendary · Posts: 25,492
The fundamental fact about 3D graphics that you need to remember is that there isn't any canonical, right way to do it.  The goal is to make something that looks good and performs well.  There's a whole lot of fakery going on under the hood, but if it looks good and performs well, that's a sufficient justification for it.

The problem is that what looks good is a matter of opinion.  On the performance side, average frame rates are more objective, but how much you value consistency versus high averages is also partially a matter of opinion.  There are a lot of graphical effects that will certainly make a game run slower, but people can reasonably disagree on how much better they make the game look, and whether it justifies the performance hit.

My own personal opinion is that rasterized shadows look awful, so I usually turn them off if I can.  Depth of field brings a performance hit to make games look worse, not better, so that gets turned off, too.  Ambient occlusion also brings a performance hit, and makes games look different, but not really better or worse, so I also turn that off.  But having enough samples to preserve the intended detail and avoid artifacting from interpolation is hugely important to how good a game looks.  Other people may reasonably disagree with some or all of those opinions, and that's fine.  They can set their graphical settings differently from how I do.

As 4K monitors have become more common, there have been people who argue that the higher resolution doesn't matter, as you can't tell the difference.  Depending on how large the monitor is, how far away from it you sit, and how good your vision is, that argument has more merit in some situations than others.  Nvidia has introduced DLSS, arguing that rendering at a lower resolution and using their algorithm to upscale it to 4K is good enough, while offering the performance benefits of rendering at the lower resolution.  While it has gotten less attention, variable rate shading does something similar, allowing games to increase performance by declining to generate a new sample for every pixel of every frame, the way that 3D graphics traditionally has.
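
To put rough numbers on the viewing-distance point, here's a quick Python sketch.  The 27-inch screen width, the viewing distances, and the rule of thumb that 20/20 vision resolves on the order of 60 pixels per degree are all just assumptions for the example, not measurements of anyone's actual setup.

```python
import math

def pixels_per_degree(horizontal_pixels, screen_width_cm, viewing_distance_cm):
    """Approximate pixels per degree of visual angle at the center of the screen."""
    pixel_pitch_cm = screen_width_cm / horizontal_pixels
    # Angle subtended by a single pixel, in degrees.
    pixel_angle = math.degrees(2 * math.atan(pixel_pitch_cm / (2 * viewing_distance_cm)))
    return 1.0 / pixel_angle

SCREEN_WIDTH_CM = 59.8  # assumed: a 27" 16:9 panel is roughly 59.8 cm wide

for distance_cm in (60, 90):
    for name, width in (("2560x1440", 2560), ("3840x2160", 3840)):
        ppd = pixels_per_degree(width, SCREEN_WIDTH_CM, distance_cm)
        print(f"{name} at {distance_cm} cm: ~{ppd:.0f} pixels per degree")

# 20/20 vision resolves on the order of 60 pixels per degree, so at 60 cm the
# jump from 1440p to 4K is plausibly visible, while at 90 cm it gets marginal.
```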

All three of these are making effectively the same argument.  They aren't arguing that a 2560x1440 resolution looks just as good as 3840x2160, or that DLSS upscaling to 4K looks just as good as native 4K, or that variable rate shading looks just as good as getting a new sample for every pixel of every frame.  Rather, they're arguing that the difference in image quality is small enough that the large difference in performance justifies a small hit to image quality.  And that's a plausible argument, though again, whether you buy it is a matter of opinion.

Well, usually they're not arguing that it's just as good.  Yesterday, this site ran a remarkably awful review of a video card (MSI GeForce RTX 3090 Suprim) that argued that DLSS upscaling gives better image quality than native 4K.  That review has been mercifully removed from the site.

On another note, I'm going to use "DLSS" somewhat loosely in this post, to refer to the general process of rendering a game at a lower resolution and then using some fancy algorithm to upscale it to a higher resolution.  That can be done well or badly, and even the fans of Nvidia's DLSS 2.0 seem to mostly agree that DLSS 1.0 was garbage.  My usage of "DLSS" also includes AMD's upcoming FidelityFX Super Resolution and any other DLSS-like algorithms that may arise in the future.

It's not a coincidence that DLSS and variable rate shading arrived as the transition to 4K monitors was underway.  There's no technical reason why you couldn't have used something much like DLSS a decade ago to render a game at 1280x720 and upscale it to 1920x1080.  The reason developers didn't is that it would have looked terrible, and far inferior to rendering the game at native 1920x1080.  For that matter, you could use DLSS today to render a game at 1280x720 and upscale that all the way to 4K, but it would look terrible if you did.
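
Just to make the ratios concrete, here's the pixel-count arithmetic behind those scenarios; nothing here is specific to any particular upscaler.

```python
# Pixel counts for the resolutions discussed above.
resolutions = {
    "1280x720":  1280 * 720,     # 921,600 pixels
    "1920x1080": 1920 * 1080,    # 2,073,600 pixels
    "2560x1440": 2560 * 1440,    # 3,686,400 pixels
    "3840x2160": 3840 * 2160,    # 8,294,400 pixels
}

def upscale_factor(source, target):
    """How many output pixels the upscaler produces per rendered pixel."""
    return resolutions[target] / resolutions[source]

print(upscale_factor("1280x720", "1920x1080"))   # 2.25x
print(upscale_factor("2560x1440", "3840x2160"))  # 2.25x, the same ratio as above
print(upscale_factor("1280x720", "3840x2160"))   # 9.0x, far more detail to invent
```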

And no, DLSS wasn't enabled by Turing's tensor cores.  That's marketing garbage and an attempt at convincing gamers that they should pay extra for some stupid chunk of silicon that they don't have any real use for.  Nvidia put the tensor cores in because they wanted to sell the same GPUs for compute, such as in the Tesla T4, and some machine learning algorithms that they wanted to sell such cards for benefit tremendously from the use of tensor cores.  Convincing gamers that this was something that they should pay extra for was a marketing problem, and DLSS "requiring" tensor cores was the marketing solution that they came up with.

This is readily demonstrated by some back-of-the-envelope arithmetic.  Let's suppose that you're running a game at 3840x2160 and 144 Hz, and let's suppose that computing each color channel of each output pixel in DLSS involves taking a linear combination of 100 other samples.  In that case, you're looking at about 0.7 TFLOPS at half-precision to do the computations for DLSS, or less than 1% of what the GeForce RTX 3090 is rated at using only packed half math and not tensor operations.  Or for another comparison, less than 3% of what the older Radeon RX Vega 64 can do without having tensor cores at all.  And that's probably an overestimate of the brute computational work involved.
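
Here's that estimate spelled out.  The 100-samples-per-channel figure and the two-FLOPs-per-sample accounting are just the assumptions stated above, not a description of what Nvidia's implementation actually does.

```python
# Back-of-the-envelope FLOP estimate for the hypothetical DLSS pass described above.
width, height       = 3840, 2160
refresh_hz          = 144
channels_per_pixel  = 3     # one weighted sum per color channel
samples_per_channel = 100   # assumed size of the linear combination
flops_per_sample    = 2     # one multiply and one add per weighted sample

flops_per_second = (width * height * refresh_hz
                    * channels_per_pixel * samples_per_channel * flops_per_sample)
print(f"~{flops_per_second / 1e12:.2f} TFLOPS")  # ~0.72 TFLOPS

# Modern GPUs offer tens of TFLOPS of packed half-precision throughput without
# touching tensor cores at all, so this is a small single-digit percentage of
# what the ordinary shader cores can already do.
```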

Now, there is a lot of other stuff to do as part of DLSS.  At least as used in Nvidia's DLSS 2.0 algorithm, it requires computing motion vectors for each pixel that is rendered, and storing them somewhere.  It requires loading the right data into the right caches at the right time, which is likely to be rather complicated.  But those portions of the work do not and cannot use tensor cores at all.

In objective terms, the image quality loss from rendering at 1280x720 and upscaling to 1920x1080 is the same as from rendering at 2560x1440 and upscaling to 3840x2160 ("4K").  But they're not perceived the same way by the human eye.  The smaller the pixels get, the less important each individual pixel is, and the more acceptable it becomes for some pixels to be a little wrong.  A similar argument gets made about anti-aliasing: if individual pixels are small enough that you can't see them, do you really need it?  Individual pixels were very visible at the NES's resolution of 256x240, but at 4K, you have to look awfully closely to see the individual pixels along the edge of a curve.
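
For a rough sense of the scale involved, suppose everything is shown on the same 27-inch-class panel, about 60 cm wide; the screen size is just an assumption for the comparison.

```python
# Physical pixel pitch when each resolution fills a panel roughly 598 mm wide.
SCREEN_WIDTH_MM = 598  # assumed: ~27" 16:9 panel

for name, horizontal_pixels in (("256x240 (NES-era)", 256),
                                ("1920x1080", 1920),
                                ("3840x2160", 3840)):
    pitch_mm = SCREEN_WIDTH_MM / horizontal_pixels
    print(f"{name}: ~{pitch_mm:.2f} mm per pixel")

# ~2.34 mm per pixel versus ~0.16 mm per pixel: when each pixel is that small,
# one slightly wrong pixel (or a jagged edge) is much harder to notice.
```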

Comments

  • Quizzical · Member, Legendary · Posts: 25,492
    So there isn't any real doubt that a 2560x1440 monitor doesn't deliver image quality as good as native 3840x2160.  Nor is rendering at 2560x1440 and using DLSS to upscale to 3840x2160 as good as native 3840x2160.  Nor is nominally rendering at 3840x2160 while using variable rate shading to do only about as much work as for 2560x1440 as good as doing the full work at 3840x2160.

    That doesn't mean that those other techniques are all garbage, however.  If you had infinite GPU performance, then of course you wouldn't use them.  You'd render at the native resolution of your monitors with some high degree of SSAA.  You'd definitely go for full ray-tracing of everything.  In the trade-offs between performance and image quality, you'd make the same sort of extreme image quality choices that Pixar does when doing the final renders of their movies.

    But you don't have infinite GPU performance.  If you tried to use the same extreme image quality settings as Pixar, you'd get about the same performance that they do.  It commonly takes them more than an hour to render each frame.  For the final pass of a movie on a $100 million budget that only needs to be rendered once, can be done in parallel by many servers, and will be viewed by millions of people, you can take the time and buy the hardware to do that.  For playing a game in real-time, you can't.

    The real question is what to do with the budget that you have available, or later, the hardware that you have already purchased.  The choices may be something like:

    1)  Render the game at 2560x1440 and display it on a 2560x1440 monitor.
    2)  Render the game at 2560x1440 and use DLSS to upscale it to 3840x2160.
    3)  Use variable rate shading to render the game at 3840x2160 while only doing about as much work as natively rendering the game at 2560x1440.
    4)  Turn off some other graphical settings to make rendering the game natively at 3840x2160 offer acceptable performance.
    5)  Go for maximum image quality at 3840x2160 and just accept that your frame rate is rather low.

    The precise options will vary by game, of course, and there can be different combinations of things available, too.  Different games will offer different frame rates on the same hardware even with the "same" choice made above, and you can make independent choices for different games.  In some games, rendering at the maximum resolution and the other settings you like will give plenty good enough performance so that there's no reason to use DLSS or variable rate shading at all, even if they're offered.  But in more demanding games, you'll have choices to make.

    So what should you use?  Whichever option looks best to you.  You're the one who has to look at the screen when you're playing whatever game it is that you're playing, so your opinion is the one that matters.

    Just don't try to max all settings and then complain that the game runs poorly if you go out of your way to make it run poorly.  Please don't do that.  Don't be an idiot.  Use graphical settings responsibly.
  • Ridelynn · Member, Epic · Posts: 7,383
    edited December 2020
    Quite a read. Good info though.

    tl;dr

    DLSS is marketing fluff (and I totally agree)

    Just don't try to max all settings and then complain that the game runs poorly if you go out of your way to make it run poorly.  Please don't do that.  Don't be an idiot.  Use graphical settings responsibly.


  • Quizzical · Member, Legendary · Posts: 25,492
    My claim isn't that DLSS or something like it is intrinsically worthless.  It could well have some value to some people in some situations when running the game natively at a higher resolution isn't viable.

    What I called marketing garbage is the narrower claim that DLSS proves the value of tensor cores.  That's wrong even if you think DLSS is the greatest graphical feature ever.  You can do DLSS or something much like it just fine without tensor cores.