Apparently DLSS is as bad as we thought it would be

Quizzical Member Legendary Posts: 25,355
Now that there are a few games that finally support DLSS, someone decided to try it, see how it looked, and make a video:



The first problem with DLSS is that it harshly restricts your settings options.  While DLSS offers a considerable frame rate increase in some cases, you could get a larger increase in frame rates by selectively turning down other options.  Turning off ray tracing alone gives a much larger boost to frame rates than DLSS does.  You can't freely combine DLSS with turning down other options, however, because it's only enabled for particular settings combinations.

But let's ignore that.  Let's look only at the particular settings that DLSS is trained on.  Let's put DLSS in the best possible light, at least in real games as opposed to canned benchmarks.

Even there, DLSS is still terrible.  It's markedly inferior to simply rendering the game at a lower resolution and then upscaling, as games have been doing for decades.  To demonstrate this, the person who made the linked video tinkered with reduced resolutions until he found just how far you have to lower the resolution to match the performance gain of DLSS, and then compared the image quality of upscaling from that resolution to the image quality of DLSS.
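
To be concrete about what the "traditional upscaling" baseline means here, this is a minimal sketch of it, assuming Pillow is installed; the file names and the 1800p render resolution are placeholders for illustration, not the exact settings from the video:

# A minimal sketch of the plain-upscaling baseline, assuming Pillow is installed.
# "frame_1800p.png" is a hypothetical frame rendered below the target resolution.
from PIL import Image

TARGET = (3840, 2160)  # display resolution (4K)

low_res = Image.open("frame_1800p.png")           # frame rendered at a reduced resolution
upscaled = low_res.resize(TARGET, Image.BICUBIC)  # plain filter-based upscale, no machine learning
upscaled.save("frame_upscaled_2160p.png")         # eyeball this against the matching DLSS frame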

Yes, there are video compression artifacts in the video, though the creator combats them by zooming in considerably.  But the image quality difference between traditional upscaling and DLSS is so large as to overwhelm the video compression artifacts.  DLSS doesn't just look worse; it looks a lot worse.

It's not that DLSS does worse than simple upscaling from a given image.  It probably does a little better there.  But it's vastly more expensive to compute than simple upscaling.  Rather than spending all of that computational power on DLSS, the same budget could be used to render the game at a higher resolution and then not have to upscale as far.  That gives simple upscaling a lot more samples to work with.  It can preserve a lot more details because it has them in the first place.  DLSS can't recover details that simply aren't present in the samples that it is given to work with.
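
Some quick back-of-the-envelope numbers make the sample-count argument concrete.  The internal resolutions below are illustrative assumptions (DLSS at a 4K target renders internally at something like 1440p as I understand it; the 1800p figure for the plain-upscaling path is just a plausible stand-in), not measurements from the video:

# Back-of-the-envelope sample counts.  The internal resolutions are assumptions
# for illustration, not measured values from the video.
def pixels(w, h):
    return w * h

target   = pixels(3840, 2160)   # 4K output
dlss_in  = pixels(2560, 1440)   # assumed DLSS internal render resolution at a 4K target
plain_in = pixels(3200, 1800)   # assumed render resolution for plain upscaling at equal frame rate

print(f"DLSS renders {dlss_in / target:.0%} of the output pixels")
print(f"Plain upscaling renders {plain_in / target:.0%} of the output pixels")
print(f"That's {plain_in / dlss_in:.2f}x as many real samples for the plain upscaler to work with")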

Nvidia reacted to this by saying that DLSS will improve with time.  And it probably will, at least for a while.  But the current chasm between DLSS and plain upscaling is so huge that it's extremely implausible that DLSS will ever catch up, much less be good enough to offer a real advantage.

AMD reacted to this by canceling work on their rumored competitor to DLSS.  If DLSS is markedly worse than what AMD can already do (and what Nvidia can also do without DLSS), then why bother?

Let's also not forget that this is the best case scenario for DLSS.  It's not just that the measurements respect the particular combinations of graphics settings mentioned above.  It's also that Turing GPUs dedicate a lot of die space to what amounts to a DLSS ASIC.  Okay, so those tensor cores are really a general machine learning ASIC, but machine learning by itself isn't a consumer use.  The point of DLSS was to create a consumer use for machine learning, and it failed spectacularly.

Now that they've demonstrated that it's useless to consumers, do you think that future Nvidia GPU architectures will waste all that die space?  For pure compute parts like their GV100 chip (Tesla V100, Titan V), maybe they will.  But don't expect to see it in GeForce cards ever again.  DLSS development will be dead no later than the day their next generation of GPUs launches, and possibly much sooner.

A huge waste of die space is fatal to the architecture.  It's not just a waste of money in production.  Nvidia could have used that die space to offer more compute units and more of everything else.  That would have offered an across-the-board performance increase in just about every GPU-limited situation ever.  They could have done that instead of DLSS.  Think they'd like a redo on that choice?

Any further questions on why Nvidia was so hesitant to show off uncompressed screenshots comparing DLSS to alternatives?  Hope you didn't buy a Turing card for the sake of DLSS, though I suspect that the only gamers to do so are Nvidia fanboys who would have bought Nvidia even if DLSS never existed.  If you bought it for the sake of ray tracing, or better yet, for the sake of performance in games that don't use the new features at all, that's much more reasonable.  Real-time ray tracing probably has a future.  DLSS doesn't.

Comments

  • Ridelynn Member Epic Posts: 7,383
    I know on several forums folks were saying they were so excited for DLSS and that it was going to be “the thing” that made RT usable.

    No use saying I told you so; people who bought into it got other benefits at least, so it's not a total loss.

    And it's not like this is nV's only blunder, or that AMD is immune to them (the promise of HBM, for instance).
  • grndzro Member Uncommon Posts: 1,162
    Ridelynn said:

    And it's not like this is nV's only blunder, or that AMD is immune to them (the promise of HBM, for instance).
    HBM isn't really the problem, though. The problem was cost. They are working on lower-cost HBM, and on HBM3, which will be viable for more products. HBM as a product is very high performance.
  • Quizzical Member Legendary Posts: 25,355
    t0nyd said:
    But would it be more expensive to have multiple SKUs? If they keep an advantage over AMD while keeping DLSS, why change? I'm reminded of mining. Make one card that has multiple uses and hope one of those uses takes off to increase scarcity. If machine learning takes off, will it be the new mining?
    Having multiple SKUs doesn't gain you anything unless the point is salvage parts (there's a rough cost sketch at the end of this post).  The cost of production goes by the size of the die, and disabling part of the die after you've produced it doesn't reduce that cost.

    Multiple dies can gain you a lot, and may have been what you meant, as Nvidia could have put their tensor cores into just the top die and not the rest of them.  That makes a lot of sense to do, and is what both Nvidia and AMD have commonly done in the past with ECC memory and relatively fast double precision compute, among other things.

    The problem is that Nvidia made multiple dies and then put the tensor cores into all of them.  If they had only put tensor cores into GV100 and that's it, it would make sense.  But they also put them into TU102, TU104, and TU106.  That doesn't make sense to do at all.

    The thing about mining is that Nvidia and AMD didn't have to add anything at all to support mining in particular, beyond some relatively basic instructions that are needed for a lot of compute purposes, not just mining.  That's why the miners were able to buy a ton of Radeon and GeForce cards that had never been intended for mining.
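
    To make the salvage-part point concrete, here's a rough cost sketch.  The wafer price, die count, and yield are purely illustrative assumptions, not actual foundry or Nvidia figures:

    # Rough per-die cost model with purely illustrative numbers (not real figures).
    # The point: a salvage die with units disabled costs exactly as much to make
    # as the fully enabled die cut from the same wafer.
    wafer_cost = 6000.0        # assumed cost of one processed wafer, in dollars
    dies_per_wafer = 80        # assumed candidate dies per wafer for a large GPU die
    yield_rate = 0.70          # assumed fraction of dies good enough to sell in some form

    cost_per_good_die = wafer_cost / (dies_per_wafer * yield_rate)
    print(f"Production cost per sellable die: ~${cost_per_good_die:.0f}")
    # Disabling tensor cores (or anything else) after the fact changes nothing above;
    # only a physically smaller die would lower the cost.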
  • Ridelynn Member Epic Posts: 7,383
    grndzro said:
    Ridelynn said:

    And it's not like this is nV's only blunder, or that AMD is immune to them (the promise of HBM, for instance).
    HBM isn't really the problem, though. The problem was cost. They are working on lower-cost HBM, and on HBM3, which will be viable for more products. HBM as a product is very high performance.
    That is exactly the problem. One of the things HBM was supposed to do was lower cost. It may do that at some point but it hasn’t yet.
  • Quizzical Member Legendary Posts: 25,355
    Ridelynn said:
    grndzro said:
    Ridelynn said:

    And it's not like this is nV's only blunder, or that AMD is immune to them (the promise of HBM, for instance).
    HBM isn't really the problem, though. The problem was cost. They are working on lower-cost HBM, and on HBM3, which will be viable for more products. HBM as a product is very high performance.
    That is exactly the problem. One of the things HBM was supposed to do was lower cost. It may do that at some point but it hasn’t yet.
    HBM was never about lowering cost.  The interposer and the base die for each stack are extra costs that GDDR* simply doesn't have, and they're always going to cost more than the extra PCB traces of GDDR*.  Rather, HBM was always about higher throughput at lower power consumption, and it comes at the expense of higher cost.  The hope is that they can reduce the price premium of HBM over time with wider adoption, which would allow it to be viable for $300 and eventually $200 products instead of being limited to the high end only.  A rough throughput comparison is sketched below.
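
    For a sense of scale on the throughput side, here's a rough per-device comparison.  The bus widths and data rates are typical 2019-era figures quoted from memory, so treat them as approximations rather than exact SKU specs:

    # Rough memory throughput comparison per device (approximate, typical figures).
    def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
        return bus_width_bits * data_rate_gbps / 8.0   # GB/s

    hbm2_stack = bandwidth_gb_s(1024, 2.0)   # one HBM2 stack: very wide bus, modest per-pin speed
    gddr6_chip = bandwidth_gb_s(32, 14.0)    # one GDDR6 chip: narrow bus, very fast per pin

    print(f"One HBM2 stack : ~{hbm2_stack:.0f} GB/s")
    print(f"One GDDR6 chip : ~{gddr6_chip:.0f} GB/s")
    print(f"GDDR6 chips needed to match one stack: ~{hbm2_stack / gddr6_chip:.1f}")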
  • Wizardry Member Legendary Posts: 19,332
    He touched on the most important factor: REAL gameplay, over settings and setups that can give the false illusion of greatness.

    His screen-by-screen comparison is not realistic, because if you separated the two, I doubt anyone would see any difference at all.

    Actually, his ENTIRE testing method is flawed. Sure, he mentions that bad anti-aliasing is not a good comparison, but there is WAY more to it than simple AA bashing.

    The actual game is what matters. He talks about FFXIV and Battlefield, except what he doesn't talk about is how many objects are on screen, the LOD at which they start to fade, or the quality of background images/animations; there is just way more to actual gaming than standing in one spot and talking about Battlefield's up-close sharpness.

    Overall, IMO he is just a stat junkie and not a true gamer who understands everything that matters on screen beyond simple numbers.

    Never forget Three Mile Island and never trust a government official or company spokesman.

  • Wizardry Member Legendary Posts: 19,332
    edited February 2019
    Even at the end of his video he is still showing close-ups, with no actual gameplay/animations.
    As for restrictions, well, I don't know, quite possibly, but then again, would people actually notice the difference without side-by-side comparisons?

    This video touches on lighting and how areas that shouldn't have light DO NOT have light, but someone could sit there and say, well, the non-RTX area looks better because it is actually lit. This is why it matters from what perspective someone is talking about visual quality: do you want more realism or a more brightly lit area?

    If I didn't make sense: it is like if you take an area that SHOULD be dark and suddenly light it, it will obviously look better, but it will not be realistic.

    Never forget Three Mile Island and never trust a government official or company spokesman.

  • Quizzical Member Legendary Posts: 25,355
    Wizardry said:
    He touched on the most important factor: REAL gameplay, over settings and setups that can give the false illusion of greatness.

    His screen-by-screen comparison is not realistic, because if you separated the two, I doubt anyone would see any difference at all.

    Actually, his ENTIRE testing method is flawed. Sure, he mentions that bad anti-aliasing is not a good comparison, but there is WAY more to it than simple AA bashing.

    The actual game is what matters. He talks about FFXIV and Battlefield, except what he doesn't talk about is how many objects are on screen, the LOD at which they start to fade, or the quality of background images/animations; there is just way more to actual gaming than standing in one spot and talking about Battlefield's up-close sharpness.

    Overall, IMO he is just a stat junkie and not a true gamer who understands everything that matters on screen beyond simple numbers.
    None of that has any bearing whatsoever on the issue at hand:  how DLSS compares to traditional upscaling.  That's what the video was about.  If you're not interested in that comparison, then this isn't the thread for you.

    It isn't Nvidia's job to implement interesting game mechanics, just like it isn't your ISP's job to do so.  The job of companies that play a supporting role is merely to make sure that whatever game developers decide to implement works as well as possible.  If you don't care about such things, then have you considered staying out of the hardware forum?