Now that there are a few games that finally support DLSS, someone decided to try it, see how it looked, and make a video:
The first problem with DLSS is that it harshly restricts your settings options. While DLSS offers a considerable frame rate increase in some cases, you could get a larger increase by selectively turning down other options. Turning off ray tracing alone gives a much larger boost to frame rates than DLSS does. You can't combine DLSS with selectively turning down other options, however.
But let's ignore that. Let's look only at the particular settings that DLSS is trained on. Let's put DLSS in the best possible light, at least in real games as opposed to canned benchmarks.
Even there, DLSS is still terrible. It's markedly inferior to simply rendering the game at a lower resolution and then upscaling, as games have been doing for decades. To demonstrate this, the person who made the linked video tinkered with reduced resolutions until he found just how far you have to lower the resolution to match the performance gain of DLSS, then compared the image quality of upscaling from that resolution to that of DLSS.
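The methodology described above can be sketched as a simple sweep: lower the render scale until traditional rendering-plus-upscaling matches the frame rate measured with DLSS, then compare image quality at that point. Everything in this sketch is hypothetical — `measure_fps` is a toy stand-in model (frame rate roughly inversely proportional to pixel count), not a real benchmark, and the numbers are made up for illustration.

```python
NATIVE = (3840, 2160)   # 4K output target (assumed)
DLSS_FPS = 60.0         # frame rate measured with DLSS enabled (made up)

def measure_fps(width, height, native_fps=40.0):
    """Toy model: fps scales inversely with pixel count relative to native.
    A real test would benchmark the actual game at this render resolution."""
    native_pixels = NATIVE[0] * NATIVE[1]
    return native_fps * native_pixels / (width * height)

def find_matching_resolution(target_fps, step=0.05):
    """Lower the render scale until upscaled rendering hits target_fps."""
    scale = 1.0
    while scale > step:
        w = int(NATIVE[0] * scale)
        h = int(NATIVE[1] * scale)
        if measure_fps(w, h) >= target_fps:
            return scale, (w, h)
        scale -= step
    return scale, (int(NATIVE[0] * scale), int(NATIVE[1] * scale))

# Find the render resolution whose upscaled frame rate matches DLSS;
# that resolution is the fair comparison point for image quality.
scale, (w, h) = find_matching_resolution(DLSS_FPS)
```

The image-quality comparison itself is done by eye (or by zooming into screenshots), which is what the video does.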
Yes, there are video compression artifacts in the video, though the creator combats them by zooming in considerably. But the image quality difference between traditional upscaling and DLSS is so large as to overwhelm the video compression artifacts. DLSS doesn't just look worse; it looks a lot worse.
It's not that DLSS upscales a given low-resolution image worse than simple upscaling does; it probably does a little better there. But it's vastly more expensive to compute than simple upscaling. Rather than spending all of that computational power on DLSS, it could have been used to render the game at a higher resolution and then not have to upscale as far. That gives simple upscaling a lot more samples to work with. It can preserve a lot more detail because the detail is there in the first place. DLSS can't recover details that simply aren't present in the samples it's given to work with.
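The point about samples can be shown with a toy example (pure Python, not a real renderer — the striped test image and resolutions are invented for illustration). We "render" a fine stripe pattern at two resolutions and upscale each with nearest-neighbor, the simplest traditional upscaler. The higher-resolution render keeps the stripes; the lower-resolution render never sampled them, so no upscaler, learned or otherwise, can bring them back.

```python
def stripes(x, count=16):
    """Ground-truth 1D 'image': alternating stripes, value 0 or 1."""
    return int(x * count) % 2

def render(n):
    """Sample the ground truth at n pixels across [0, 1)."""
    return [stripes(i / n) for i in range(n)]

def upscale_nearest(samples, out_n):
    """Nearest-neighbor upscale: repeat the closest source sample."""
    n = len(samples)
    return [samples[int(j * n / out_n)] for j in range(out_n)]

# 32 samples resolve the 16 stripes; the upscaled result still alternates.
high = upscale_nearest(render(32), 64)
# 8 samples are too few: every sample lands on a 0 stripe, so the
# detail is gone before upscaling even starts.
low = upscale_nearest(render(8), 64)
```

Here `high` still contains both stripe values while `low` is uniformly flat: the detail was lost at sampling time, and nothing downstream can restore it.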
Nvidia reacted to this by saying that DLSS will improve with time. And it probably will, at least for a while. But the current chasm between DLSS and upscaling is so huge that it's extremely implausible that DLSS will even catch up, much less be good enough to offer a real advantage.
AMD reacted to this by canceling work on their rumored competitor to DLSS. If DLSS is markedly worse than what AMD can already do (and what Nvidia can also do without DLSS), then why bother?
Let's also not forget that this is the best-case scenario for DLSS. It's not just that the measurements were restricted to the particular combinations of graphics settings mentioned above. It's also that Turing GPUs dedicate a lot of die space to what amounts to a DLSS ASIC. Okay, those tensor cores are really a general machine learning ASIC, but machine learning isn't a consumer use. The point of DLSS was to create a consumer use for machine learning, and it failed spectacularly.
Now that they've demonstrated that it's useless to consumers, do you think that future Nvidia GPU architectures will waste all that die space? For pure compute parts like their GV100 chip (Tesla V100, Titan V), maybe they will. But don't expect to see it in GeForce cards ever again. DLSS development will be dead no later than the day their next generation of GPUs launches, and possibly much sooner.
A huge waste of die space is fatal to an architecture. It's not just a waste of money in production. Nvidia could have used that die space to offer more compute units and more of everything else. That would have offered an across-the-board performance increase in just about every GPU-limited situation. They could have done that instead of DLSS. Think they'd like a redo on that choice?
Any further questions on why Nvidia was so hesitant to show off uncompressed screenshots comparing DLSS to alternatives? Hope you didn't buy a Turing card for the sake of DLSS, though I suspect that the only gamers to do so are Nvidia fanboys who would have bought Nvidia even if DLSS never existed. If you bought it for the sake of ray tracing, or better yet, for the sake of performance in games that don't use the new features at all, that's much more reasonable. Real-time ray tracing probably has a future. DLSS doesn't.