Nvidia announces Turing architecture

Comments

  • Vrika Member LegendaryPosts: 7,888
    Scot said:
    What does this have to do with Turing, apart from the fact the name will grab news coverage?
    If you mean the scientist, NVidia is in the habit of naming their GPUs after scientists: Maxwell, Pascal, Volta, etc.

    I think the GPUs have about as much to do with different scientists as Android versions have to do with different foods.
  • gervaise1 Member EpicPosts: 6,919
    Quizzical said:
    gervaise1 said:
    Quizzical said:
    The alternative isn't really 12 nm today versus 7 nm today.  The alternative is 7 nm in the not very distant future.  Once you have full lineups on 7 nm, any high end cards on 12/14/16 nm are going to look like a pretty bad value.

    There's always something better coming, of course.  But it's all a question of how soon.  I'd bet on seeing a lot of 7 nm GPUs available within a year.  But I don't know if that means four months from now or eleven months from now.  Obviously, Turing on 12 nm would be dead on arrival if we knew that all the 7 nm GPUs were launching next week, but seem a lot more sensible if we knew that they were all delayed until 2021.
    Absolutely. As you say, 7nm will have major advantages, so I'm pretty sure "consumer" Turing will be on 7nm at some point in the future.

    For business, the price/performance seems to have improved. After all, they have simply replaced one expensive card with another that offers better performance and, certainly in the case of the new Tesla, a lower power draw.

    As to when we will see consumer 7nm... within a year sounds right. Some reports are suggesting the end of Q1 2019, others mid-year, and others 2020. Early days yet, but the A12 has launched, so things are moving in the right direction.
    The Tesla T4 is the successor to the Tesla P4, which is a 50 W card.  It's a much, much slower card than the Tesla V100.  That's not to say that it's bad; it looks like an improvement on the Tesla P4.  But it's not some revolutionary thing.
    Yes, I should have said that it's NVidia's "replacement" for the P4 - under the Turing moniker.
  • gervaise1 Member EpicPosts: 6,919
    Scot said:
    What does this have to do with Turing, apart from the fact the name will grab news coverage?
    From a marketing perspective it makes sense. NVidia is seeking to portray a GPU architecture that takes AI to whatever marketing level they can come up with and, through DLSS, one that can "learn".

    And Turing is widely considered the father of (modern computer) "AI". If you haven't seen it, check out the 2014 film "The Imitation Game" - an excellent film in its own right (multiple awards and nominations, available on some streaming services).
  • Quizzical Member LegendaryPosts: 25,355
    Assuming that the WCCFtech article that Ridelynn linked is accurate, DLSS 2x is a form of anti-aliasing.  The real question is how it compares to FXAA in performance hit and image quality.

    Plain DLSS is really just rendering the game at a lower resolution and then upsampling.  Games have been able to do that for a long time, but it really means you're not running the game at 4K.  While that could increase frame rates, it has long been true of many games that you could get higher frame rates by running the game at lower settings.  If that's how you want to run a game, fine, but don't call it 4K.
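    To put rough numbers on the "render lower, then upsample" point, here is a quick back-of-the-envelope sketch in Python; the internal resolutions are illustrative guesses rather than anything Nvidia has confirmed:

```python
# Rough shading-work arithmetic for "render at a lower resolution, then upsample".
# The internal resolutions below are illustrative guesses, not confirmed DLSS numbers.

def pixels(width, height):
    return width * height

native_4k = pixels(3840, 2160)

for label, (w, h) in {
    "1440p internal": (2560, 1440),
    "1800p internal": (3200, 1800),
}.items():
    internal = pixels(w, h)
    print(f"{label}: {internal / native_4k:.0%} of the 4K shading work "
          f"(~{native_4k / internal:.2f}x fewer pixels shaded)")
```

    At a 1440p internal resolution you would be shading well under half the pixels of native 4K, which is where most of any advertised speedup would come from.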
  • Quizzical Member LegendaryPosts: 25,355
    gervaise1 said:
    Quizzical said:
    gervaise1 said:
    Quizzical said:
    The alternative isn't really 12 nm today versus 7 nm today.  The alternative is 7 nm in the not very distant future.  Once you have full lineups on 7 nm, any high end cards on 12/14/16 nm are going to look like a pretty bad value.

    There's always something better coming, of course.  But it's all a question of how soon.  I'd bet on seeing a lot of 7 nm GPUs available within a year.  But I don't know if that means four months from now or eleven months from now.  Obviously, Turing on 12 nm would be dead on arrival if we knew that all the 7 nm GPUs were launching next week, but seem a lot more sensible if we knew that they were all delayed until 2021.
    Absolutely. As you say, 7nm will have major advantages, so I'm pretty sure "consumer" Turing will be on 7nm at some point in the future.

    For business, the price/performance seems to have improved. After all, they have simply replaced one expensive card with another that offers better performance and, certainly in the case of the new Tesla, a lower power draw.

    As to when we will see consumer 7nm... within a year sounds right. Some reports are suggesting the end of Q1 2019, others mid-year, and others 2020. Early days yet, but the A12 has launched, so things are moving in the right direction.
    The Tesla T4 is the successor to the Tesla P4, which is a 50 W card.  It's a much, much slower card than the Tesla V100.  That's not to say that it's bad; it looks like an improvement on the Tesla P4.  But it's not some revolutionary thing.
    Yes, I should have said that it's NVidia's "replacement" for the P4 - under the Turing moniker.
    Compute cards never get replaced entirely, but have to remain available for a long time.  If you've got an HPC cluster with 512 of one GPU in it and one of them dies, you don't want to replace it with a newer, better model, as the asymmetric hardware would screw things up.  You want to replace it with one identical to the remaining 511.
  • Quizzical Member LegendaryPosts: 25,355
    Ridelynn said:

    2080 is 40% faster than 1080
    2080ti is 47% faster than 1080ti
    That is very impressive, and we haven’t even seen DLSS yet. Now on to the 2070 results.

    So what's unusual here: we don't see the salvage parts. In the past, you'd see multiple SKUs from the same dies: Titan and Ti, x80 and x70 -- if a chip comes out pristine, it's the higher bin, if not, it can still make the lower bin. Occasionally we've seen interim tiers come out based on binning as well (x70Ti, for instance).
    It's claimed that the RTX 2080 and RTX 2080 Ti are both salvage parts.  If yields are good enough, it can make sense to have only one bin and make it a slight salvage part.  AMD did that with the PlayStation 4, for example:  20 GCN compute units, with two disabled.

    Furthermore, just because there aren't other bins in GeForce cards doesn't mean that there aren't other bins at all.  For example, GP106 has 10 compute units, and there are two GeForce bins of it: one with all 10 active and one with 9.  Both are called GeForce GTX 1060, to confuse you.  But there's another bin with 8 compute units active:  Quadro P2000.

    Just because you have a particular bin doesn't mean that you have to sell it worldwide.  AMD has at some points had a salvage part bin that was available only in China.  If you have a single customer who wants a large number of parts, you could make a custom bin just for that customer and not even announce it publicly.  Especially for low volume parts, if that customer wants 2% of the dies you build and is willing to take a salvage part, that might allow you to sell everything of an entire bin in one shot.
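    To illustrate why a single, slightly cut-down bin can soak up nearly every die once yields are decent, here is a toy binomial yield model; the per-unit defect rate is a made-up number, purely for illustration:

```python
# Toy binning model: a die has N compute units, each independently defective with
# some probability. Compare how many dies qualify for a "full" bin versus a salvage
# bin that tolerates up to k disabled units. The defect rate is a made-up number;
# real yield models are far more involved.
from math import comb

def bin_yield(n_units, defect_rate, max_disabled):
    """Probability that a die has at most max_disabled defective units (binomial model)."""
    return sum(
        comb(n_units, k) * defect_rate**k * (1 - defect_rate)**(n_units - k)
        for k in range(max_disabled + 1)
    )

N = 20    # compute units per die (the PS4-style example from the post above)
p = 0.03  # assumed per-unit defect probability -- a pure guess

print(f"dies with every unit working:      {bin_yield(N, p, 0):.1%}")
print(f"dies usable with up to 2 disabled: {bin_yield(N, p, 2):.1%}")
```

    With those made-up numbers only about half of the dies would make a fully-enabled bin, while roughly 98% clear a bin that tolerates two disabled units - which is the economics behind shipping a single, slightly salvaged SKU.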
  • AmazingAvery Age of Conan Advocate Member UncommonPosts: 7,188
    Ridelynn said:

    2080 is 40% faster than 1080
    2080ti is 47% faster than 1080ti
    That is very impressive, and we haven’t even seen DLSS yet. Now on to the 2070 results.
    All still very much rumor.

    There's also the news about DLSS. It has two modes: DLSS, and DLSS2x. The first is supposedly much faster than TXAA, but that's because it actually renders at a lower resolution and then up-samples. The latter is much higher quality, but takes a larger (and anticipated) performance hit... If you're running a "4K" test and DLSS is actually rendering lower and then scaling up, then that would account for significant performance gains... again - we need real benchmarks that tell the full story before we can really infer anything. (I'm sure it's not that simple and Quiz or Avery will come in and explain The Way It's Meant to Be Played, but for a layman I think it's at least close).

    Also, the 2080 Ti release has just been pushed back at least a week.
    The numbers do come from Nvidia's testing guide, but it will not be too much longer before those all-important independent reviews arrive.

    Regarding DLSS:

    Quote from : http://www.pcgameshardware.de/Geforce-RTX-2080-Ti-Grafikkarte-267862/Specials/Turing-Technik-Infos-1264537/3/
    Nvidia indicates that DLSS uses an undisclosed resolution below the set one, which is then supersampled with the help of deep-learning training with a temporal component. You could say that this is a new form of upscaling that not only extrapolates existing information, but adds new information. In addition, there should be DLSS 2X, a mode that works with the actual resolution and applies an equivalent of 64× supersampling. Of course, this mode is slower than native rendering.
    To simplify - 
    As you mentioned there are two modes of DLSS:
    DLSS 1X: renders at a lower resolution and upscales it to look like native resolution + TAA
    DLSS 2X: uses native resolution and supersamples it at a very low performance hit

    Basically, from everything I've read about it, you get native-resolution quality at significantly more FPS - it is a pretty killer feature.

    Here is the quote from the whitepaper (which, all in all, is super interesting to read through):
    https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf

    The key to this result is the training process for DLSS, where it gets the opportunity to learn how to produce the desired output based on large numbers of super-high-quality examples. To train the network, we collect thousands of “ground truth” reference images rendered with the gold standard method for perfect image quality, 64x supersampling (64xSS). 64x supersampling means that instead of shading each pixel once, we shade at 64 different offsets within the pixel, and then combine the outputs, producing a resulting image with ideal detail and anti-aliasing quality. We also capture matching raw input images rendered normally. Next, we start training the DLSS network to match the 64xSS output frames, by going through each input, asking DLSS to produce an output, measuring the difference between its output and the 64xSS target, and adjusting the weights in the network based on the differences, through a process called back propagation. After many iterations, DLSS learns on its own to produce results that closely approximate the quality of 64xSS, while also learning to avoid the problems with blurring, disocclusion, and transparency that affect classical approaches like TAA.

    In addition to the DLSS capability described above, which is the standard DLSS mode, we provide a second mode, called DLSS 2X. In this case, DLSS input is rendered at the final target resolution and then combined by a larger DLSS network to produce an output image that approaches the level of the 64x super sample rendering - a result that would be impossible to achieve in real time by any traditional means. Figure 23 shows DLSS 2X mode in operation, providing image quality very close to the reference 64x super-sampled image.

    Simpler again - https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/

    Whereas TAA renders at the final target resolution and then combines frames, subtracting detail, DLSS allows faster rendering at a lower input resolution, and then infers a result that at target resolution is similar quality to the TAA result, but with half the shading work.

    Bottom line on DLSS - 

    NVIDIA's supercomputer renders the images at 4K + 64X SSAA, builds an AI training model around those perfect images, and then uses that information to make the game run at less than 4K native resolution while upscaling it through AI toward the quality of the 64X SSAA native-res image.

    We will have to wait for reviewers to get their hands on games that support DLSS to get the real story on the image quality differences between TAA/DLSS/DLSS 2X, and also to find out the performance hit for DLSS 2X. This is the checkerboarding-style option PC games need - a significant leap above checkerboarding, mind you, with way better results. We will also have DLSS 2X for the truly good stuff. I'm looking forward to independent reviews for the actual image quality comparison, but the performance gain is potentially significant.



    TAA combines multiple frames whereas DLSS doesn't, so by "default" it will use a lower input sample count.
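    To make the training process described in the whitepaper quote above a bit more concrete, here is a minimal sketch of that kind of supervised loop in plain Python/NumPy - synthetic toy data and a single linear layer, nowhere near Nvidia's actual DLSS network, purely to show the input/target/back-propagation structure:

```python
# Minimal sketch of the supervised training loop the whitepaper describes: the network
# sees normally rendered inputs, is asked to match 64x-supersampled targets, and its
# weights are nudged by back propagation to shrink the difference. Toy data and a
# single linear layer only -- this is an illustration, not Nvidia's DLSS network.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features = 2048, 9            # e.g. a 3x3 pixel neighbourhood as input
true_filter = rng.normal(size=n_features)  # stands in for the "ideal" reconstruction

X = rng.normal(size=(n_samples, n_features))              # "raw" rendered inputs
y = X @ true_filter + 0.01 * rng.normal(size=n_samples)   # "64xSS ground truth" targets

w = np.zeros(n_features)   # network weights to be learned
lr = 0.05                  # learning rate

for step in range(500):
    pred = X @ w                  # network output for each input
    err = pred - y                # difference from the 64xSS target
    grad = X.T @ err / n_samples  # gradient of the mean squared error
    w -= lr * grad                # back-propagation-style weight update

print("final MSE:", float(np.mean((X @ w - y) ** 2)))
```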

  • Quizzical Member LegendaryPosts: 25,355
    Well of course DLSS is going to offer worse image quality than rendering at the intended resolution and then using some cheap post-processing anti-aliasing like FXAA.  When you render at a lower resolution and then upsample, that gives you worse image quality than rendering at the native resolution.  There's no way around that.  The only question is how much worse.

    If you could render half of the samples and then use DLSS without any loss of image quality, then why stop there?  Why not render a quarter of the samples, then use DLSS to bring that up to half of the samples, then do it again to get the full image?  Only needing to render a quarter of the samples will run faster than half.

    And why stop at two iterations of DLSS, if there's no loss of image quality?  Why not make it three?  Or four?  Or twenty?  Why not render just a single pixel per frame and use DLSS magic to recreate the perfect final image?  The only thing that could possibly stop that from working great is if there's some loss of image quality when you have fewer samples and try to interpolate the rest.

    The constant comparisons to TAA should be quite a red flag.  TAA was a dumb gimmick that never caught on because it was fundamentally a bad idea.  Saying "look how much better DLSS is than TAA" isn't saying "look how good DLSS is".  It's saying "look how awful TAA is".

    If you don't have enough rendering power to run the game at the resolution you want, could DLSS be better than other methods of running the game at a lower resolution and upsampling?  Maybe it could.  Though I don't think that there are very many people excited about paying $1200 for a video card so that they can run at reduced graphical settings.

    DLSS 2x will probably be better than FXAA if you run the game at exactly the graphical settings that Nvidia used for training.  And if you want to do anything else, it could easily be worse than FXAA, as it won't have the ability to adapt to whatever settings you like.

    But DLSS isn't going to deliver 64x SSAA levels of image quality.  I can pretty much guarantee that it will be inferior in image quality to 4x SSAA, and it will probably also be worse than 2x SSAA.  Nvidia may try to claim that DLSS 2x looks pretty close to 64x SSAA.  But so does 4x SSAA.

    Most likely, DLSS 2x will be somewhere between FXAA and 2x SSAA in both performance hit and image quality.  The question is where in between them it is.  If the image quality is closer to SSAA and the performance hit closer to FXAA, then it might be useful.  If it's the other way around, it's junk.  That's what we don't know right now.

    In order to use DLSS, games will have to specify, here's the 3D rendered part of the image without the 2D stuff.  Otherwise, they'll run into the problem that AMD had when trying to do MLAA purely through drivers:  the driver doesn't know what to anti-alias and what not to.  It ended up anti-aliasing text, which made it needlessly blurry and looked terrible.

    But games have to specify that to use FXAA, too.  So Nvidia might be able to tell game developers, if you're going to implement FXAA properly (or any other sort of post-processing anti-aliasing, for that matter), then here's how you can add DLSS as an option almost for free.  That could plausibly get decently widespread support.  If I'm a game developer and I think DLSS looks kind of dumb, but implementing it and giving my players that option is free, I'm going to give my players that option.

    I can tell you right now where the huge flaw in DLSS will be:  it will mangle small details, or even drop them entirely.  If you have a large object that is pretty close to solid color and you'd have 1000 samples of that object at the native resolution or 500 with DLSS, it will probably look pretty good with DLSS.  If you have one sample of some detail at the native resolution and you sometimes have 0 with DLSS, that detail is gone and no amount of machine learning magic is going to recreate it.
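    A toy example of that last point, assuming nothing fancier than a halved-resolution render and a nearest-neighbour upscale: if the lower-resolution sample grid happens to miss a one-pixel detail, that detail never makes it into the input, and no upscaler - learned or otherwise - can bring it back.

```python
# Toy illustration of small details being dropped: a single bright native-resolution
# pixel, "rendered" at half resolution by sampling every other pixel, then upscaled.
# Whether the detail survives depends entirely on whether the low-res grid lands on it.
import numpy as np

native = np.zeros((8, 8))
native[3, 5] = 1.0                       # a one-pixel detail (say, a distant light)

half_res = native[::2, ::2]              # render at half resolution in each axis
upscaled = np.repeat(np.repeat(half_res, 2, axis=0), 2, axis=1)  # nearest-neighbour upscale

print("detail present at native resolution:           ", native.max() == 1.0)
print("detail present after half-res render + upscale:", upscaled.max() == 1.0)
```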
  • Scot Member LegendaryPosts: 22,986
    Vrika said:
    Scot said:
    What does this have to do with Turing, apart from the fact the name will grab news coverage?
    If you mean the scientist, NVidia is in the habit of naming their GPUs after scientists: Maxwell, Pascal, Volta, etc.

    I think the GPUs have about as much to do with different scientists as Android versions have to do with different foods.
    When they run out of well-known names I eagerly await Son of Pascal, Son of Turing and so on. :D
  • Quizzical Member LegendaryPosts: 25,355
    Here's a prediction for you:  Nvidia is already trying to convince reviewers to compare frame rates of Turing cards with DLSS to other GPUs running at the native resolution, as if they were comparable.  Any review sites that bite on that are disreputable and should be dismissed as useless--not just for this review, but discredited longer term.  Not sure if Nvidia will get any takers there, but I'd bet on them trying.
  • Ridelynn Member EpicPosts: 7,383
    Well, if you believe HardOCP, the new nVidia NDA may tie the hands of reviewers who get early samples.
  • Ozmodan Member EpicPosts: 9,726
    On a side note, I just got an MSI 1080 Ti new for $384.  Who needs the 20xx at these prices?  I have no interest in running at 4K for the foreseeable future, and I think that goes for most gamers.
    Ridelynn
  • MMOman101 Member UncommonPosts: 1,786
    Ozmodan said:
    On a side note, I just got an MSI 1080 Ti new for $384.  Who needs the 20xx at these prices?  I have no interest in running at 4K for the foreseeable future, and I think that goes for most gamers.
    where?

    “It's unwise to pay too much, but it's worse to pay too little. When you pay too much, you lose a little money - that's all. When you pay too little, you sometimes lose everything, because the thing you bought was incapable of doing the thing it was bought to do. The common law of business balance prohibits paying a little and getting a lot - it can't be done. If you deal with the lowest bidder, it is well to add something for the risk you run, and if you do that you will have enough to pay for something better.”

    --John Ruskin
  • Quizzical Member LegendaryPosts: 25,355
    Ridelynn said:
    Well if you believe HardOCP, the new nVidia NDA may tie reviewers hands who get early samples.
    If the NDA requires reviewers to review products in a particular way, saying you have to make this comparison and can't make that one, then that's not a non-disclosure agreement.  That's an advertisement for a product, not a review.  If that's what's going to happen, then we'll have to wait much longer for independent reviews.

    I don't think that's what's going to happen.  HardOCP claimed that the NDA had a duration of five years.  Five years isn't necessarily unreasonable if reviewers are given information about products that will launch two or three years from now.  But information about the product to be reviewed should generally have the NDA end when it's legitimate to post the review.
  • gervaise1 Member EpicPosts: 6,919
    Ozmodan said:
    On a side note, I just got an MSI 1080 Ti new for $384.  Who needs the 20xx at these prices?  I have no interest in running at 4K for the foreseeable future, and I think that goes for most gamers.
    For me this encapsulates the two key issues.

    What "useful" extra performance will they deliver. A card might deliver 200fps but so what if you already get 100fps. Better at 4k but so what if you only play at HD.

    At what cost does any useful extra performance come?

    I am sure NVidia understands this. Hence the push to get people to believe they "need" something - i.e. ray tracing - that so far they have managed perfectly well without.

    Now if they could get everyone onto large 4k monitors .... then more of us would be on consoles and AMD would be making more money!

  • Ridelynn Member EpicPosts: 7,383
    Key points of the NDA:

    Reviewers were supposedly only given 2 days to sign

    5-year expiration on "confidential information".

    Any/all "confidential information" must only be used solely for the benefit of nVidia.

    It does not clearly define what is considered confidential, though.

    You can interpret that as you will. The general consensus over at HardOCP (Kyle and his legal team) basically said if you were just a review site that wanted to get clicks on review day, it probably wouldn't impact you much. But if you discovered a flaw (like the 3.5G 970 issue), you may be restricted from talking about that or publishing tests/benchmarks that may show the effects of such until 5 years after the NDA was executed. Also troubling was what exactly was considered confidential - it had a very broad definition.

    Also interesting, this link is from the same source that leaked the nVidia slide deck showing RTX performance a few days ago:

    https://videocardz.com/76645/nvidias-new-non-disclosure-agreement-leaked
  • Quizzical Member LegendaryPosts: 25,355
    As I read it, they're basically saying that, if Nvidia tells you something, you're not allowed to tell anyone else unless it's something that they've announced publicly.  It looks to me like it's targeted at leaking information ahead of public announcements.

    I don't see any enforcement mechanism there, either.  If you decide that saying something isn't a violation of the NDA and say it publicly, and Nvidia claims it's an NDA violation, what can they do about it?  Not give you access to the next one?  They'd already do that, even without the NDA.

    I could see how you could read it as meaning, if Nvidia tells reviewers about a flaw in the hardware, that flaw is then proprietary and reviews aren't allowed to mention it.  And that would certainly be a problem.  But someone would talk about it anyway as part of their review, and if Nvidia tried to make a fuss about it saying it was an NDA violation, that would be a PR disaster for them.  If Nvidia really is trying to control reviews like that, I expect it to backfire on them one way or another.
  • Ridelynn Member EpicPosts: 7,383
    And the first reviews are in:

    2080 = 1080Ti + $100
    2080Ti is ~ +30%
  • Ridelynn Member EpicPosts: 7,383
    An NDA is a legally enforceable document - it doesn't necessarily need to specify damages; those are whatever the lawyers can convince a judge of in court.

    The legal costs alone are deterrent enough, let alone whatever award a judge may hand out as recompense for “damages” based on its violation.

    Also, it matters pretty little what I or any of us think about how the NDA is interpreted; all that matters is how nVidia interprets it. If they decide that you breached it and bring it to court, unless it's very egregiously not a violation, you are going to have to lawyer up and make your case.
  • Quizzical Member LegendaryPosts: 25,355
    Torval said:
    Strangely enough Nvidia's RT implementation relies on Microsoft's DX12 RT which doesn't actually hit Windows 10 until the Fall Update in October. https://www.pcworld.com/article/3305331/components-graphics/nvidias-geforce-rtx-graphics-no-ray-tracing.amp.html

    I wonder why they didn't just time the release for the day of or after the software update.
    I have two explanations, which are not mutually exclusive:

    1)  They know that, as implemented in games, partial ray tracing and DLSS aren't going to be very impressive.  Better to have people buy on the basis of marketing hype, filling in the details with what they hope for, than to be scared off by the reality.

    2)  Once 7 nm GPUs show up, 12 nm Turing is going to be, if not obsolete, then at least on the way out.  If Nvidia waits for games with partial ray tracing and DLSS to be available rather than launching now, that might mean that Turing is unavailable for a large fraction of what could have been its product lifecycle.  That's not a way to make a profit.

    To put this into perspective, the most recent GPUs to stay interesting for very long in spite of being a full process node behind the alternatives were the various 55 nm cards over the course of 2009.  That happened only because TSMC's 40 nm process node was so troubled initially that it took several months between the launch of the Radeon HD 4770, which was largely a test part, and the "real" 40 nm cards that people wanted.  That's the only example of a card being worth buying in spite of being a full node behind that I could find going all the way back to 2006.
  • Ozmodan Member EpicPosts: 9,726
    Ridelynn said:
    And the first reviews are in:

    2080 = 1080Ti + $100
    2080Ti is ~ +30%
    More like 2080 = 1080 Ti + $200 or more, for very little difference, according to the benchmarks.
  • Quizzical Member LegendaryPosts: 25,355
    It looks like a handful of games that were previously strongly pro-AMD outliers have the RTX 2080 perform far better than the GTX 1080 Ti.  My guess is that those are the handful of games that, on the latter, were choking for lack of register or local memory capacity or some such, which Turing finally brings up to parity with what AMD has been offering since 2012.

    If you ignore those games, it's basically tied with a GTX 1080 Ti, slightly winning in some games and slightly losing in others.
  • Cleffy Member RarePosts: 6,412
    So... just an interim card.
  • Quizzical Member LegendaryPosts: 25,355
    If DLSS or DLSS 2x were all that good, it would be easy for Nvidia to demonstrate as much.  Even if no games support it yet, they could make a demo that does.  At least, assuming that they're something real and not just something that Nvidia hopes they can implement someday.  They have some claimed performance benchmarks, so presumably it's implemented somewhere.

    All that Nvidia needs to do is release screenshots showing exactly the same image rendered in several ways:

    1)  no anti-aliasing
    2)  DLSS
    3)  rendered at the same base resolution as DLSS and upscaled in a traditional manner
    4)  FXAA
    5)  DLSS 2x
    6)  some very high degree of SSAA

    Critically, they must be released in a format that is either uncompressed or losslessly compressed.  Otherwise, compression artifacts can swamp the differences between different types of anti-aliasing.  Ideally, you release several sets of images, from different games or wildly different scenes or whatever.  And then you can see how it compares to the alternatives, at least in a cherry-picked situation probably chosen to be highly favorable to it.

    Nvidia hasn't done this.  Why not?  Reviewers can't necessarily replicate exactly the same game image in various ways, even if given a lot of time without the deadline of an embargo ending.  But if you've got control of the guts of a game engine, this is easy to do.  All you have to do is to capture all of the draw calls for a given frame along with their associated uniforms and textures and run them once with each combination of settings enabled.  Maybe it's not something you can do in ten minutes, but the cost would be a rounding error as compared to their marketing budget for the launch.  For that matter, it doesn't necessarily even need to be a full game, but just one fixed scene.

    If (2) looked markedly better than (3) and was basically indistinguishable from (1), Nvidia surely would have done this.  Similarly if (5) looks markedly better than (4) and basically indistinguishable from (6).  But if (2) doesn't look meaningfully better than (3) and (5) doesn't look meaningfully better than (4), then DLSS is worthless, so Nvidia wouldn't provide examples because they wouldn't want you to find out.  Nvidia is behaving as though it's the latter situation and not the former, even though they could have done otherwise if so inclined.

    Instead, they mainly want to compare DLSS to TAA in situations where TAA looks worse than no anti-aliasing at all.  That's about as much of a red flag as if the only performance comparisons they were willing to make of the RTX 2080 Ti was to say, look how much faster it is than a GeForce GT 740.

    Thus, it's probable that DLSS is just a bunch of marketing hype that will never amount to anything.
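    For what it's worth, once lossless screenshots of the same frame exist, scoring them against the 64x SSAA reference is cheap to do. Here is a rough sketch using PSNR - a crude metric, and the file names are hypothetical - just to underline that the comparison Nvidia isn't publishing would cost next to nothing to run:

```python
# Rough sketch: score lossless screenshots of one frame against a 64x SSAA reference
# using PSNR. PSNR misses a lot of what the eye cares about, but it only means anything
# at all if the files are uncompressed or losslessly compressed. File names are hypothetical.
import numpy as np
from PIL import Image  # pip install pillow

def psnr(reference, candidate):
    """Peak signal-to-noise ratio between two 8-bit images of the same size."""
    ref = np.asarray(reference, dtype=np.float64)
    cand = np.asarray(candidate, dtype=np.float64)
    mse = np.mean((ref - cand) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0**2 / mse)

reference = Image.open("frame_64x_ssaa.png")   # hypothetical lossless reference
for name in ["frame_no_aa.png", "frame_dlss.png", "frame_upscaled.png",
             "frame_fxaa.png", "frame_dlss2x.png"]:
    print(name, f"{psnr(reference, Image.open(name)):.1f} dB")
```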
  • Quizzical Member LegendaryPosts: 25,355
    Actually, to make that point in a shorter way:  the embargo is over, the reviews are out, the cards are for sale, and Nvidia still hasn't shown us just how good DLSS looks.  As I see it, there are three logical possibilities:

    1)  Nvidia can't show us because it doesn't work yet, so they don't know if it will be any good.
    2)  It works and Nvidia could show us, but it's useless and they don't want us to know.
    3)  It works, is really awesome, and Nvidia could show us, but they don't want us to know.

    I know which of those three I'd immediately rule out as something that Nvidia (or basically any other successful company ever) would never do.