GPU vendors decide that ray-tracing is their next excuse to make you need a faster video card

Comments

  • jusomdude Member RarePosts: 2,706
    @Vrika And if you bothered to put some effort into your googling, you'd see that the noise/warping is solved in their other videos and looks much better. But I guess that would kill your argument.

  • Vrika Member LegendaryPosts: 7,888
    jusomdude said:
    @Vrika And if you bothered to put some effort into your googling, you'd see that the noise/warping is solved in their other videos and looks much better. But I guess that would kill your argument.

    Please post a link to that video since it's so easy for you to find.
     
  • Mendel Member LegendaryPosts: 5,609
    Quizzical said:
    Sovrath said:
    Quizzical said:
    They've been pushing higher resolutions, higher frame rates, and VR, but that only goes so far before it gets kind of ridiculous.


    Great post up to there.

    That's very much opinion. Where should they stop? I mean, I'm sure there were people who thought things were fine years ago. Then suddenly there are new breakthroughs and we get absolutely beautiful images/vistas/characters.

    While I'm very much capable of enjoying a game that has dated graphics or is only "so good" I'm all for them pushing the bounds of technology to bring me breathtaking worlds.

    I say "bring it".
    I'm not saying that resolutions and frame rates are high enough today that nothing more will ever matter.  I'm saying that there are sharply diminishing returns at some point, and eventually, it doesn't really matter anymore.  For example, most people would agree that 60 frames per second is better than 30.  I don't think it's that hard to make a case that 120 is better than 60.  Maybe you could argue that 240 is a little better than 120.  Is 480 frames per second really better than 240?  Even if you say it is, the real-world difference between 240 and 480 frames per second is massively smaller than the difference between 30 and 60 (the frame-time sketch after this post spells this out).

    There used to be a thriving, competitive market for sound cards.  Then they got good enough, and then integrated sound chips got plenty good enough for most people.  Now hardly anyone buys a discrete sound card anymore.  The GPU vendors really, really want for that to not happen to GPUs.
    There are also human factors to consider when dealing with resolutions and frame rates.  The eye and brain can only process a certain amount of data.  Hardware going beyond the limits of the human body is never going to be productive.

    I like the sound card example.  GPUs might be close to that point already.  That probably scares the GPU vendors.  Impending obsolescence usually scares people, especially when the thing becoming obsolete is how you make your living.



    Logic, my dear, merely enables one to be wrong with great authority.
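
A quick frame-time sketch makes the diminishing-returns point above concrete: each doubling of the frame rate saves half as many milliseconds per frame as the doubling before it. The Python below is only illustrative arithmetic; frame_time_ms is a made-up helper, not anything from a real engine.

```python
# Frame-time arithmetic behind the diminishing-returns argument: each doubling
# of the frame rate saves half as many milliseconds per frame as the previous one.

def frame_time_ms(fps):
    """Milliseconds each frame is on screen at a given frame rate."""
    return 1000.0 / fps

for low, high in [(30, 60), (60, 120), (120, 240), (240, 480)]:
    saved = frame_time_ms(low) - frame_time_ms(high)
    print(f"{low} -> {high} fps: each frame arrives {saved:.2f} ms sooner")

# 30 -> 60 fps shaves ~16.7 ms off every frame; 240 -> 480 fps shaves ~2.1 ms.
```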

  • Quizzical Member LegendaryPosts: 25,355
    That kind of proves the point on ray-tracing being so expensive.  It took 4 high-end GPUs to get 24 frames per second.  Ray-tracing doesn't have the obvious impediments to multi-GPU scaling that rasterization does, so it likely scales well to multiple GPUs, in which case, we're talking about 6 frames per second on one GPU.

    And to get those 6 frames per second, they had to render a small, enclosed room so that there weren't very many things to draw.  It likely had everything rigged to fit in the GPU's L2 cache.  There's a lot of rigging to reduce processing load that you can do for a fixed cinematic clip that you just can't do for a live game that lets the player go anywhere.  Make it a larger, outdoor scene with 50 times as many models and so many of your memory accesses go off the chip that maybe you only get 1 frame per second.

    The monitor resolution and level of anti-aliasing are also left unstated, which likely means a low resolution and no anti-aliasing.  Double the number of rays to cast (either by doubling the number of pixels or the anti-aliasing rate) and you double the workload.  That can very quickly get you measuring the number of seconds per frame rather than the other way around.
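
As a rough sketch of how that workload scales, here is the ray-count arithmetic in Python. The RAYS_PER_SECOND budget is an assumed, illustrative number (chosen so that a small 640x480 scene with no anti-aliasing lands near the ~6 frames per second single-GPU estimate above), not a measurement of any real GPU, and estimated_fps is a made-up helper.

```python
# Back-of-the-envelope sketch of how the ray-tracing workload scales with
# resolution and anti-aliasing rate. Doubling either one doubles the rays cast.

RAYS_PER_SECOND = 2_000_000  # hypothetical per-GPU budget for a scene of this complexity

def estimated_fps(width, height, samples_per_pixel):
    """Frames per second if every pixel casts samples_per_pixel rays."""
    rays_per_frame = width * height * samples_per_pixel
    return RAYS_PER_SECOND / rays_per_frame

for w, h, spp in [(640, 480, 1), (640, 480, 4), (1920, 1080, 1), (1920, 1080, 4)]:
    fps = estimated_fps(w, h, spp)
    if fps >= 1:
        print(f"{w}x{h} at {spp}x AA: ~{fps:.1f} frames per second")
    else:
        # past this point you really are measuring seconds per frame
        print(f"{w}x{h} at {spp}x AA: ~{1 / fps:.1f} seconds per frame")
```
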
  • Quizzical Member LegendaryPosts: 25,355
    Quizzical said:
    That kind of proves the point on ray-tracing being so expensive.  It took 4 high-end GPUs to get 24 frames per second.  Ray-tracing doesn't have the obvious impediments to multi-GPU scaling that rasterization does, so it likely scales well to multiple GPUs, in which case, we're talking about 6 frames per second on one GPU.

    And to get those 6 frames per second, they had to render a small, enclosed room so that there weren't very many things to draw.  It likely had everything rigged to fit in the GPU's L2 cache.  There's a lot of rigging to reduce processing load that you can do for a fixed cinematic clip that you just can't do for a live game that lets the player go anywhere.  Make it a larger, outdoor scene with 50 times as many models and so many of your memory accesses go off the chip that maybe you only get 1 frame per second.

    The monitor resolution and level of anti-aliasing are also left unstated, which likely means a low resolution and no anti-aliasing.  Double the number of rays to cast (either by doubling the number of pixels or the anti-aliasing rate) and you double the workload.  That can very quickly get you measuring the number of seconds per frame rather than the other way around.
    Didn't that video look pretty cool, though?  I would love to see Ray Tracing come to desktop GPUs if it's going to deliver scenes like that!
    Absolutely, it looked cool.  But if it's going to mean playing games at 24 frames per second, a 640x480 resolution, and no anti-aliasing, I think that would probably look worse than what we can do with rasterization now.

    Movies have been using ray-tracing for years now.  But they don't have to render frames in real time.  If it takes a computer an hour per frame, that's completely acceptable on a big-budget movie.  If the computations can be done, then given massively more processing power, you could do them in real-time for gaming purposes.

    The problem is that going from an hour per frame to 60 frames per second is a performance difference of 216,000x.  That's about 36 years' worth of Moore's Law improvements, and Moore's Law is pretty much guaranteed to be thoroughly dead long before then.  Moore's Law is on life support already, and to still be going steady 20 years from now would require placing individual atoms without quantum mechanics getting in the way.
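
The same arithmetic worked out as a short Python sketch, assuming the usual rule of thumb of one Moore's Law doubling every two years:

```python
import math

# One hour per offline movie frame versus 60 frames per second in real time,
# converted into Moore's Law doublings at two years per doubling (rule of thumb).

seconds_per_frame_offline = 3600          # one hour per rendered frame
target_fps = 60
speedup_needed = seconds_per_frame_offline * target_fps   # 216,000x

doublings = math.log2(speedup_needed)     # ~17.7 doublings of performance
years = 2 * doublings                     # ~35 years, roughly the figure above

print(f"Required speedup: {speedup_needed:,}x")
print(f"Doublings needed: {doublings:.1f}")
print(f"Years at one doubling every two years: about {years:.0f}")
```
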
  • Quizzical Member LegendaryPosts: 25,355
    Quizzical said:
    Quizzical said:
    That kind of proves the point on ray-tracing being so expensive.  It took 4 high-end GPUs to get 24 frames per second.  Ray-tracing doesn't have the obvious impediments to multi-GPU scaling that rasterization does, so it likely scales well to multiple GPUs, in which case, we're talking about 6 frames per second on one GPU.

    And to get those 6 frames per second, they had to render a small, enclosed room so that there weren't very many things to draw.  It likely had everything rigged to fit in the GPU's L2 cache.  There's a lot of rigging to reduce processing load that you can do for a fixed cinematic clip that you just can't do for a live game that lets the player go anywhere.  Make it a larger, outdoor scene with 50 times as many models and so many of your memory accesses go off the chip that maybe you only get 1 frame per second.

    The monitor resolution and level of anti-aliasing are also left unstated, which likely means a low resolution and no anti-aliasing.  Double the number of rays to cast (either by doubling the number of pixels or the anti-aliasing rate) and you double the workload.  That can very quickly get you measuring the number of seconds per frame rather than the other way around.
    Didn't that video look pretty cool, though?  I would love to see Ray Tracing come to desktop GPUs if it's going to deliver scenes like that!
    Absolutely, it looked cool.  But if it's going to mean playing games at 24 frames per second, a 640x480 resolution, and no anti-aliasing, I think that would probably look worse than what we can do with rasterization now.

    Movies have been using ray-tracing for years now.  But they don't have to render frames in real time.  If it takes a computer an hour per frame, that's completely acceptable on a big-budget movie.  If the computations can be done, then given massively more processing power, you could do them in real-time for gaming purposes.

    The problem is that going from an hour per frame to 60 frames per second is a performance difference of 216,000x.  That's about 36 years' worth of Moore's Law improvements, and Moore's Law is pretty much guaranteed to be thoroughly dead long before then.  Moore's Law is on life support already, and to still be going steady 20 years from now would require placing individual atoms without quantum mechanics getting in the way.
    Damn ... I was hoping to see this as early as 10 years from now.  Well, hopefully I'm not too old to properly enjoy gaming by the time it arrives.
    A lot depends on how many restrictions you're willing to put on it.  For a 1 on 1 fighting game akin to Street Fighter or Mortal Kombat, with only two characters in the entire scene and nothing else that moves, they might be able to go full ray-tracing if so inclined within a few years.  For an MMORPG that can have dozens of players or mobs and all sorts of terrain that you can see from far away, I don't expect that ray-tracing will ever be a sensible approach.
  • Cleffy Member RarePosts: 6,412
    edited March 2018
    I really just don't see how ray tracing would be useful for video games right now. I know AMD has been demoing this since 2006 and no one pursued it. What a GPU can ray trace in real time today doesn't even include the big benefits of ray tracing for realism: no refraction, no subsurface scattering. Why even bother when you can fake these well enough with rasterization now?
  • MadFrenchie Member LegendaryPosts: 8,505
    Quizzical said:
    Quizzical said:
    Quizzical said:
    That kind of proves the point on ray-tracing being so expensive.  It took 4 high-end GPUs to get 24 frames per second.  Ray-tracing doesn't have the obvious impediments to multi-GPU scaling that rasterization does, so it likely scales well to multiple GPUs, in which case, we're talking about 6 frames per second on one GPU.

    And to get those 6 frames per second, they had to render a small, enclosed room so that there weren't very many things to draw.  It likely had everything rigged to fit in the GPU's L2 cache.  There's a lot of rigging to reduce processing load that you can do for a fixed cinematic clip that you just can't do for a live game that lets the player go anywhere.  Make it a larger, outdoor scene with 50 times as many models and so many of your memory accesses go off the chip that maybe you only get 1 frame per second.

    The monitor resolution and level of anti-aliasing are also left unstated, which likely means a low resolution and no anti-aliasing.  Double the number of rays to cast (either by doubling the number of pixels or the anti-aliasing rate) and you double the workload.  That can very quickly get you measuring the number of seconds per frame rather than the other way around.
    Didn't that video look pretty cool, though?  I would love to see Ray Tracing come to desktop GPUs if it's going to deliver scenes like that!
    Absolutely, it looked cool.  But if it's going to mean playing games at 24 frames per second, a 640x480 resolution, and no anti-aliasing, I think that would probably look worse than what we can do with rasterization now.

    Movies have been using ray-tracing for years now.  But they don't have to render frames in real time.  If it takes a computer an hour per frame, that's completely acceptable on a big-budget movie.  If the computations can be done, then given massively more processing power, you could do them in real-time for gaming purposes.

    The problem is that going from an hour per frame to 60 frames per second is a performance difference of 216,000x.  That's about 36 years' worth of Moore's Law improvements, and Moore's Law is pretty much guaranteed to be thoroughly dead long before then.  Moore's Law is on life support already, and to still be going steady 20 years from now would require placing individual atoms without quantum mechanics getting in the way.
    Damn ... I was hoping to see this as early as 10 years from now.  Well, hopefully I'm not too old to properly enjoy gaming by the time it arrives.
    A lot depends on how many restrictions you're willing to put on it.  For a 1 on 1 fighting game akin to Street Fighter or Mortal Kombat, with only two characters in the entire scene and nothing else that moves, they might be able to go full ray-tracing if so inclined within a few years.  For an MMORPG that can have dozens of players or mobs and all sorts of terrain that you can see from far away, I don't expect that ray-tracing will ever be a sensible approach.
    Yea, but what if aliens, man???  They'll know how!
  • Quizzical Member LegendaryPosts: 25,355
    Quizzical said:
    A lot depends on how many restrictions you're willing to put on it.  For a 1 on 1 fighting game akin to Street Fighter or Mortal Kombat, with only two characters in the entire scene and nothing else that moves, they might be able to go full ray-tracing if so inclined within a few years.  For an MMORPG that can have dozens of players or mobs and all sorts of terrain that you can see from far away, I don't expect that ray-tracing will ever be a sensible approach.
    Yea, but what if aliens, man???  They'll know how!
    But what if they know how to do massively better stuff with rasterization than we can, so ray-tracing still doesn't make sense for them?
  • MadFrenchie Member LegendaryPosts: 8,505
    Quizzical said:
    Quizzical said:
    A lot depends on how many restrictions you're willing to put on it.  For a 1 on 1 fighting game akin to Street Fighter or Mortal Kombat, with only two characters in the entire scene and nothing else that moves, they might be able to go full ray-tracing if so inclined within a few years.  For an MMORPG that can have dozens of players or mobs and all sorts of terrain that you can see from far away, I don't expect that ray-tracing will ever be a sensible approach.
    Yea, but what if aliens, man???  They'll know how!
    But what if they know how to do massively better stuff with rasterization than we can, so ray-tracing still doesn't make sense for them?
    Holy shit, dude.  Where's that exploding head gif when you need it???

  • Vrika Member LegendaryPosts: 7,888
    Quizzical said:
    Vrika said:
    There's some speculation that the Tensor Cores in NVidia's Volta could be used to enhance ray-tracing speed
      https://www.pcgamer.com/nvidia-talks-ray-tracing-and-volta-hardware/

    If it turns out to be true, then NVidia might be about to launch a line of very expensive ray-tracing-enabled GPUs, and a cheaper product line of GPUs not meant for ray-tracing.

    This post is only wild guesswork, but with all the money NVidia is investing in their AI/deep learning hardware, I bet they'd love any excuse to sell it to high-end gamers as well.
    I would regard that as extremely unlikely.  In my original post, I said that the performance problem with ray-tracing is that it breaks the SIMD and memory coalescence optimizations that work so well with rasterization.  Using the tensor cores isn't just SIMD, but a very, very restricted version of SIMD that hardly anything can use.

    GPUs are pretty heavily optimized for floating-point FMA operations, where fma(a, b, c) = a * b + c, as a single instruction, with all of the variables being floats.  The tensor cores can basically do that same operation, except that a, b, and c are half-precision 4x4 matrices, at 1/8 of the throughput of doing the same thing with floats.  That's a huge win if you need massive amounts of it, as doing the matrix multiply-add naively would be 64 instructions (counted explicitly in the sketch at the end of this post).  Being able to do that at 1/8 of the throughput of a single instruction is an 8x speed improvement.

    The problem is that basically nothing fits that setup.  Pretty much nothing in graphics does.  Pretty much nothing in non-graphical compute does.  Nvidia thinks that machine learning will, which strikes me as plausible, though I haven't had a look at the fine details.  But I'd think of the tensor cores as being special-purpose silicon to do one dedicated thing (i.e., machine learning), kind of like the video decode block or tessellation units in a GPU, or the AES-NI instructions in a CPU.

    What's far more likely is that Nvidia is getting considerable mileage out of their beefed-up L2 cache in Maxwell and later GPUs.  So long as the scene is simple enough that most memory accesses can go to L2 cache rather than off-chip DRAM, the memory bandwidth problems wouldn't be as bad.
    Apparently the Tensor Cores wouldn't do the ray tracing itself, but they could be used to de-noise the result and give better quality with fewer traced rays.
      https://www.pcper.com/news/Graphics-Cards/NVIDIA-RTX-Technology-Accelerates-Ray-Tracing-Microsoft-DirectX-Raytracing-API
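
For reference, the 64-instruction count mentioned above can be verified with a naive 4x4 matrix multiply-accumulate written out as scalar FMAs. This is only an illustration of the operation count; matmul_add_4x4 is a made-up helper and says nothing about how a tensor core is actually implemented in hardware.

```python
# Naive 4x4 matrix multiply-accumulate, D = A*B + C, written out as scalar
# fused multiply-adds so the 64-instruction count can be checked directly.

def matmul_add_4x4(A, B, C):
    """Return (A*B + C, number of scalar FMA operations) for 4x4 matrices."""
    D = [[0.0] * 4 for _ in range(4)]
    fma_count = 0
    for i in range(4):
        for j in range(4):
            acc = C[i][j]
            for k in range(4):
                acc = A[i][k] * B[k][j] + acc   # one scalar fma(a, b, c)
                fma_count += 1
            D[i][j] = acc
    return D, fma_count

identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
_, fmas = matmul_add_4x4(identity, identity, identity)
print(fmas)  # 64 -- doing all of this at 1/8 the throughput of a single
             # scalar FMA instruction is the 8x win described above
```
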
     