AMD announces Navi 21 GPU to compete with Nvidia at the high end

Quizzical Member Legendary Posts: 25,355
The official lineup is:

Radeon RX 6900 XT, 80 CU, $999, 300 W, December 8
Radeon RX 6800 XT, 72 CU, $649, 300 W, November 18
Radeon RX 6800, 60 CU, $579, 250 W, November 18

AMD showed off benchmarks of the 6900 XT mostly being a little faster than the RTX 3090, the 6800 XT mostly being a little faster than the RTX 3080, and the RX 6800 handily beating the RTX 2080 Ti.  The big question is how typical those benchmarks are.  Considering AMD's history of mild cherry-picking in their benchmarks, I'm guessing that they've priced their cards in line with their performance as compared to the RTX 3090, 3080, and 3070.

All three of the new GPUs will have 16 GB of memory.  That's a lot more than the RTX 3080 or 3070 have.  They also have a 128 MB "infinity cache", which I'm guessing is just a marketing name for their L2 cache.  I've explained how that could help with graphics in the past.  AMD also claims that it helps a lot with caching ray-tracing data.  With a 256-bit memory bus, AMD's cards will have a lot less memory bandwidth than the RTX 3080 or 3090, but the large cache might roughly make up for that in graphics.
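
For rough numbers on that bandwidth gap, here's a back-of-the-envelope sketch in Python.  It assumes 16 Gbps GDDR6 on the Radeons and the 19 and 19.5 Gbps GDDR6X on the RTX 3080 and 3090; treat those data rates as illustrative assumptions rather than confirmed specs.

    # Peak theoretical memory bandwidth: (bus width in bits / 8) * data rate in Gbps = GB/s.
    def peak_bandwidth_gb_s(bus_width_bits, data_rate_gbps):
        return bus_width_bits / 8 * data_rate_gbps

    cards = {
        "Radeon RX 6800 / 6800 XT / 6900 XT (256-bit, 16 Gbps GDDR6)": (256, 16.0),
        "GeForce RTX 3080 (320-bit, 19 Gbps GDDR6X)": (320, 19.0),
        "GeForce RTX 3090 (384-bit, 19.5 Gbps GDDR6X)": (384, 19.5),
    }
    for name, (bus, rate) in cards.items():
        print(f"{name}: {peak_bandwidth_gb_s(bus, rate):.0f} GB/s")
    # Roughly 512, 760, and 936 GB/s -- that's the gap the 128 MB cache has to paper over.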

The new cards do support DirectX 12 Ultimate, and that includes ray-tracing.  That's largely expected, but still good to have it confirmed.

AMD is also pushing "smart access memory".  Basically, if you're using a Ryzen 5000 series CPU and a Radeon RX 6000 series GPU, then the CPU has full access to the GPU's memory.  Of course, CPUs already have the ability to tell a GPU to put stuff into memory, process it, and pull the results back to the CPU.  There is a little bit of memory that is reserved for the GPU's internal use, but giving the CPU explicit access to that seems like a dumb idea to me.  I'm pessimistic that "smart access memory" will ever do anything useful; to me, it sure looks like a marketing gimmick.  Or as I often say, any product or concept that feels the need to call itself "smart" isn't.
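
To be concrete about what already exists without any "smart access memory", the normal copy-in, compute, copy-out flow looks something like the sketch below.  This is a minimal pyopencl example of ordinary explicit transfers, assuming any OpenCL-capable GPU; it is not AMD's new feature, just the baseline it's being compared against.

    import numpy as np
    import pyopencl as cl

    # Set up a context and command queue on whatever OpenCL device is available.
    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    # Host data the CPU wants the GPU to process.
    host_in = np.arange(1 << 20, dtype=np.float32)
    host_out = np.empty_like(host_in)

    mf = cl.mem_flags
    # Step 1: the CPU tells the GPU to put the data into GPU memory.
    dev_in = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=host_in)
    dev_out = cl.Buffer(ctx, mf.WRITE_ONLY, host_out.nbytes)

    # Step 2: the GPU processes it (a trivial kernel that doubles each value).
    prg = cl.Program(ctx, """
    __kernel void double_it(__global const float *src, __global float *dst) {
        int gid = get_global_id(0);
        dst[gid] = 2.0f * src[gid];
    }
    """).build()
    prg.double_it(queue, host_in.shape, None, dev_in, dev_out)

    # Step 3: the CPU pulls the results back.
    cl.enqueue_copy(queue, host_out, dev_out)
    queue.finish()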

Comments

  • Cleffy Member Rare Posts: 6,412
    I think benchmarks are necessary here. On paper the RTX 3090 should be faster than the RX 6900 XT. I would also be interested in how these cards perform in Blender.
  • Lobotomist Member Epic Posts: 5,965
    That's completely stupid. They are competing for "enthusiast" graphics card spots, while the PS5 is cheaper than a single graphics card. Not a whole PC, just one component inside it.

    They may win the battle for the top spot, but they will lose the whole battleground.

    If they continue like this, there will be no more PC gaming, because average-income people will just buy consoles.

    It's easy to cram a top-priced component into some monster piece of hardware that is quickly growing bigger than a whole PC and draws as much power as a toaster oven.

    It's hard to make a card so good that it's both affordable and effective.

    Whoever makes that will be the true winner.



  • Quizzical Member Legendary Posts: 25,355
    They announce that each new generation... and we all know how it ends.
    When they have a big die, it usually ends with them being competitive with Nvidia at the high end.  The Fury X was about as fast as the GTX 980 Ti.  The R9 290X was about as fast as the GTX 780 Ti.  The 7970 was about as fast as the GTX 680.  The 6970 wasn't that much slower than the GTX 580, in spite of being a lot smaller.

    The Radeon RX Vega 64 was sure a miss, as it wasn't competitive with the GTX 1080 Ti.  But that's really the only generation AMD has had in the last 13 years that wasn't competitive where it was supposed to be.  The RX 5700 XT and RX 480 weren't competitive at the high end, but small dies like that aren't supposed to be.  That's like claiming that Nvidia is a failure because the GTX 1660 isn't nearly as fast as the Radeon RX 5700 XT.  A small die isn't supposed to be a high end card.
  • Quizzical Member Legendary Posts: 25,355
    Lobotomist said:
    That's completely stupid. They are competing for "enthusiast" graphics card spots, while the PS5 is cheaper than a single graphics card. Not a whole PC, just one component inside it.

    They may win the battle for the top spot, but they will lose the whole battleground.

    If they continue like this, there will be no more PC gaming, because average-income people will just buy consoles.

    It's easy to cram a top-priced component into some monster piece of hardware that is quickly growing bigger than a whole PC and draws as much power as a toaster oven.

    It's hard to make a card so good that it's both affordable and effective.

    Whoever makes that will be the true winner.
    There will be plenty of cheaper, lower-end cards, just as there are every generation.  If you want a $100 or $200 or $300 video card, AMD and Nvidia have plenty of options for you.  Also, building a $650 video card doesn't prevent anyone who wants a cheaper card from buying one.
  • kitarad Member Legendary Posts: 7,919
    Good! All competition is good for the consumer.

  • Nanfoodle Member Legendary Posts: 10,617
    I love me some Price Wars. 


  • Cleffy Member Rare Posts: 6,412
    Benchmarks may show AMD performing a lot better now compared to the previous generation, simply due to fewer games running on DX11. However, the influence Nvidia has over developers is a point of concern that AMD has never addressed.
  • Ridelynn Member Epic Posts: 7,383
    remsleep said:
    Cleffy said:
    Benchmarks may show AMD performing a lot better now compared to the previous generation, simply due to fewer games running on DX11. However, the influence Nvidia has over developers is a point of concern that AMD has never addressed.


    Also, the rumors are that AMD's hardware ray tracing is much slower than Nvidia's RTX; it will be nice to see the actual benchmarks.
    Maybe.

    Need benchmarks, as you say, to make sure we are seeing apples to apples.
  • Quizzical Member Legendary Posts: 25,355
    remsleep said:
    Cleffy said:
    Benchmarks may show AMD performing a lot better now compared to the previous generation, simply due to fewer games running on DX11. However, the influence Nvidia has over developers is a point of concern that AMD has never addressed.


    Also, the rumors are that AMD's hardware ray tracing is much slower than Nvidia's RTX; it will be nice to see the actual benchmarks.
    I've seen a claim that Ampere is much faster than Navi 2X in some particular synthetic benchmark that uses something from ray tracing.  But depending on what that benchmark is measuring, it may or may not matter to real games.

    Remember how Fermi was something like 12x as fast as Evergreen in tessellation?  That sure didn't matter in actual gameplay, even in games that used tessellation.  Well, except for that one random, Nvidia-sponsored game that had invisible water under the level and massively tessellated it and then discarded it.

    It's not unusual for synthetic benchmarks to show really lopsided results.  For example, overall, a GeForce RTX 2080 Ti is a lot faster than a Radeon RX Vega 64 in nearly all games.  But if you do a synthetic benchmark of local memory bandwidth, the Vega 64 will win by a wide margin and be something like 60% faster than the RTX 2080 Ti.  If you do a synthetic benchmark of L2 cache bandwidth, the RTX 2080 Ti will probably be several times as fast as the Vega 64.  Those are both likely to be more relevant to actual game performance than the synthetic benchmark that you're citing.  And that's without wandering way off into the weeds to benchmark performance in an instruction that one architecture has and another doesn't.
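
    To illustrate what a narrow synthetic bandwidth benchmark actually measures, here is a crude host-side analogue in Python: it just times one big array copy and divides by the elapsed time.  GPU-side versions do the same thing with device buffers; either way, a single number like this measures one thing in isolation and doesn't have to track real game performance.

        import time
        import numpy as np

        N = 1 << 27                   # ~128 million floats, ~512 MB per buffer
        src = np.ones(N, dtype=np.float32)
        dst = np.empty_like(src)

        start = time.perf_counter()
        np.copyto(dst, src)           # the entire "benchmark" is one big copy
        elapsed = time.perf_counter() - start

        bytes_moved = 2 * src.nbytes  # read src once, write dst once
        print(f"Copy bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")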
  • Abscissa15 Member Uncommon Posts: 69
    Cleffy said:
    I think benchmarks are necessary here. On paper the RTX 3090 should be faster than the RX 6900 XT. I would also be interested in how these cards perform in Blender.

    So, in order to demonstrate the effects of smart access memory, either (1) pair a Ryzen 5000 with a 6900 XT and then with a 3090 and compare test results, or (2) pair a 6900 XT with a Ryzen 5000 and then with an i9-10900? Or both?
  • Vrika Member Legendary Posts: 7,888
    Cleffy said:
    I think benchmarks are necessary here. On paper the RTX 3090 should be faster than the RX 6900 XT. I would also be interested in how these cards perform in Blender.

    So, in order to demonstrate the effects of smart access memory, either (1) pair a Ryzen 5000 with a 6900 XT and then with a 3090 and compare test results, or (2) pair a 6900 XT with a Ryzen 5000 and then with an i9-10900? Or both?
    I think the review websites will find something to test with as soon as the review embargo gets lifted.
     
  • Vrika Member Legendary Posts: 7,888
    Lobotomist said:
    That's completely stupid. They are competing for "enthusiast" graphics card spots, while the PS5 is cheaper than a single graphics card. Not a whole PC, just one component inside it.

    They may win the battle for the top spot, but they will lose the whole battleground.

    If they continue like this, there will be no more PC gaming, because average-income people will just buy consoles.

    It's easy to cram a top-priced component into some monster piece of hardware that is quickly growing bigger than a whole PC and draws as much power as a toaster oven.

    It's hard to make a card so good that it's both affordable and effective.

    Whoever makes that will be the true winner.
    Let me know when the PS5 can create and render 10,000+ particles in After Effects.
    As soon as rendering stuff in After Effects becomes mainstream entertainment?
     