
Newegg Releases RX 6000 XT Specs

Mars_OMG Member Epic Posts: 2,847
edited September 27 in Hardware
https://www.newegg.com/insider/how-to-choose-graphics-card/

now edited 


Radeon RX 6700 XT
Stream processors: 2560
Base clock: 1500 MHz
GDDR6: 6 GB
Memory bandwidth: 384 GB/s
Bus width: 192-bit
TDP: 150 W

Radeon RX 6800 XT
Stream processors: 3840
Base clock: 1500 MHz
GDDR6: 12 GB
Memory bandwidth: 384 GB/s
Bus width: 192-bit
TDP: 200 W

Radeon RX 6900 XT
Stream processors: 5120
Base clock: 1500 MHz
GDDR6: 16 GB
Memory bandwidth: 512 GB/s
Bus width: 256-bit
TDP: 300 W


If this is true, combining those specs with RDNA 2 will be an amazing achievement in microprocessing.
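
As a rough sanity check (my own sketch, not from the article): those bandwidth numbers line up with the bus widths if you assume a 16 Gbps effective GDDR6 data rate, which the list above doesn't state.

# Bandwidth (GB/s) = bus width (bits) * per-pin data rate (Gbps) / 8.
# The 16 Gbps rate is an assumption; the Newegg list gives no memory clock.
def gddr6_bandwidth_gb_s(bus_width_bits, data_rate_gbps=16.0):
    return bus_width_bits * data_rate_gbps / 8

print(gddr6_bandwidth_gb_s(192))  # 384.0 -> matches the 6700 XT / 6800 XT figure
print(gddr6_bandwidth_gb_s(256))  # 512.0 -> matches the 6900 XT figure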
- abandoning social media could possibly save the world.  

Torval

Comments

  • Cleffy Member Rare Posts: 6,274
    I'm surprised they went with GDDR6 instead of HBM2 on the higher end chips.
  • Torval Member Legendary Posts: 20,297
    Cleffy said:
    I'm surprised they went with GDDR6 instead of HBM2 on the higher end chips.

    HBM2 is still way too expensive, both to manufacture the chips and to produce the cards.
    blueturtle13
    Fedora - A modern, free, and open source Operating System. https://getfedora.org/

    traveller, interloper, anomaly, iteration


  • Torval Member Legendary Posts: 20,297
    Very cool. TechPowerUp has a similar and interesting take on those too. I'm still going to wait for benchmarks and comparisons and possibly the respins that come next year. It really depends on whether I can wait to build my next system or have to do it earlier.


    Product Name | GPU Chip | Released | Bus | Memory | GPU clock | Memory clock | Shaders / TMUs / ROPs
    Radeon RX 6500 XT | Navi 23 | Oct 28th, 2020 | PCIe 4.0 x8 | 10 GB, GDDR6, 160 bit | 1489 MHz | 2000 MHz | 2048 / 128 / 64
    Radeon RX 6700 XT | Navi 22 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 2560 / 192 / 64
    Radeon RX 6800 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 3840 / 240 / 64
    Radeon RX 6900 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 16 GB, GDDR6, 256 bit | 1350 MHz | 2000 MHz | 5120 / 320 / 96


    Fedora - A modern, free, and open source Operating System. https://getfedora.org/

    traveller, interloper, anomaly, iteration


  • blueturtle13 Member Legendary Posts: 12,518
    edited September 28
    Torval said:
    Very cool. TechPowerUp has a similar and interesting take on those too. I'm still going to wait for benchmarks and comparisons and possibly the respins that come next year. It really depends on whether I can wait to build my next system or have to do it earlier.


    Product Name | GPU Chip | Released | Bus | Memory | GPU clock | Memory clock | Shaders / TMUs / ROPs
    Radeon RX 6500 XT | Navi 23 | Oct 28th, 2020 | PCIe 4.0 x8 | 10 GB, GDDR6, 160 bit | 1489 MHz | 2000 MHz | 2048 / 128 / 64
    Radeon RX 6700 XT | Navi 22 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 2560 / 192 / 64
    Radeon RX 6800 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 3840 / 240 / 64
    Radeon RX 6900 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 16 GB, GDDR6, 256 bit | 1350 MHz | 2000 MHz | 5120 / 320 / 96


    If that 6800 XT is $500 to $600ish, it is going to be hard to pass up.
    If the 6900 XT is even slightly under $1,000, it will really pressure Nvidia.
    Post edited by blueturtle13 on
    Torval

    A turtle doesn't move when it sticks its neck out.


  • Mars_OMG Member Epic Posts: 2,847
    Torval said:
    Very cool. TechPowerUp has a similar and interesting take on those too. I'm still going to wait for benchmarks and comparisons and possibly the respins that come next year. It really depends on whether I can wait to build my next system or have to do it earlier.


    Product Name | GPU Chip | Released | Bus | Memory | GPU clock | Memory clock | Shaders / TMUs / ROPs
    Radeon RX 6500 XT | Navi 23 | Oct 28th, 2020 | PCIe 4.0 x8 | 10 GB, GDDR6, 160 bit | 1489 MHz | 2000 MHz | 2048 / 128 / 64
    Radeon RX 6700 XT | Navi 22 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 2560 / 192 / 64
    Radeon RX 6800 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 3840 / 240 / 64
    Radeon RX 6900 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 16 GB, GDDR6, 256 bit | 1350 MHz | 2000 MHz | 5120 / 320 / 96


    If that 6800 XT is $500 to $600ish, it is going to be hard to pass up.
    If the 6900 XT is even slightly under $1,000, it will really pressure Nvidia.
    The RX 6800 XT might come in even lower, because AMD has never really had the luxury of showing their cards after Nvidia. If RDNA 2 delivers, we could be looking at a completely new era of GPUs.
    blueturtle13
    - abandoning social media could possibly save the world.  

  • Quizzical Member Legendary Posts: 22,379
    Torval said:
    Very cool. TechPowerUp has a similar and interesting take on those too. I'm still going to wait for benchmarks and comparisons and possibly the respins that come next year. It really depends on whether I can wait to build my next system or have to do it earlier.


    Product Name | GPU Chip | Released | Bus | Memory | GPU clock | Memory clock | Shaders / TMUs / ROPs
    Radeon RX 6500 XT | Navi 23 | Oct 28th, 2020 | PCIe 4.0 x8 | 10 GB, GDDR6, 160 bit | 1489 MHz | 2000 MHz | 2048 / 128 / 64
    Radeon RX 6700 XT | Navi 22 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 2560 / 192 / 64
    Radeon RX 6800 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 3840 / 240 / 64
    Radeon RX 6900 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 16 GB, GDDR6, 256 bit | 1350 MHz | 2000 MHz | 5120 / 320 / 96


    Those specs strike me as suspicious to the extent that they're almost certainly just a random person making things up.

    For starters, AMD almost invariably uses powers of 2 for the memory bus width.  The Radeon RX 5600 XT didn't, but that was the first exception since Tahiti way back in 2012.  That doesn't mean that AMD has to do so.  Nvidia commonly doesn't.  But for the top bin of a GPU die to have a 160-bit bus width has probably never happened.

    Next, releases of different GPUs tend to be staggered, not all on the same day.  Even different bins of the same GPU tend to have different launch days.  I can't think of a time that either AMD or Nvidia has ever launched two different GPU dies on the same day, let alone three.

    Third, who makes a desktop card with a PCI Express x8 connection?  It's pretty much always x16, with only the exception of the rare SKU of a low end card intended to go in an open x1 slot or something.

    The salvage part strategy there is also atypical.  There's a bigger gap in specs between the 6800 and 6900 than between the 6500 and 6700.  But the latter pair are supposedly different dies, while the former are two bins of the same die.  The only way that it makes sense for AMD to make Navi 22 and Navi 23 dies with that similar of specs is if Navi 23 is a custom part for one particular customer and won't be sold to the general public.  Apple is the only plausible customer that comes to mind, and even that isn't very plausible unless it's a laptop card--which also isn't plausible because that's way too much power for an Apple laptop.

    Even the two bins of Navi 21 strike me as suspicious.  I could maybe believe them if there's also a Radeon RX 6900 in between, but that would usually launch long before the 6800 XT.  The extreme salvage bin has to wait a while for AMD to see what yields look like in order to choose the specs.  But as the top two bins of a flagship GPU, they would be extremely atypical for AMD.

    The way that AMD usually handles their top two bins of a flagship GPU is that the top bin has a higher clock speed than the second bin, and the second bin disables much less than 1/4 of the compute units.  Meanwhile, both bins have the same bus width, though the second bin down sometimes clocks it lower.  That was true of Navi 10, Vega 10, Polaris 10, Fiji, Hawaii, Tahiti, Cayman, Cypress, RV770, RV670, and R600--the flagship of literally every generation that AMD has launched since buying ATI.  The fraction of compute units disabled on the second bin down on those chips was 1/10, 1/8, 1/9, 1/8, 1/11, 1/8, 1/12, 1/10, 0, 0, and 0.

    That doesn't mean that AMD can't go in a different direction this generation.  But it would be very atypical for them to suddenly disable twice as big of a fraction of the compute units as they've ever done before in a second bin.  It would be atypical for the second bin to be clocked higher than the top bin.  It would be atypical for the second bin to not have the full memory bus width.  And because of that, I'm skeptical that it is what AMD will actually do.
    laserit, Torval, Mars_OMG
  • Torval Member Legendary Posts: 20,297
    edited September 28
    Those Tech PowerUp specs likely aren't completely accurate, like you point out. Some seem close to target, though. It makes for interesting speculation and food for thought until AMD releases cards and specs. That list of cards seems like an ever-evolving 'wiki' of information.

    One thing I'm suspicious of in both sites posted above is the power specs. I don't think the 6900 XT is going to be a 300 W card. I'm very interested in how the power requirements will work out with performance.
    Fedora - A modern, free, and open source Operating System. https://getfedora.org/

    traveller, interloper, anomaly, iteration


  • Vrika Member Epic Posts: 6,602
    Quizzical said:
    Torval said:
    Very cool. TechPowerUp has a similar and interesting take on those too. I'm still going to wait for benchmarks and comparisons and possibly the respins that come next year. It really depends on whether I can wait to build my next system or have to do it earlier.


    Product Name | GPU Chip | Released | Bus | Memory | GPU clock | Memory clock | Shaders / TMUs / ROPs
    Radeon RX 6500 XT | Navi 23 | Oct 28th, 2020 | PCIe 4.0 x8 | 10 GB, GDDR6, 160 bit | 1489 MHz | 2000 MHz | 2048 / 128 / 64
    Radeon RX 6700 XT | Navi 22 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 2560 / 192 / 64
    Radeon RX 6800 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 3840 / 240 / 64
    Radeon RX 6900 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 16 GB, GDDR6, 256 bit | 1350 MHz | 2000 MHz | 5120 / 320 / 96


    Those specs strike me as suspicious to the extent that they're almost certainly just a random person making things up....
    Those TechPowerUp specs are:

    "Based off of speculation/reddit and other sources online. Placeholder"

    I think we should ignore them. TechPowerUp is openly admitting that they're just copying some internet rumor while waiting for real information.

    Newegg Insider is a more trustworthy source because they could have insider information, but I think we still have to just wait and see.
     
  • Torval Member Legendary Posts: 20,297
    Vrika said:
    Quizzical said:
    Torval said:
    Very cool. TechPowerUp has a similar and interesting take on those too. I'm still going to wait for benchmarks and comparisons and possibly the respins that come next year. It really depends on whether I can wait to build my next system or have to do it earlier.


    Product Name | GPU Chip | Released | Bus | Memory | GPU clock | Memory clock | Shaders / TMUs / ROPs
    Radeon RX 6500 XT | Navi 23 | Oct 28th, 2020 | PCIe 4.0 x8 | 10 GB, GDDR6, 160 bit | 1489 MHz | 2000 MHz | 2048 / 128 / 64
    Radeon RX 6700 XT | Navi 22 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 2560 / 192 / 64
    Radeon RX 6800 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 3840 / 240 / 64
    Radeon RX 6900 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 16 GB, GDDR6, 256 bit | 1350 MHz | 2000 MHz | 5120 / 320 / 96


    Those specs strike me as suspicious to the extent that they're almost certainly just a random person making things up....
    Those TechPowerUp specs are:

    "Based off of speculation/reddit and other sources online. Placeholder"

    I think we should ignore them. TechPowerUp is openly admitting that they're just copying some internet rumor while waiting for real information.

    Newegg Insider is a more trustworthy source because they could have insider information, but I think we still have to just wait and see.

    You really like to handwave away things that make you uncomfortable. Of course it's speculation. AMD hasn't released the info yet; that should be obvious common sense. Speculation is fun because of the kind of analytical posts Quizzical made explaining why he thinks some of those points are way off base. On the other hand, some of those specs match the Newegg Insider report exactly, and yet you want to dismiss it. I learned a lot from Quizzical's explanation.

    Any information under NDA can't be shared without potential legal complications, and it should all be taken with a grain of salt. Jay and MLD both are careful to explain this when speculating.

    Regardless of the details, AMD is coming out of the gate quite strong this season, and that sort of competition is good for consumers. People doubted their position against Intel too, and now their CPU business is crushing it.

    I seriously doubt both Sony and Microsoft would have chosen AMD for their internals this console gen if AMD were weak.
    Mars_OMG
    Fedora - A modern, free, and open source Operating System. https://getfedora.org/

    traveller, interloper, anomaly, iteration


  • Cleffy Member Rare Posts: 6,274
    edited September 28
    I really expect AMD to eventually come out with a multi-chiplet GPU and just scale the GPU linearly until a certain TDP is met. But the process may be more involved than it was with Ryzen, and it may have issues that a GPU faces which a CPU wouldn't.
    Now that I think about it, GDDR6 makes sense here for consumer cards.
  • Quizzical Member Legendary Posts: 22,379
    Torval said:
    Vrika said:
    Quizzical said:
    Torval said:
    Very cool. TechPowerUp has a similar and interesting take on those too. I'm still going to wait for benchmarks and comparisons and possibly the respins that come next year. It really depends on whether I can wait to build my next system or have to do it earlier.


    Product Name | GPU Chip | Released | Bus | Memory | GPU clock | Memory clock | Shaders / TMUs / ROPs
    Radeon RX 6500 XT | Navi 23 | Oct 28th, 2020 | PCIe 4.0 x8 | 10 GB, GDDR6, 160 bit | 1489 MHz | 2000 MHz | 2048 / 128 / 64
    Radeon RX 6700 XT | Navi 22 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 2560 / 192 / 64
    Radeon RX 6800 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 3840 / 240 / 64
    Radeon RX 6900 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 16 GB, GDDR6, 256 bit | 1350 MHz | 2000 MHz | 5120 / 320 / 96


    Those specs strike me as suspicious to the extent that they're almost certainly just a random person making things up....
    Those TechPowerUp specs are:

    "Based off of speculation/reddit and other sources online. Placeholder"

    I think we should ignore them. TechPowerUp is openly admitting that they're just copying some internet rumor while waiting for real information.

    Newegg Insider is a more trustworthy source because they could have insider information, but I think we still have to just wait and see.

    You really like to handwave away things that make you uncomfortable. Of course it's speculation. AMD hasn't released the info yet; that should be obvious common sense. Speculation is fun because of the kind of analytical posts Quizzical made explaining why he thinks some of those points are way off base. On the other hand, some of those specs match the Newegg Insider report exactly, and yet you want to dismiss it. I learned a lot from Quizzical's explanation.

    Any information under NDA can't be shared without potential legal complications, and it should all be taken with a grain of salt. Jay and MLD both are careful to explain this when speculating.

    Regardless of the details, AMD is coming out of the gate quite strong this season, and that sort of competition is good for consumers. People doubted their position against Intel too, and now their CPU business is crushing it.

    I seriously doubt both Sony and Microsoft would have chosen AMD for their internals this console gen if AMD were weak.
    Whenever you have claimed leaks, the question is whether they're real leaks or just some random person making things up.  If they're real leaks shortly before launch, the specs will be exactly right, at least up to last-minute changing of the clock speeds.  The number of physical compute units and memory bus width and so forth was pretty much set in stone when the chip taped out about a year before launch.  Binning can change, though AMD nearly always makes their top bin a fully functional die.
  • Quizzical Member Legendary Posts: 22,379
    Cleffy said:
    I really expect AMD to eventually come out with a multi-chiplet GPU and just scale the GPU linearly until a certain TDP is met. But the process may be more involved than it was with Ryzen and it may have issues that a GPU faces which a CPU wouldn't.
    Now that I think about it GDDR6 makes sense here for consumer cards.
    That would probably burn way too much power.  GPUs need massively more bandwidth internally than CPUs.  For example, the RTX 3090 has over 300 TB/sec of register bandwidth.  You know how they give TFLOPS numbers?  Each FMA operation takes 16 bytes of accesses to registers (three 4-byte reads and one 4-byte write) and counts as two operations, so you can multiply the TFLOPS number by 8 to get the register bandwidth that it's using.
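
    A minimal sketch of that arithmetic (the ~36 TFLOPS FP32 figure for the RTX 3090 is my assumption and depends on the boost clock):

    # Register bandwidth = FLOPS * 8 bytes/FLOP, since each FMA counts as 2 FLOPs
    # and touches 16 bytes of registers (three 4-byte reads plus one 4-byte write).
    def register_bandwidth_tb_s(tflops):
        return tflops * 8  # 1 TFLOPS * 8 B/FLOP = 8 TB/s

    print(register_bandwidth_tb_s(36.0))  # ~288 TB/s; higher sustained boosts push it past 300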

    Obviously, you're not going to jump to a different chiplet to read registers.  Each compute unit would be entirely within a single chiplet.  But you probably are jumping to a different chiplet to read L2 cache.  I don't know what the L2 cache bandwidth of modern GPUs is, but it's probably on the order of several TB/sec.

    A GPU's L2 cache is very different from a CPU's L2 cache.  On a GPU, the L2 cache is very high latency (hundreds of clock cycles), and really exists only to reduce memory bandwidth requirements.  L2 cache reads are about what you'd expect:  when a GPU tries to read something from global memory (the off-chip DRAM), check first to see if it's in L2 cache, and only actually go off the chip if it's not.  And when you do have to go off the chip, store whatever comes back in L2 cache for a while so that it will be in cache if someone else wants the same data immediately.
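
    In cache-speak, that read path is allocate-on-miss. A tiny model of it (plain Python; a dict stands in for the L2, and eviction is left out entirely to keep it short):

    # Read-through with allocate-on-miss: serve hits from L2, fill L2 on misses.
    l2 = {}

    def read(addr, dram):
        if addr in l2:
            return l2[addr]   # hit: no off-chip traffic
        value = dram[addr]    # miss: go to global memory
        l2[addr] = value      # keep it around for the next reader
        return value

    dram = {0x10: "texel"}
    read(0x10, dram)          # miss, goes off the chip
    read(0x10, dram)          # hit, served from L2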

    L2 cache as a buffer for global memory writes is also important, though this doesn't necessarily have to use very much space.  It allows a compute unit to say, here's data that I want to write to global memory, pass it off to the L2 cache, and then the compute unit can forget about it and move on.  Perhaps more importantly, the L2 cache allows write coalescence, where different threads simultaneously write to different parts of the same 128-byte cache line.  That all gets pieced together by a single 128-byte global memory write by the write coalescers.  Having different threads (preferably within a warp) each simultaneously write their own 8 or 16 byte chunk of a 128-byte cache line is actually the optimal way to write data to global memory on a GPU.  But you can't piece together the various chunks unless you have them all in the same place at once, and I'm pretty sure that that place is in L2 cache.
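
    Here's a toy model of that coalescing idea (plain Python, not real GPU code; the 128-byte line size is the only real number in it):

    # Count how many 128-byte transactions a warp's writes collapse into.
    from collections import defaultdict

    LINE = 128

    def transactions(addresses, nbytes=4):
        lines = defaultdict(int)
        for addr in addresses:
            lines[addr // LINE] += nbytes
        return len(lines)

    warp = [0x1000 + 4 * i for i in range(32)]  # 32 threads, consecutive 4-byte chunks
    print(transactions(warp))                   # 1: the whole warp shares one line
    print(transactions([0x1000 + 256 * i for i in range(32)]))  # 32: strided writes don't coalesce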

    The problem is that any compute unit could need to access any memory controller on a GPU, and hence any bank of L2 cache.  For that matter, it's generally optimal if each compute unit accesses all of the memory channels equally.  If you have four chiplets, each of which has 1 TB/sec of L2 cache bandwidth on the chip, then 3/4 of your L2 cache accesses have to jump to a different chiplet.  That means you've got 3 TB/sec of bandwidth connecting chiplets just for L2 cache alone, or more if some of them take multiple hops.
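
    The arithmetic there, spelled out (the 1 TB/sec per chiplet is the hypothetical from above, not a real spec):

    # With uniform access, (n-1)/n of L2 traffic lands on a different chiplet.
    chiplets = 4
    l2_bw_per_chiplet = 1.0                       # TB/s, assumed for the example
    remote = (chiplets - 1) / chiplets            # 3/4 of accesses go off-chiplet
    print(chiplets * l2_bw_per_chiplet * remote)  # 3.0 TB/s over the interconnect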

    While that can be done, it's going to use quite a bit of power.  It already uses substantial amounts of power to transfer that bandwidth within the same chip.  While it's a lot less than the register bandwidth, registers are physically right next to the shader bank that accesses them.  L2 cache accesses typically have to go about halfway across the chip, and that's a lot more expensive per bit.  Make that data jump between chiplets and you increase the power per bit of data by about a factor of 10 or so.
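
    To put a number on that (the pJ/bit figures are purely illustrative assumptions; only the roughly 10x on-die vs. off-chiplet gap comes from the reasoning above):

    # Energy = traffic (bits/s) * energy per bit. 3 TB/s crossing chiplets adds up fast.
    cross_traffic_bits = 3.0 * 8e12                     # 3 TB/s -> bits per second
    on_die_pj, off_chiplet_pj = 0.2, 2.0                # assumed ~10x gap
    print(cross_traffic_bits * on_die_pj * 1e-12)       # ~4.8 W if it stayed on one die
    print(cross_traffic_bits * off_chiplet_pj * 1e-12)  # ~48 W crossing chiplets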

    But Ryzen, you say?  Even AMD's latest EPYC Rome with its 512-bit memory bus and 128 PCI Express channels is only transferring data between chiplets at a few hundred GB/sec under very heavy loads, and transferring that data is a considerable chunk of the chip's TDP.  Transferring a GPU's L2 cache bandwidth between chiplets is going to need an order of magnitude more bandwidth and hence power.

    Rumors (which might be wrong) say that AMD actually goes in the opposite direction with big Navi, giving it an enormous L2 cache so that it can get away with having the same memory bus and memory standard as a Radeon RX 5700 XT, but with the memory just clocked a little higher.  I'm not sure how well that would work.  For compute, the answer would usually be "badly", but for gaming, it might plausibly work pretty well.  It's certainly much lower power to grab your data from L2 cache than to go off the chip to GDDR6 or even HBM2.  But that only works if you're repeatedly reading the same data, but not spreading it out so much that it can't already fit in today's smaller L2 caches.

    If it seems unlikely that making your L2 cache 8 times as big could cut global memory bandwidth requirements in half, then well, for most GPU compute purposes, that's just not going to work.  For games, it might, though, because of how games use their memory bandwidth.  Indeed, that was the reasoning behind the Xbox One having a 32 MB ESRAM cache.  The problem was that that cache took up way too much space on a 28 nm process node.  On a 7 nm process node, a 16 MB or 32 MB L2 cache might make sense.
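
    A crude way to see why the same cache can help games but not most compute (the step-function hit rates are a deliberate oversimplification of mine):

    # DRAM traffic ~= L2 miss rate * request traffic. If the working set fits, wins are big.
    def dram_traffic_gb_s(requests_gb_s, working_set_mb, l2_mb):
        miss_rate = 0.1 if working_set_mb <= l2_mb else 0.9  # toy numbers
        return requests_gb_s * miss_rate

    print(dram_traffic_gb_s(1000, 24, 4))   # ~24 MB of buffers vs a 4 MB L2: 900 GB/s
    print(dram_traffic_gb_s(1000, 24, 32))  # same buffers vs a 32 MB L2: 100 GB/s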

    There are three major places that GPUs access global memory as part of the graphics pipeline.  The first, reading in the vertex data for a model, could be cached if you're drawing the same model a bunch of times in the same frame, but is otherwise going to have to go off the chip.  If it's not already cached in L2, a bigger L2 cache is rarely going to help here.

    The second, accessing textures, is already cached quite a bit, but could be cached better with more space.  If a given texture is 1 MB in size and a compute unit's texture cache has 64 KB of space, the cache works if you're always accessing the same small portion of the texture (which happens quite a bit), but fails if you go to a different portion.  If you can stick the whole texture in L2 cache, you won't have to go back off the chip until you need a different texture.

    The third is the rendering buffer and depth buffer that you're drawing.  Those are each about 8 MB at a 1920x1080 resolution, but a large chunk of a GPU's global memory bandwidth because every single pixel that gets drawn has to write its output.  You commonly access the same pixel several times in a frame because one thing is on top of another.  Even with a painter's algorithm approach, you're still accessing the depth buffer for a given pixel several times to figure out what is on top.  If you could stick the whole buffer in L2 cache rather than constantly going off the chip, that could save a ton of bandwidth.
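
    The buffer sizes above are just resolution times bytes per pixel:

    # 32-bit color and 32-bit depth at 1080p, in MB.
    width, height, bytes_per_pixel = 1920, 1080, 4
    print(width * height * bytes_per_pixel / 2**20)  # ~7.9 MB per buffer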

    Some GPUs already try to do that partially via tiled rendering.  The idea is that, while you can't fit an 8 MB depth buffer in a 2 MB L2 cache, you can fit part of it.  So break the screen into tiles and first figure out which models draw something on which tiles.  Then draw each tile one at a time, with its entire chunk of the buffers fitting neatly into L2 cache.

    The problem is that some models cross tiles.  Some of the work of rendering those models thus has to be done repeatedly, once for each tile that the model is on.  If you can split the screen into fewer tiles, then you have less data to track about which model hits which tiles, and less work to replicate across tiles.  And how do you do that?  By having more L2 cache.
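
    A minimal sketch of the binning step (assumed axis-aligned bounding boxes and a made-up tile size; real drivers are far more involved):

    # A model touching k tiles gets binned, and partially reprocessed, k times.
    TILE = 256  # tile edge in pixels; a bigger L2 allows bigger tiles, so fewer duplicates

    def tiles_touched(xmin, ymin, xmax, ymax):
        tx0, ty0 = xmin // TILE, ymin // TILE
        tx1, ty1 = xmax // TILE, ymax // TILE
        return (tx1 - tx0 + 1) * (ty1 - ty0 + 1)

    print(tiles_touched(100, 100, 200, 200))  # 1 tile: no duplicated work
    print(tiles_touched(100, 100, 700, 300))  # 6 tiles: binned (and re-walked) 6 times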
  • Torval Member Legendary Posts: 20,297
    Quizzical said:
    Torval said:
    Vrika said:
    Quizzical said:
    Torval said:
    Very cool. TechPowerUp has a similar and interesting take on those too. I'm still going to wait for benchmarks and comparisons and possibly the respins that come next year. It really depends on whether I can wait to build my next system or have to do it earlier.


    Product Name | GPU Chip | Released | Bus | Memory | GPU clock | Memory clock | Shaders / TMUs / ROPs
    Radeon RX 6500 XT | Navi 23 | Oct 28th, 2020 | PCIe 4.0 x8 | 10 GB, GDDR6, 160 bit | 1489 MHz | 2000 MHz | 2048 / 128 / 64
    Radeon RX 6700 XT | Navi 22 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 2560 / 192 / 64
    Radeon RX 6800 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 3840 / 240 / 64
    Radeon RX 6900 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 16 GB, GDDR6, 256 bit | 1350 MHz | 2000 MHz | 5120 / 320 / 96


    Those specs strike me as suspicious to the extent that they're almost certainly just a random person making things up....
    Those TechPowerUp specs are:

    "Based off of speculation/reddit and other sources online. Placeholder"

    I think we should ignore them. TechPowerUp is openly admitting that they're just copying some internet rumor while waiting for real information.

    Newegg Insider is a more trustworthy source because they could have insider information, but I think we still have to just wait and see.

    You really like to handwave away things that make you uncomfortable. Of course it's speculation. AMD hasn't released the info yet; that should be obvious common sense. Speculation is fun because of the kind of analytical posts Quizzical made explaining why he thinks some of those points are way off base. On the other hand, some of those specs match the Newegg Insider report exactly, and yet you want to dismiss it. I learned a lot from Quizzical's explanation.

    Any information under NDA can't be shared without potential legal complications, and it should all be taken with a grain of salt. Jay and MLD both are careful to explain this when speculating.

    Regardless of the details, AMD is coming out of the gate quite strong this season, and that sort of competition is good for consumers. People doubted their position against Intel too, and now their CPU business is crushing it.

    I seriously doubt both Sony and Microsoft would have chosen AMD for their internals this console gen if AMD were weak.
    Whenever you have claimed leaks, the question is whether they're real leaks or just some random person making things up.  If they're real leaks shortly before launch, the specs will be exactly right, at least up to last-minute changing of the clock speeds.  The number of physical compute units and memory bus width and so forth was pretty much set in stone when the chip taped out about a year before launch.  Binning can change, though AMD nearly always makes their top bin a fully functional die.
    I'm not sure why you're telling me that or what you're trying to say because I already know and understand that.

    I don't think Tech PowerUp is claiming anything with their wiki compilation of amalgamated internet info, if that's what you're saying. I just threw some stuff out there for us to chat about, because let's be honest, we're not really revealing anything astounding or proving anything. To me this forum is like a fireside chat about what's happening in the industry and what other posters here think about it.

    The OP posted some leak. I posted the junk TPU had collected from various sources so we could chat about what they got right and where we think they're way off base. Like PCIe x8 being the pipe for the 6500 XT. I thought that was weird too, because only a few cards released last year have an x8 interface, and they were low-end budget deals.

    I didn't realize some people would be so sensitive and get so worked up over it.
    Mars_OMG
    Fedora - A modern, free, and open source Operating System. https://getfedora.org/

    traveller, interloper, anomaly, iteration


  • Mars_OMG Member Epic Posts: 2,847
    Quizzical said:
    Torval said:
    Vrika said:
    Quizzical said:
    Torval said:
    Very cool. TechPowerUp has a similar and interesting take on those too. I'm still going to wait for benchmarks and comparisons and possibly the respins that come next year. It really depends on whether I can wait to build my next system or have to do it earlier.


    Product Name | GPU Chip | Released | Bus | Memory | GPU clock | Memory clock | Shaders / TMUs / ROPs
    Radeon RX 6500 XT | Navi 23 | Oct 28th, 2020 | PCIe 4.0 x8 | 10 GB, GDDR6, 160 bit | 1489 MHz | 2000 MHz | 2048 / 128 / 64
    Radeon RX 6700 XT | Navi 22 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 2560 / 192 / 64
    Radeon RX 6800 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 12 GB, GDDR6, 192 bit | 1489 MHz | 2000 MHz | 3840 / 240 / 64
    Radeon RX 6900 XT | Navi 21 | Oct 28th, 2020 | PCIe 4.0 x16 | 16 GB, GDDR6, 256 bit | 1350 MHz | 2000 MHz | 5120 / 320 / 96


    Those specs strike me as suspicious to the extent that they're almost certainly just a random person making things up....
    Those TechPowerUp specs are:

    "Based off of speculation/reddit and other sources online. Placeholder"

    I think we should ignore them. TechPowerUp is openly admitting that they're just copying some internet rumor while waiting for real information.

    Newegg Insider is a more trustworthy source because they could have insider information, but I think we still have to just wait and see.

    You really like to handwave away things that make you uncomfortable. Of course it's speculation. AMD hasn't released the info yet; that should be obvious common sense. Speculation is fun because of the kind of analytical posts Quizzical made explaining why he thinks some of those points are way off base. On the other hand, some of those specs match the Newegg Insider report exactly, and yet you want to dismiss it. I learned a lot from Quizzical's explanation.

    Any information under NDA can't be shared without potential legal complications, and it should all be taken with a grain of salt. Jay and MLD both are careful to explain this when speculating.

    Regardless of the details, AMD is coming out of the gate quite strong this season, and that sort of competition is good for consumers. People doubted their position against Intel too, and now their CPU business is crushing it.

    I seriously doubt both Sony and Microsoft would have chosen AMD for their internals this console gen if AMD were weak.
    Whenever you have claimed leaks, the question is whether they're real leaks or just some random person making things up.  If they're real leaks shortly before launch, the specs will be exactly right, at least up to last-minute changing of the clock speeds.  The number of physical compute units and memory bus width and so forth was pretty much set in stone when the chip taped out about a year before launch.  Binning can change, though AMD nearly always makes their top bin a fully functional die.
    I've been enjoying these conversations, so please keep an open mind about the possibilities instead of trying to be "right" :)
    - abandoning social media could possibly save the world.  
