
Micron expects to introduce GDDR6 around the end of this year. Will it matter?

Quizzical Member Legendary Posts: 25,347
http://www.anandtech.com/show/11100/micron-2017-analyst-conference-roadmap-updates-forecasts-and-ceo-retiring

There are a lot of things in that article, most of which I'm not interested in here.  But I am interested that Micron is working on GDDR6, which presumably means that they don't expect HBM2 to take over as the memory standard of choice for GPUs.

As a little background, there are three major memory manufacturers in the world.  In the competition between GDDR5X and HBM2 memory standards, Samsung and Hynix bet on HBM2, while Micron bet on GDDR5X.  HBM2 offers higher bandwidth for less power than GDDR5X, but it also comes with a higher cost.

So far, the GDDR5X bet is working out pretty well for Micron.  It's available in the GeForce GTX 1080 and has been for months, while there aren't yet any commercially available HBM2 parts.  They're coming, both in the GP100 chip that Nvidia has announced for the Tesla P100 and in AMD Vega.  But they're not here yet.

AMD has said that HBM2 will start out at the top and work its way down to cheaper markets.  That makes sense, as if you ignore cost, HBM2 is clearly the superior technology.  Professional markets (Quadro, Tesla, FirePro) aren't very sensitive to production cost, and even a $600 consumer GPU can readily absorb an extra $10 or $20 in cost of production if it gives you a clearly superior product.

The real question is, how low in the product stack will HBM2 go and how quickly?  Will it be limited to the top end GPU only, or will it work its way down to the midrange?  That Micron is working on GDDR6 surely means that they expect either Nvidia or AMD to buy it, and possibly both.

AMD has said that Vega will feature HBM2.  Does that mean only the higher end Vega parts, or the entire Vega lineup?  I interpreted it as being the latter.  But will Vega itself only be for the high end, or will it fill the whole range of products?  If the cheapest Vega-based Radeon card is a $400 card that is much faster than a Radeon RX 480, then that leaves plenty of room for GDDR5X or GDDR6 in the future $100-$300 market.  If Vega is going to replace all of Polaris with HBM2 cards even at the $100 price point, then it's hard to see AMD ever adopting GDDR5X or GDDR6.

And what about APUs?  Will Raven Ridge have a Polaris-based GPU, Vega-based, or is there not much of a difference other than the memory standard?  Will future APUs have HBM2?  Even APUs without HBM2 are unlikely to use GDDR5X or GDDR6, as the power consumption is a huge problem in laptops.

And what about Nvidia?  Will Nvidia bring GDDR5X to lower cards in their product stack?  Will there be a new generation of Pascal cards that adopt GDDR5X or perhaps GDDR6?  What will Volta use?  Surely the top end Volta will need HBM2 to be competitive, and it's hard to see Nvidia abandoning HBM2 after they used it for GP100 in Pascal.  But just because the top end GPU in a generation needs a particular type of memory doesn't automatically mean that they all do.

Once HBM2 has been out for a while and is more mature, the price difference between it and GDDR5X or GDDR6 will probably diminish.  Building the big silicon interposer on a process node designed for complex logic, even if it's a very old process node, is much more expensive than building a silicon interposer on a process node designed purely for cheap silicon interposers.  HBM2 stacks also need an extra logic chip at the bottom of each stack in addition to the memory chips, but I don't know how expensive that is.

Indeed, if at some point, HBM2 is mature and GDDR6 is not, then it's not automatic that GDDR6 will be cheaper at first.  It's highly probable that HBM2 will be better if you ignore cost.  And then what happens to GDDR6?

Comments

  • xyzercrime Member Rare Posts: 878
    I doubt the general consumer level can afford HBM2. As for GDDR6, I think there will always be segmentation that demands cheaper products for whatever reason, even if HBM2 eventually wins on price-per-value ratio (again, maybe this will happen at some point, just like you said, Quiz).



    When you don't want the truth, you will make up your own truth.
  • Quizzical Member Legendary Posts: 25,347
    Torval said:
    By the way I was expecting to see an article on the unlocked core i3 from you before the Micron update which was sort of soft on details.
    It's nice that Intel has finally decided that dual core processors are allowed to clock high, rather than artificially crippling them as they have ever since Sandy Bridge.  Maybe Intel felt burned by people going with a higher clocked Core 2 Duo over a lower clocked Core 2 Quad in the era before turbo.

    But I don't think a $168 desktop dual core is all that interesting.  Depending on how it's clocked and priced, Ryzen might manage to make it look downright silly for consumer use.
  • Ridelynn Member Epic Posts: 7,383
    Well, Micron does have a good point. New memory standards aren't always adopted overnight, and a few that looked very promising outright failed to get adopted (Rambus comes to mind, and XPoint may well follow suit, reading into the tone of this article).

    HBM2 is not out yet in quantity, and it still hasn't proven itself. There's still a chance it could have technical issues and won't be able to catch on (manufacturing issues keep prices high and availability low, technical issues could mean it isn't nearly as fast as expected, etc.). Right now all we really have are some tech samples from AMD/nVidia and some white papers.

    DDR is the safe bet; it's a known "brand name" (it's not really a brand name, but people have seen that acronym for years). And we don't know anything about GDDR6, other than a lot of comparisons drawn between HBM2 and GDDR5X. So there are a lot of ~ifs~ in there, and in the absence of any data about GDDR6, I don't see how anyone can say it's good or bad, or even that it competes against HBM2 in any way.

    I also agree with Torval, the emphasis right now is very much on NAND rather than DRAM. That doesn't really do anything for graphics cards (yet).
  • Quizzical Member Legendary Posts: 25,347
    AMD used HBM in Fiji, so it's not like HBM is completely new, either.  Nvidia has now announced the Quadro GP100 with HBM2 to be available in March of this year.  Between that and Vega, it sounds like HBM2 is very much on the way, at least in 4-high stacks.  There aren't yet any announced or even credibly rumored products using GDDR6.  (There were some rumored products with GDDR6 to launch years ago that looked like random people making stuff up, but those aren't credible rumors.)

    The basic problem that the GDDR* lines have to cope with that HBM solves is that you can really only have so many pins or balls or whatever coming out of a GPU if they all have to have traces on the PCB.  The cost to make a 1024-bit GDDR5 memory bus would at minimum be exorbitant and might not be possible even if you wanted to.  Having the memory in the same package as the GPU chip and connected via a silicon interposer means you can have as wide of a bus as you want and it's no big deal.

    A wider memory bus means you can get the same amount of memory bandwidth with a much, much lower clock speed.  And that's almost certainly going to reduce your power consumption and probably by a lot.  So I would be extremely shocked if GDDR6 is competitive with HBM2 in performance per watt unless it's delayed by so long that it has to compete with HBM3, not HBM2.  It will likely be cheaper, though, and it's only a question of by how much.
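
    As a rough back-of-envelope illustration of that tradeoff (peak bandwidth is just bus width times per-pin data rate), here's a small sketch; the per-pin rates below are approximate, illustrative figures rather than datasheet quotes:

    ```python
    # Peak-bandwidth arithmetic: GB/s = bus width (bits) * per-pin rate (Gbps) / 8.
    # Configurations are illustrative; exact clocks vary by product and bin.

    def peak_bandwidth_gbs(bus_width_bits: int, per_pin_gbps: float) -> float:
        """Peak memory bandwidth in GB/s for a given bus width and per-pin rate."""
        return bus_width_bits * per_pin_gbps / 8

    configs = {
        "GDDR5, 256-bit @ 8 Gbps (GTX 1080-class board with plain GDDR5)": (256, 8.0),
        "GDDR5X, 256-bit @ 10 Gbps (GTX 1080 as shipped)": (256, 10.0),
        "HBM2, 4 stacks x 1024-bit @ ~1.4 Gbps (Tesla P100-class)": (4096, 1.4),
    }

    for name, (width, rate) in configs.items():
        print(f"{name}: {peak_bandwidth_gbs(width, rate):.0f} GB/s")

    # Approximate output: 256 GB/s, 320 GB/s, and ~717 GB/s respectively.
    # The HBM2 configuration wins despite per-pin rates an order of magnitude
    # lower, which is exactly where the power-per-bit advantage comes from.
    ```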

    I also find it interesting that even today, neither Samsung nor Hynix have anything on their web site about GDDR5X.  Both will tell you plenty about GDDR5 or even HBM2, but nothing about GDDR5X.  We're already more than 2/3 of a year past the GTX 1080 launching with GDDR5X, and they're still not even willing to talk about sampling it.  This makes me wonder if they're betting on HBM2 as the future and GDDR* going away, while Micron is betting that there will be plenty of GDDR5X and GDDR6 with HBM2 limited to the high end.

    You'd better believe that AMD and Nvidia are in contact with Samsung, Hynix, and Micron about their plans to use future memory standards.  Indeed, AMD had a huge role in the development of HBM and GDDR5.  The memory manufacturers surely know a lot about AMD's and Nvidia's future plans that hasn't been publicly announced, and are far more eager to supply memory standards that AMD and Nvidia will buy in huge volumes than stuff that no one wants.
  • Ridelynn Member Epic Posts: 7,383
    Quizzical said:
    AMD used HBM in Fiji, so it's not like HBM is completely new, either.  
    That's the same thing as saying that ~everyone~ uses GDDR in something, so GDDR6 isn't completely new. You can't say that, because HBM exists, HBM2 will by extension exist and work well.

    I don't know anything about GDDR6: I don't know if it's completely new, or derivative and to what degree, or if it's just a new marketing term for something already out.

    Sure, there's a lot of speculation, and there are a lot of product announcements hyping HBM2 right now. I hope all that turns out to be true, but none of those are shipping yet. Maybe in March, which is when the GP100 release date was just announced for; it looks to be the first product shipping with HBM2, and we can get a fuller picture of its capabilities in a real product (for those willing or able to spend thousands on a Quadro GPU).

    Pascal's top-tier cards (GP102 & GP104) were originally rumored to be HBM2 cards, and everyone was shocked when they were announced as GDDR5X. GP100 (big Pascal) was supposed to be either end-of-year 2016 or January 2017; it has just been announced to ship in March, so it has slid back some. Remember that Fiji was rumored to get an HBM2-variant flagship to launch before Polaris & Vega. That didn't happen either. These projects could be getting changed/pushed/canceled for any number of reasons, but it would be easy to speculate that the common denominator of HBM2 is the culprit.
  • Hrimnir Member Rare Posts: 2,415
    Quizzical said:
    There are a lot of things in that article, most of which I'm not interested in here.  But I am interested that Micron is working on GDDR6, which presumably means that they don't expect HBM2 to take over as the memory standard of choice for GPUs.  [...]


    I think Micron is wise for "betting" on GDDR5X / 6 for the next couple of years.  As you stated, it's going to take a little while for the manufacturing processes to mature enough to make the price difference between HBM2 and GDDR negligible.  Until that point, and honestly I suspect even after, you will likely still see GDDR in mid and low end cards and it will probably be 5+ years before you start seeing HBM with regularity on sub $200 parts.  I could and probably will be wrong, but who knows.

    The reality is memory bandwidth is not nearly as important for gaming situations as it is for compute/research type situations.  Perhaps that will change once developers know they have access to more memory bandwidth and can code the graphics engines accordingly, but, that's a slow process, especially with consoles being a perpetual weight tied to the ankle of innovation.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Quizzical Member Legendary Posts: 25,347
    Hrimnir said:


    I think Micron is wise for "betting" on GDDR5X / 6 for the next couple of years.  As you stated, it's going to take a little while for the manufacturing processes to mature enough to make the price difference between HBM2 and GDDR negligible.  Until that point, and honestly I suspect even after, you will likely still see GDDR in mid and low end cards and it will probably be 5+ years before you start seeing HBM with regularity on sub $200 parts.  I could and probably will be wrong, but who knows.

    The reality is memory bandwidth is not nearly as important for gaming situations as it is for compute/research type situations.  Perhaps that will change once developers know they have access to more memory bandwidth and can code the graphics engines accordingly, but, that's a slow process, especially with consoles being a perpetual weight tied to the ankle of innovation.

    My argument is not so much that AMD and Nvidia will use this or that.  Rather, it's that AMD and Nvidia have surely told Samsung, Hynix, and Micron what they will use over the course of the next few years, and the memory vendors made their choices on what to build based on that.

    If both AMD and Nvidia told the memory vendors that they were moving to HBM2 quickly and not interested in GDDR5X or GDDR6 once HBM2 was available, GDDR6 would have been scrapped and it's likely that no one would have bothered with GDDR5X.  If both AMD and Nvidia told the memory vendors that HBM2 would be relegated to $4000+ compute cards for the next few years and they wanted GDDR5X and/or GDDR6 for their consumer graphics cards that provide most of the volume, all three vendors would surely be working on the new GDDR* standards.

    It's possible that Samsung and/or Hynix are working on GDDR6 today but haven't talked about it publicly.  That they haven't bothered with GDDR5X makes me believe that it's not long for this world.  It's entirely possible that all of the GDDR5X GPUs (by die not bin) that will ever exist are already on the market and future cards will use GDDR6, HBM2, or other, later technologies.  If that's so, then it makes sense for Samsung and Hynix to ignore it.

    It's also very possible that AMD and Nvidia will take different routes here.  For example, AMD might move everything to HBM2 while Nvidia adopts GDDR6 outside of the high end, or vice versa.

    You say that bandwidth isn't as important for gaming as for compute/research.  But once you move away from graphics, the needs vary wildly by what you're doing.  There are indeed some algorithms where whoever has the most memory bandwidth wins and the GPU chip otherwise basically doesn't matter.  There are others where a top end GPU paired with a single channel of DDR3 would have more bandwidth than it needs, and even some where memory bandwidth needs of a high end GPU are more naturally measured in KB/s than GB/s.  Sloppy coding tends to inflate memory bandwidth requirements, though for that, I'm inclined to say learn to program GPUs well rather than paying several times what you need for the hardware.
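
    To make "varies wildly" a bit more concrete, here's a toy roofline-style check of whether a kernel is limited by compute or by memory bandwidth; the hardware numbers are round placeholders, not the specs of any particular GPU:

    ```python
    # Toy roofline-style estimate: compare a kernel's arithmetic intensity
    # (FLOPs per byte moved) to the machine balance point.
    # Hardware numbers are round placeholders, not any particular GPU's specs.

    PEAK_FLOPS = 8e12   # 8 TFLOPS single-precision (placeholder)
    PEAK_BW = 320e9     # 320 GB/s memory bandwidth (placeholder)

    def limiting_factor(flops_per_byte: float) -> str:
        machine_balance = PEAK_FLOPS / PEAK_BW  # FLOPs the chip can do per byte moved
        return "bandwidth-bound" if flops_per_byte < machine_balance else "compute-bound"

    # A streaming kernel like y = a*x + y does about 2 FLOPs per 12 bytes moved.
    print("SAXPY-style stream:", limiting_factor(2 / 12))    # bandwidth-bound
    # A well-blocked matrix multiply can reach hundreds of FLOPs per byte.
    print("Blocked matrix multiply:", limiting_factor(200))  # compute-bound
    ```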

    Sometimes the bandwidth needs actually vary wildly by hardware.  AMD and Nvidia put tons of caches on their GPUs to reduce bandwidth requirements; if you add up all the caches on Fiji, it comes to about 24 MB per die.  But an algorithm that fits one cache hierarchy well and another not at all may be able to keep stuff on die on one GPU while having to go to global memory on another.
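
    A minimal sketch of why that matters, using made-up round numbers: the global-memory traffic for an N x N matrix multiply depends enormously on whether a tile of the working set fits in on-die cache.

    ```python
    # Rough DRAM traffic for an N x N single-precision matrix multiply,
    # with and without cache tiling. Sizes are made-up round numbers.

    N = 4096       # matrix dimension
    TILE = 64      # tile edge assumed to fit in on-die cache
    BYTES = 4      # bytes per float

    # Naive: every output element re-reads a full row of A and column of B.
    naive_traffic = 2 * N**3 * BYTES
    # Tiled: each input element is only re-read once per tile pass, i.e. N/TILE times.
    tiled_traffic = 2 * N**3 * BYTES / TILE

    print(f"naive: ~{naive_traffic / 1e9:.0f} GB, tiled: ~{tiled_traffic / 1e9:.0f} GB")
    # -> roughly 550 GB versus 9 GB of DRAM traffic for the same arithmetic.
    # Whether a GPU's cache hierarchy lets you pick a big enough TILE is
    # exactly the hardware-dependent part.
    ```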

    And bandwidth matters for graphics, too.  Otherwise GeForce and Radeon cards would just use DDR3 and call it a day, or maybe move to DDR4 by now.  Nvidia wouldn't have shelled out for GDDR5X rather than GDDR5 in the GTX 1080 if they thought 256 GB/s was enough.  And that's even after one of the big gains on Maxwell (inherited by Pascal) was a rumored restructuring of things to get a lot more use out of L2 cache and considerably reduce memory bandwidth requirements for graphics.
  • Quizzical Member Legendary Posts: 25,347
    Hrimnir said:

    Until that point, and honestly I suspect even after, you will likely still see GDDR in mid and low end cards and it will probably be 5+ years before you start seeing HBM with regularity on sub $200 parts.

    Actually, I want to focus a little more on that line.  What makes you think that there will still be new sub-$200 GPUs 5+ years from now?  It's possible, but I'd bet against it.  Of course there will still be sub-$200 GPUs available then, but a lot of the options will likely be things that you could buy today.  There might be additional options that launch over the course of the next few years, but that could be it.

    It used to be that both AMD and Nvidia made new sub-$50 GPUs every generation.  Today, if you want a sub-$50 discrete GPU, your choices are so old that they're off driver support.  If you want a new sub-$100 discrete GPU, then outside of finding a normally $100+ GPU on sale for slightly under $100, your newest options are chips that launched in 2012 (Nvidia) or early 2013 (AMD).

    What happened?  Two things.  First, integrated graphics got better.  When the integrated graphics that comes with your CPU has 6 compute units, there's no need to buy a discrete GPU of the same architecture with only 2.

    Second, the die sizes necessary to get decent performance have shrunk considerably, which reduces the point of new GPUs in the same performance range.  For example, the G94 die in the GeForce 9600 GT was 240 mm^2.  It was the third smallest die in the GeForce 9000 generation of 2008.  And Nvidia had to sell the completed GPUs for $80, though that was in part because AMD had a better architecture in the Radeon HD 4000 series and started a price war.

    Today, that same die size is larger than Polaris 10 (Radeon RX 480) or GP106 (GeForce GTX 1060).  If a few years from now, AMD and Nvidia are selling those cards for $100 each, they can still make money on that--and enough that there might not be a point in making a new card on a 7 nm process node with about the same performance.  If Nvidia had to sell the GeForce 9600 GT for $30, they'd lose money on every card they sold and would sooner just abandon that market segment.
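
    For a rough sense of why die size drives the price floor, here's a back-of-envelope dies-per-wafer and yield calculation; the wafer cost and defect density below are purely hypothetical placeholders:

    ```python
    import math

    # Back-of-envelope cost per good die on a 300 mm wafer.
    # Wafer cost and defect density are hypothetical placeholders.

    WAFER_DIAMETER_MM = 300
    WAFER_COST = 5000.0   # USD per processed wafer (assumption)
    DEFECT_DENSITY = 0.1  # defects per cm^2 (assumption)

    def gross_dies(die_area_mm2: float) -> int:
        """Classic dies-per-wafer approximation accounting for edge loss."""
        d = WAFER_DIAMETER_MM
        return int(math.pi * (d / 2) ** 2 / die_area_mm2
                   - math.pi * d / math.sqrt(2 * die_area_mm2))

    def cost_per_good_die(die_area_mm2: float) -> float:
        """Cost per good die using a simple Poisson yield model."""
        yield_frac = math.exp(-DEFECT_DENSITY * die_area_mm2 / 100)  # area in cm^2
        return WAFER_COST / (gross_dies(die_area_mm2) * yield_frac)

    for name, area in [("G94-class die, 240 mm^2", 240),
                       ("Polaris 10-class die, ~232 mm^2", 232),
                       ("hypothetical 7 nm shrink, ~60 mm^2", 60)]:
        print(f"{name}: ~${cost_per_good_die(area):.0f} per good die")
    # Smaller dies mean more good chips per wafer, so the same wafer spend
    # can support much lower card prices.
    ```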

    Reduced power consumption has long been a reason to care about die shrinks.  But the difference between 100 W and 50 W doesn't much matter in a desktop.  In a laptop it does, but I'd have to believe that by 5+ years from now, someone will have put HBM2 (or HBM3 or whatever) on package and there will no longer be a point to 50 W discrete laptop GPUs.
  • Ridelynn Member Epic Posts: 7,383
    Yeah, the <$100 GPU market is already gone. It won't be long before the <$130 market is gone too (currently served by the RX 460 and GTX 1050); I wouldn't be surprised if they are swallowed by IGPs within the next two generations of CPUs/GPUs.

    Discrete GPUs are already all but gone in all but the highest end laptops, and that's the direction I think even desktops will take - the high end will retain them (those people who are currently buying SLI, the 1070+'s, and Fiji/Vega cards), but beyond that, IGP (or something similar) will be more than sufficient for driving most gaming loads for most people.
  • Malabooga Member Uncommon Posts: 2,977
    edited February 2017
    GDDR5X has barely taken off (and it's now clear that it wasn't ready back in May), and "end of this year, or next year" pretty much means that ramped-up GDDR6 will arrive sometime around mid-2018.

    HBM has plenty of time to ramp up, and we pretty much know there is low-cost HBM2 memory. Even Intel is using HBM2 in their deep learning thingy.

    The problem with GDDR6 is that it's too slow for the high end and too fast (and too costly) for the low end.

    Everyone is talking about the price, but you all forget that GDDR5X isn't cheap. I guess you assume it doesn't cost more than GDDR5, but GDDR5 has been around forever, and Micron is the only one making GDDR5X (and seems to be the only one actually interested in GDDR at all).

    And if GDDR loses its only advantage - price - then there's zero point in using it, as HBM is superior in every way lol