
AMD Vega GPUs will launch "over the next couple of months".


Comments

  • Cleffy, Member Rare, Posts: 6,412
    edited July 2017
    Honestly, for a desktop GPU I only care if it's more than 375W. There is just a point where it would be difficult to manufacture a GPU chip that needs more than that. Also, I have solar panels on my house, so I'm paying nothing for electricity aside from the cost already sunk into the panels.
  • AmazingAvery, Age of Conan Advocate, Member Uncommon, Posts: 7,188
    edited July 2017
    Ridelynn said:
    Umm, the GTX 1080 has a TDP of 180W at stock clocks.

    And from the supposed leak you have copied and pasted here, Vega XT is 220W, which isn't all that different.

    Sure, 300W > 120W, but nothing we are talking about here hits those numbers you just pulled out of thin air.

    ASIC power is different from TDP: ASIC power is the draw of the GPU chip itself, excluding memory and the rest of the board. Polaris 10 has an ASIC power of 110 watts, but the memory system and other components on the PCB push the card up to its 150-watt board power.

    The AMD Radeon RX Vega XT is also said to offer 64 shader clusters (4,096 stream processors) and 8GB of HBM2 memory albeit at just 220W of ASIC power (285W of board power). As such, it’ll be cooled by a traditional air-based solution.

    RX VEGA XTX ASIC = 300W
    RX VEGA XT ASIC = 220W
    RX VEGA XL ASIC = 220W
    1080 ASIC = 120-130W 
    Polaris 10 ASIC = 110W

    Even the Vega XTX will have a TDP of 300W compared to a 1080 at 180W, and it's supposed to be only slightly better. That is not good at all.

    Can you see where this is going with perf/W? It's quite clear:
    Vega is not going to be as efficient as Pascal. It is a year late, it is supposed to be better than a 1080 at the high end, and if it is, it will draw over 2x the ASIC power to do so.

    Even if the Vega line consumes more power than Nvidia's cards, it probably won't matter to most gamers, so long as performance is at least on par if not better. On the flip side, where AMD potentially opens itself up to criticism is if Vega doesn't match the performance of Pascal, and draws noticeably more power to boot.
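    For anyone who wants to plug in their own numbers, here is a rough perf-per-watt sketch in Python. The wattages are the TDP figures quoted above; the performance values are pure assumptions (Vega XTX roughly 5% ahead of a GTX 1080, as the rumors suggest), so treat it as a template to fill in with real benchmark data once reviews exist.

```python
# Back-of-the-envelope perf/W comparison.
# Wattages are the TDP figures quoted in this thread; the "perf" values
# are assumptions, not benchmark results.
cards = {
    "GTX 1080":    {"perf": 1.00, "watts": 180},  # stock TDP
    "RX Vega XTX": {"perf": 1.05, "watts": 300},  # rumored TDP, assumed +5% perf
}

baseline = cards["GTX 1080"]["perf"] / cards["GTX 1080"]["watts"]
for name, c in cards.items():
    eff = c["perf"] / c["watts"]
    print(f"{name:12s} perf/W = {eff:.4f} ({eff / baseline:.0%} of the GTX 1080)")
```

    Under those assumptions the Vega XTX lands at roughly 60-65% of Pascal's efficiency, which is exactly the gap being argued about here.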



  • Phry, Member Legendary, Posts: 11,004
    AmazingAvery said:
    Ridelynn said:
    Umm, the GTX 1080 has a TDP of 180W at stock clocks.

    And from the supposed leak you have copied and pasted here, Vega XT is 220W, which isn't all that different.

    Sure, 300W > 120W, but nothing we are talking about here hits those numbers you just pulled out of thin air.

    ASIC being different to the TDP; ASIC power is the power draw of the GPU itself (the actual chip) excluding memory and such. Polaris 10 has an ASIC power of 110 watts but the memory system and other tidbits on the PCB is what pushes the card up to its 150 watts. 

    The AMD Radeon RX Vega XT is also said to offer 64 shader clusters (4,096 stream processors) and 8GB of HBM2 memory albeit at just 220W of ASIC power (285W of board power). As such, it’ll be cooled by a traditional air-based solution.

    RX VEGA XTX ASIC = 300W
    RX VEGA XT ASIC = 220W
    RX VEGA XL ASIC = 220W
    1080 ASIC = 120-130W 
    Polaris 10 ASIC = 110W

    Even the VEGA XTX will have a TDP of 300W compared to a 1080 at 180W and suppose to be only slightly better. That is not good at all.

    Can you see where this is going with the perf/W because it's quite clear.
    VEGA is not going to be as efficient as Pascal. It is a year late. It is suppose to be better than a 1080 at the high end and if it is it will be over 2x more power to do so.

    Even if the Vega line consumes more power than Nvidia's cards, it probably won't matter to most gamers, so long as performance is at least on par if not better. On the flip side, where AMD potentially opens itself up to criticism is if Vega doesn't match the performance of Pascal, and draws noticeably more power to boot.
    And this video demonstrates what you get if you put 400W through the thing; as soon as they can, they are going to try putting 500W through and see what happens.
    https://www.youtube.com/watch?v=WbS7c2Een8o
    It does seem that Vega's limitations are mostly power-based, though the lengths needed to get that power increase are not insignificant. Gamers Nexus does some very informative work imo. :)
  • Ridelynn, Member Epic, Posts: 7,383
    @AmazingAvery, thanks for your explanation of ASIC power draw. Given that HBM sits on an interposer with the GPU, it's not clear to me whether that figure covers just the GPU die or the whole package including HBM. I'm not trying to be argumentative; I have a lot of reservations about Vega and I'm hardly excited, I just see it as an interesting discussion. The Frontier Edition does have a 300W TDP, after all, but everyone keeps saying not to compare the compute card with the final desktop gaming card, so... I don't know.

    Apart from the academic discussion of power draw, I have to agree with @Cleffy - it's a desktop card, and I don't really care if Card X is a bit more efficient than Card Y; it all comes down to price/performance in a desktop. Sure, if a card is doing something ridiculous (like >350W) I'll take notice (mostly because I don't want a space heater sitting under my desk in the summer), but apart from that it's just a matter of cooling. I also have solar, but I still care very much about my total power use; a GPU, for the amount I actually run it gaming, is an insignificant load over a year, even at 300-500W. Now, if I were mining or running GPU distributed-computing projects, that would be a different story.

  • AmazingAvery, Age of Conan Advocate, Member Uncommon, Posts: 7,188
    @Ridelynn I agree, mate. Like you, I wonder whether HBM is part of those numbers or not. Back in the day I opted for a GTX 480 and that thing was a heat monster; it also spawned a ton of "efficiency" conversations from folks here. Over the years, as things have gotten more efficient, I've always liked an upgrade path that could potentially include two cards, but with a switch to higher wattage I think a few people may be put off by what their PSU is capable of.
    I hope the top-end Vega can compete, or at least be priced competitively. I can't hide that I'm disappointed about the assumed efficiency drawback, given all the time they have had to compete at the top end.

    As mentioned earlier, it seems Vega is facing a similar dilemma to a few previous AMD GPUs: getting the clock speeds up to the desired levels requires far too much power. So I really do wonder (and this was the point I was getting at with the power figures) whether there is much headroom left.



  • Quizzical, Member Legendary, Posts: 25,355
    How would you even measure the power consumption of separate components of a video card?  I expect that AMD knows, but how would anyone else try to measure it?  Maybe board partners would know about the power consumption of VRMs and the like, but it would be awfully hard for someone who merely has a GPU and wants to know.
  • AmazingAvery, Age of Conan Advocate, Member Uncommon, Posts: 7,188
    edited July 2017
    Multiple websites, Tom's Hardware among them, are saying Vega consumer cards are limited to 8GB of RAM, no exceptions. This implies a few things about AMD's current positioning and the state of HBM2. While the Vega Frontier Edition has 16GB of VRAM in two stacks of 8GB each, consumer Vega would be returning to two stacks of 4GB. That's still 4x the memory density per stack of what the Fury X fielded, but it's not entirely what people expected. The benefit is that dropping from 16GB to 8GB should give AMD some breathing room on cost, with HBM2 having some serious growing pains.

    While Raja Koduri has said the consumer variants will be clocked higher than the Vega FE, AMD hasn’t historically gotten a great return here. For example: The RX 580 has a base clock 1.12x higher than the RX 480 and a boost clock 1.06x higher. The bad news is, it consumes 1.29x more power to deliver those benefits.

    If we assume that Vega's clock speed and TDP scale the same way Polaris's do, then a 1.7GHz clock is probably a pretty good assumption. If RX Vega scales better than Polaris, a 1.75GHz or even 1.8GHz clock doesn't seem completely impossible, and an 1800MHz clock would give RX Vega a 12.5 percent higher boost clock than Vega FE. That, I reckon, is where Raja is coming from when he says consumer Vega will clock higher than Vega FE. The catch is that this isn't going to be enough to match the GTX 1080 Ti, but it would give RX Vega some narrow victories/losses or practical ties against the 1080. Playing catch-up, the revised NCU in Vega takes a page from Nvidia's Pascal and can scale its operations per clock cycle, so more operations can be performed each clock than on older cards.
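    To make that extrapolation concrete, here is a small Python sketch. The 1600MHz boost for Vega FE is implied by the 12.5 percent figure above and the 300W TDP was mentioned earlier in the thread; the scaling rule is a deliberately crude linear one borrowed from the RX 480 to RX 580 observation (about 1.06x boost clock for 1.29x power), so the outputs are illustration only, not a prediction.

```python
# Crude "what if it scales like Polaris" extrapolation. Illustration only.
VEGA_FE_BOOST_MHZ = 1600   # implied by "1800MHz = 12.5% higher than Vega FE"
VEGA_FE_BOARD_W   = 300    # Vega FE air-cooled TDP mentioned in this thread

# Observed on Polaris (RX 480 -> RX 580): ~6% more boost clock for ~29% more power.
CLOCK_GAIN = 0.06
POWER_GAIN = 0.29

for target_mhz in (1700, 1750, 1800):
    gain = target_mhz / VEGA_FE_BOOST_MHZ - 1
    est_watts = VEGA_FE_BOARD_W * (1 + gain * POWER_GAIN / CLOCK_GAIN)
    print(f"{target_mhz} MHz (+{gain:.1%} over FE) -> ~{est_watts:.0f} W "
          "if power scales the way Polaris did")
```

    Those wattages are almost certainly pessimistic (binning, undervolting and HBM2 versus GDDR5 all change the picture), but they show why the headroom question keeps coming up.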

    PC Perspective's tests show that AMD has lost its superior scaling from 1080p to 4K, and that Nvidia cards now show better scaling from 1080p to 4K than AMD's. That should be viewed through the lens of early drivers, not a gaming card, etc., but it is entirely plausible too.

    That's fine and dandy, but: it will be cheaper; it is a year late (GeForce GTX 1080 - May 27, 2016 | GeForce GTX 1080 Ti - March 10, 2017 | Vega - August 2017, over a year later than the 1080); it will use more power; and there are scaling, cooling, heat and noise concerns. The one hope I have is that the BIOS will come good, but time is running out fast.

    I reckon the:
    Vega XL will slot between the 1060 & 1070
    Vega XT between 1070 & 1080
    Vega XTX (oc) will try and close the gap with the 1080 and approach the 1080TI in places (but only on the water cooled variant)

    It'll all be a paper launch in August, with only a few lower-spec partner cards following at first and the rest coming much later. Apple had common sense as usual and announced their new Macs with RX Vega for December.



  • Quizzical, Member Legendary, Posts: 25,355
    So if multiple websites like Toms are saying Vega consumer cards are limited to 8GB of RAM. No exceptions. This implies a few things about AMD’s current positioning and the state of HBM2. While the Vega Frontier Edition has 16GB of VRAM in two stacks of 8GB each, consumer Vegas would be returning to two stacks of 4GB. That’s still 4x the memory density per stack than what the Fury X fielded, but it’s not entirely what people expected. The benefit there is dropping from 16GB to 8GB should give AMD some breathing room on cost with HBM2 having some serious growing pains. 
    AMD and Nvidia pretty much never allow board partners to attach the maximum amount of memory physically possible on consumer desktop cards except to the lowest end junk.  Larger memory capacity is reserved for professional cards.  So that's about as surprising as saying that the consumer card won't support ECC memory.
  • ianicus, Member Uncommon, Posts: 665
    wait, we are talking about AMD gpu's? *turns around and walks back out the door*......*vomits*
    "Well let me just quote the late-great Colonel Sanders, who said…’I’m too drunk to taste this chicken." - Ricky Bobby
  • AmazingAvery, Age of Conan Advocate, Member Uncommon, Posts: 7,188
    Quizzical said:
    So if multiple websites like Toms are saying Vega consumer cards are limited to 8GB of RAM. No exceptions. This implies a few things about AMD’s current positioning and the state of HBM2. While the Vega Frontier Edition has 16GB of VRAM in two stacks of 8GB each, consumer Vegas would be returning to two stacks of 4GB. That’s still 4x the memory density per stack than what the Fury X fielded, but it’s not entirely what people expected. The benefit there is dropping from 16GB to 8GB should give AMD some breathing room on cost with HBM2 having some serious growing pains. 
    AMD and Nvidia pretty much never allow board partners to attach the maximum amount of memory physically possible on consumer desktop cards except to the lowest end junk.  Larger memory capacity is reserved for professional cards.  So that's about as surprising as saying that the consumer card won't support ECC memory.
    What about AMD themselves? A bunch of places are saying AMD's consumer cards will be limited to 8GB of RAM only. The expectation was that Vega would compete at the high end again, and along with that there was a lot of expectation that AMD themselves would offer a 16GB consumer flavour.



  • Quizzical, Member Legendary, Posts: 25,355
    edited July 2017
    Quizzical said:
    So if multiple websites like Toms are saying Vega consumer cards are limited to 8GB of RAM. No exceptions. This implies a few things about AMD’s current positioning and the state of HBM2. While the Vega Frontier Edition has 16GB of VRAM in two stacks of 8GB each, consumer Vegas would be returning to two stacks of 4GB. That’s still 4x the memory density per stack than what the Fury X fielded, but it’s not entirely what people expected. The benefit there is dropping from 16GB to 8GB should give AMD some breathing room on cost with HBM2 having some serious growing pains. 
    AMD and Nvidia pretty much never allow board partners to attach the maximum amount of memory physically possible on consumer desktop cards except to the lowest end junk.  Larger memory capacity is reserved for professional cards.  So that's about as surprising as saying that the consumer card won't support ECC memory.
    What about AMD themselves? AMD consumer cards limited to 8GB of RAM only is what a bunch of places are saying. The expectation was Vega will compete in high end again and along with that there was lots of expectation that it will have a 16GB consumer flavour from AMD themselves.
    AMD almost certainly won't allow a 16 GB consumer Vega 10 card unless the GPU is around for long enough that it's possible to get 16 GB HBM2 stacks, which would allow for a 32 GB card.  It surprises me that they're even willing to offer a 16 GB Vega 10 card for only $1000, as usually the high memory professional cards cost a lot more than that.

    For example, there's no physical reason why Nvidia couldn't make a GeForce GTX Titan Xp with 24 GB of memory.  Here's a 24 GB card based on the same GPU:

    https://www.amazon.com/PNY-NVIDIA-Quadro-VCQP6000-PB-384-bit/dp/B01MRD365C/ref=sr_1_1?ie=UTF8&qid=1500250257&sr=8-1

    Or a GeForce GTX 1080 with 16 GB of memory.  Again, a professional card:

    https://www.amazon.com/PNY-Quadro-VCQP5000-PB-256-bit-EXPRESS/dp/B01N6W4CVB/ref=sr_1_1?ie=UTF8&qid=1500250300&sr=8-1

    Or a GeForce GTX 980 Ti with 24 GB.  Another professional card:

    https://www.amazon.com/PNY-Video-VCQM6000-24GB-PB-Quadro-PCIE3-0/dp/B01DPPPCM2/ref=sr_1_2?ie=UTF8&qid=1500250257&sr=8-2

    Likewise, AMD could have built a 32 GB version of a Radeon R9 390X.  Again, a professional version:

    https://www.amazon.com/AMD-FirePro-S9170-Graphics-100-505982/dp/B01JIILTOK?SubscriptionId=AKIAILSHYYTFIVPWUY6Q

    Admittedly, the 32 GB card came later, but they built 16 GB cards based on the same GPU shortly after the first Radeon cards based on it launched.  Yet they persistently restricted the consumer cards to at most 8 GB.

    So why don't they build consumer GPUs with more memory?  Look what those cards cost.  It's not the memory itself that is so expensive; the PlayStation 4 used what at the time were the largest capacity GDDR5 chips in existence and as many of them as were physically possible given the memory controller.  AMD and Nvidia want you to have to pay extra if you need huge amounts of memory.

    The only exception that comes to mind is AMD's Fiji GPU, where there was only one size possible of HBM, at 1 GB per stack.  And that only allowed 4 GB for the entire card.  If HBM had offered 2 GB and 4 GB stacks, they probably would have restricted the consumer cards to 2 GB per stack for an 8 GB card in total, and reserved the 4 GB stacks for professional cards.

    The Vega Frontier Edition is, if anything, a departure from this longstanding approach, as it makes the maximum memory capacity not that much more expensive than consumer cards.  If you think $1000 is outlandish, click on my links above and see what the real expensive cards cost.  And then consider that none of those are even the high end.  The top of the line is this:

    http://www.compsource.com/ttechnote.asp?part_no=VCQGP100PB&vid=356&src=F

    And that's the cheapest that I could find it.  Nvidia is so determined to keep people from buying a cheap consumer version of that GPU that they don't even offer a GeForce card based on it.
  • Ridelynn, Member Epic, Posts: 7,383
    What would you do with all that VRAM today anyway?

    Sure, more is better, but unless you're running 4K with ridiculous levels of AA and super-high-resolution texture packs, I can't think of any way to even touch more than 8G with today's titles - and even then, the GPUs themselves don't have enough power to push that many pixels anyway.

    You could find a ~select few~ cases where the 4G of HBM1 handicapped a Fury, but there were only a select few.

    There's this as well, which you can take or leave as you wish, as it's hardly comprehensive, but it more or less mirrors my thoughts on the matter:
    http://www.tweaktown.com/tweakipedia/89/much-vram-need-1080p-1440p-4k/index.html
  • Ozmodan, Member Epic, Posts: 9,726
    Quizzical said:
    So if multiple websites like Toms are saying Vega consumer cards are limited to 8GB of RAM. No exceptions. This implies a few things about AMD’s current positioning and the state of HBM2. While the Vega Frontier Edition has 16GB of VRAM in two stacks of 8GB each, consumer Vegas would be returning to two stacks of 4GB. That’s still 4x the memory density per stack than what the Fury X fielded, but it’s not entirely what people expected. The benefit there is dropping from 16GB to 8GB should give AMD some breathing room on cost with HBM2 having some serious growing pains. 
    AMD and Nvidia pretty much never allow board partners to attach the maximum amount of memory physically possible on consumer desktop cards except to the lowest end junk.  Larger memory capacity is reserved for professional cards.  So that's about as surprising as saying that the consumer card won't support ECC memory.
    What about AMD themselves? AMD consumer cards limited to 8GB of RAM only is what a bunch of places are saying. The expectation was Vega will compete in high end again and along with that there was lots of expectation that it will have a 16GB consumer flavour from AMD themselves.
    Come on, the cards are not out yet; what you are ranting about is all supposition. Let's wait for the cards to come out first. What is the big deal about an 8GB memory limit? 1080s only come with that, and that's still a damn good card. Take a chill until we actually see some reviews.

    As for the 1060s, I really doubt many miners are buying them; their memory setup just does not make them a very good candidate for that.

  • laserit, Member Legendary, Posts: 7,591
    Ridelynn said:
    What would you do with all that VRAM today anyway?

    Sure, more is better, but unless your running 4K with rediculous levels of AA and super high resolution texture packs, I can't think of any way to even touch more than 8G with today's titles - and even then, the GPUs themselves don't have enough power to push that many pixels anyway.

    You could find a ~select few~ cases where the 4G of HBM1 handicapped a Fury, but there were only a select few.

    There's this as well, which you can take or leave as you wish, as it's hardly comrehensive, but more or less mirrors my thoughts on the matter:
    http://www.tweaktown.com/tweakipedia/89/much-vram-need-1080p-1440p-4k/index.html
    8k is here and hopefully we won't need to use AA with it.

    I'm averaging 8g of vram with Prepar3d v4 @ 4k and that will go up.

    "Be water my friend" - Bruce Lee

  • Ridelynn, Member Epic, Posts: 7,383
    laserit said:
    Ridelynn said:
    What would you do with all that VRAM today anyway?

    Sure, more is better, but unless your running 4K with rediculous levels of AA and super high resolution texture packs, I can't think of any way to even touch more than 8G with today's titles - and even then, the GPUs themselves don't have enough power to push that many pixels anyway.

    You could find a ~select few~ cases where the 4G of HBM1 handicapped a Fury, but there were only a select few.

    There's this as well, which you can take or leave as you wish, as it's hardly comrehensive, but more or less mirrors my thoughts on the matter:
    http://www.tweaktown.com/tweakipedia/89/much-vram-need-1080p-1440p-4k/index.html
    8k is here and hopefully we won't need to use AA with it.

    I'm averaging 8g of vram with Prepar3d v4 @ 4k and that will go up.
    You are most definitely an outlier and not a typical gamer though. Not everyone runs Lockheed Martin software. 

    The typical gamer still hasn't moved on from 1080p, and won't for quite some time. We don't have anything that can drive 8K yet with a decent level of performance, and we can just barely manage 4K with current technology. So why should AMD be faulted for not future-proofing a GPU today for 8K if no GPU from any manufacturer is going to be able to drive it satisfactorily for the next few years?
  • laserit, Member Legendary, Posts: 7,591
    Ridelynn said:
    laserit said:
    Ridelynn said:
    What would you do with all that VRAM today anyway?

    Sure, more is better, but unless your running 4K with rediculous levels of AA and super high resolution texture packs, I can't think of any way to even touch more than 8G with today's titles - and even then, the GPUs themselves don't have enough power to push that many pixels anyway.

    You could find a ~select few~ cases where the 4G of HBM1 handicapped a Fury, but there were only a select few.

    There's this as well, which you can take or leave as you wish, as it's hardly comrehensive, but more or less mirrors my thoughts on the matter:
    http://www.tweaktown.com/tweakipedia/89/much-vram-need-1080p-1440p-4k/index.html
    8k is here and hopefully we won't need to use AA with it.

    I'm averaging 8g of vram with Prepar3d v4 @ 4k and that will go up.
    You are most definitely an outlier and not a typical gamer though. Not everyone runs Lockheed Martin software. 

    The typical gamer still hasn't moved from 1080, and won't for quite some time. We don't have anything that can drive 8K yet with a decent level of performance, and can just barely manage 4k with current technology. So why should AMD be faulted for not future-proofing a GPU today for 8K if no GPU from any manufacturer is  going to be able to satisfactorily drive it for the next few years?
    Going forward, pretty much any budget GPU is going to run 1080p at more than acceptable levels. Buying a mid-range GPU (GTX 1070 or equivalent) to run at 1080p is overkill and a waste of money.

    Running GTA V at 2K on my 4770K/GTX 1080 at very high (not quite maxed) settings, I'm pushing 4GB+ of VRAM.

    If we're talking about moving forward: at 4K you'll want a minimum of 8GB of VRAM if you're planning on running high settings at high frame rates, and 10-12GB for extreme settings. I can easily see 16GB being the minimum for 8K.

    A new 1080p TV is the budget model and a 4K TV is now the norm. I bought a 46" 4K Samsung for $500 Canadian.

    It's just evolution.

    "Be water my friend" - Bruce Lee

  • Ridelynn, Member Epic, Posts: 7,383
    I agree about 1080 - just about anything discrete will run 1080 at some decent level of fidelity.

    So, just for instance, if Vega is targeting 4K, and 8G of RAM is probably enough for 4K at the fidelity levels that Vega will have the performance to drive, then why are some people lambasting it for only ~possibly~ including 8G of RAM? We know that Vega has no chance in hell of running anything at 8K - not even nVidia's best today can do that, and unless Vega is more than 4X the performance of the 1080Ti, it won't have much of a chance of running it either.

    8K monitors may be just around the corner, but GPUs still have a good while to get there. Heck, GPUs still haven't really caught up to 4K, even at the very high end.

    That's my point - why lambast Vega for "not enough memory" when, even at 8G, it appears to have enough memory? Let's not forget it's HBM, so supposedly it's much higher bandwidth than the GDDR5X that nVidia is using. I know AMD was claiming on the Fury X that its 4G of HBM1 was roughly equivalent to having 8G of GDDR5. I don't know that I necessarily believe that to that magnitude, because replacing capacity with bandwidth isn't an exact 1:1 tradeoff, but I will give them some leeway on it - the Fury X was very rarely bottlenecked by VRAM even though it only had 4G.

    Besides, there are plenty of other legitimate reasons to lambast Vega. We can very legitimately berate AMD for it being so late to market. We can conjecture about how its performance will stack up based on preliminary data from the Frontier Edition. We can discuss our opinions on whether drivers and hardware changes will make a significant difference on the consumer edition of the card. We can speculate about pricing and how that will affect adoption. VRAM as a major concern, though, compared to all those other factors, is just a really weak straw man.
  • laserit, Member Legendary, Posts: 7,591
    It would be nice if AMD can at least match Nvidia and keep the ball rolling. My GTX 1080 ran 4K adequately; my GTX 1080 Ti runs 4K very nicely at very high settings.

    "Be water my friend" - Bruce Lee

  • Ridelynn, Member Epic, Posts: 7,383
    edited July 2017
    I'd also like to point out, in the spirit of the Original Post - 2 months to have a Vega product in Professional Graphics, Consumer Graphics, and Compute lines. From May 22.

    Now, we aren't two months to the date of the post yet - but we are getting pretty darn close by now.

    And we only have Frontier, which is... I get confused - I think it's the compute card, but I have to say I'm not 100% certain on that. If it is, then we still haven't seen the Consumer or Professional Graphics models come out, and we still only have a handful of suspect leaks and the performance of a released compute card to base anything on.

    *edit* Just to be clear, I'm not throwing Quizzical under the bus for his timeline, but rather AMD, because that's where the timeline is coming from. They haven't blown through it yet, but the clock is ticking and we don't have a lot of time left for a 2Q17 release window.
  • Quizzical, Member Legendary, Posts: 25,355
    edited July 2017
    AMD has said that the Radeon RX Vega will launch at SIGGRAPH at the end of July.  So no, it's not going to be inside of two months from when the original comment was made.

    I'm not really sure what to make of the Frontier Edition card.  It's not really the compute card, as the form factor won't play nicely with servers and doesn't really seem intended to let you put more than about two in the same rig.  The liquid-cooled version is egregiously bad for servers.  It's not really the professional graphics card, as it doesn't have the certified drivers for particular applications and those cards tend to cost a lot more.  And it's too expensive to be a consumer card.
  • AmazingAvery, Age of Conan Advocate, Member Uncommon, Posts: 7,188
    edited July 2017
    laserit said:
    It would be nice if AMD can at least match Nvidia and keep the ball rolling. My gtx1080 ran 4k adequately,  my gtx1080ti runs 4k very nicely @ very high settings. 
    I agree.

    When I buy a GPU I expect it to be good for a few years. 

    My considerations in terms of VRAM, all normal questions over the next few years:

    • 4K textures will become standard - not bastardized 1080p or 1440p textures at a 4K-or-higher resolution
    • Developers making games for the new consoles are all targeting the 8GB of VRAM there (full potential), so there is potential for PC ports to have higher-quality artwork, larger textures, more objects, fancier effects, etc. GTA V uses more than 4GB of VRAM at 1080p with everything maxed today.
    • It is not resolution that eats memory but textures; you can use high-quality textures regardless of resolution. Texture quality has been ramped up in most games in 2017 compared to 2015. (See the quick sketch after this list.)
    • 4K monitors supporting high refresh rates are coming to market more this year and into next; with prices dropping, multi-4K-monitor setups are not too far out of reach over the next while
    • 4K TV adoption is growing fast (at a faster rate than HDTV did - fact)
    • Streaming assist - NVIDIA & Netflix are now previewing 4K streaming; 4K Netflix support should fully work on all Pascal GPUs with 3GB or more of VRAM (think about what the needs will be in the near future)
    • SLI and Crossfire do not double, triple or quadruple your VRAM; only the processing power is affected, and this is entirely dependent on the developers.
    • For today, personally, I'd consider a card with 6GB of VRAM or more, especially if you're the type who likes to download game mods and/or high-resolution texture packs, which are sometimes specifically created to deliver a greater level of in-game detail for high-end cards that have extra memory capacity. That's today - I'm thinking about tomorrow.
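    On the "textures, not resolution" point above, here is a quick Python sketch of how little the raw render targets themselves cost. The buffer count and bytes per pixel are assumptions and real engines vary a lot; the order of magnitude is the point.

```python
# Rough size of full-resolution render targets. Assumed values, not engine data.
BYTES_PER_PIXEL = 4   # e.g. 8-bit RGBA colour or a 32-bit depth buffer
NUM_BUFFERS     = 6   # assumed: back buffer, depth, a few G-buffer/post targets

resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440),
               "4K": (3840, 2160), "8K": (7680, 4320)}

for name, (w, h) in resolutions.items():
    mb = w * h * BYTES_PER_PIXEL * NUM_BUFFERS / 2**20
    print(f"{name:5s}: ~{mb:5.0f} MB of render targets")
```

    Even at 8K that is well under 1GB, so almost all of an 8GB card is left for textures, geometry and streaming pools, which is why texture quality settings move the VRAM needle far more than the output resolution does.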

    Quote - But AMD isn't just calling this HBM or VRAM; it's now a "High-Bandwidth Cache" (HBC) and there's also a new "High-Bandwidth Cache Controller" (HBCC). The distinction is important, because the HBCC plays a much more prominent role in memory accesses. AMD calls this a "completely new memory hierarchy." That's probably a bit of hyperbole, but the idea is to better enable the GPU to work with large data sets, which is becoming an increasingly difficult problem. 
    As an example of why the HBCC is important, AMD profiled VRAM use for The Witcher 3 and Fallout 4. In both cases, the amount of VRAM allocated is around 2-3 times larger than the amount of VRAM actually 'touched' (accessed) during gameplay. The HBCC takes this into account, allowing the GPU to potentially work with significantly larger data sets, providing a 512TB virtual address space.
    AMD also demonstrated a real-time physically rendered image of a house using more than 600GB of data, running on what I assume is an 8GB Vega card. If the HBCC works as AMD claims, even a 8GB card could potentially behave more like an 16-24GB VRAM card, while a 16GB card would equal a 32-48GB card.

    • You can get a GTX 1080 Ti with 11GB of VRAM today
    • HBCC will reduce VRAM allocation; it is a software answer to having less VRAM hardware (why develop it otherwise?)
    • Which is easier for a dev: 1) use the VRAM that is there in normal operation, or 2) do extra work for HBCC?
    I guess the fruits of HBCC won't be seen for a while, and for now it has to be taken on faith.
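    As a toy illustration of the "allocated versus touched" gap that AMD's profiling claim above relies on (this is not how HBCC actually works, just a way to see where a 2-3x ratio can come from), here is a short Python sketch with made-up numbers:

```python
# Toy model: a game allocates far more VRAM than any stretch of frames touches.
# Every figure here is invented for illustration; none of it is AMD data.
import random

random.seed(1)
PAGE_MB         = 2          # pretend residency granularity
ALLOCATED_PAGES = 4096       # ~8 GB worth of allocations
HOT_FRACTION    = 0.35       # assume ~35% of assets belong to the active scene

hot = random.sample(range(ALLOCATED_PAGES), int(ALLOCATED_PAGES * HOT_FRACTION))

touched = set()
for frame in range(100):
    touched.update(random.choices(hot, k=400))                   # hot assets
    touched.update(random.choices(range(ALLOCATED_PAGES), k=2))  # stray cold reads

alloc_gb   = ALLOCATED_PAGES * PAGE_MB / 1024
touched_gb = len(touched) * PAGE_MB / 1024
print(f"allocated ~{alloc_gb:.1f} GB, touched ~{touched_gb:.1f} GB "
      f"-> {alloc_gb / touched_gb:.1f}x over-allocation")
```

    If a cache controller can keep roughly the touched set resident and page in the rest on demand, an 8GB card can cover an allocation footprint well beyond 8GB. Whether that holds up in real games is exactly the part being taken on faith.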



  • Quizzical, Member Legendary, Posts: 25,355
    It's not possible to make 8 GB of memory magically function as 16 GB.  Some things are compressible, but GPUs are already doing that.

    If you're constantly having to copy data back and forth over PCI Express because the game engine thinks you have 16 GB, but only 8 GB is in video memory and the other 8 GB is in system memory, you can easily turn that into a huge PCI Express bottleneck.  From the perspective of a developer, I'd want to at minimum be notified when it happens, and probably also have the option to disable it.
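    To put some rough (assumed) numbers on that bottleneck, here is a quick sketch against the theoretical ~15.75 GB/s of a PCIe 3.0 x16 link:

```python
# How much per-frame paging traffic a PCIe 3.0 x16 link can absorb. Rough numbers.
PCIE3_X16_GBPS = 15.75   # theoretical one-direction bandwidth, GB/s
TARGET_FPS     = 60

for spill_gb in (0.05, 0.25, 0.5, 1.0):        # data paged in per frame (assumed)
    needed_gbps = spill_gb * TARGET_FPS
    fps_ceiling = PCIE3_X16_GBPS / spill_gb
    print(f"{spill_gb:4.2f} GB/frame -> {needed_gbps:5.1f} GB/s at 60 fps, "
          f"bus-limited ceiling ~{fps_ceiling:5.1f} fps")
```

    A small slice of the working set crossing the bus each frame is survivable, but a few hundred MB of misses per frame and the bus, not the GPU, sets the frame rate, which is why a developer would at least want to be notified when it happens.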

    Unless, of course, you're not actually copying that data over PCI Express.  APUs, anyone?  Right now, the CPU and GPU generally share a memory pool so there's no real difference between CPU memory and GPU memory, and this means not enough GPU memory bandwidth.  But how about an APU with HBM on package?
  • Cleffy, Member Rare, Posts: 6,412
    edited July 2017
    I totally managed to over-saturate the Fury X's VRAM. When Bethesda released the high-res texture pack for Fallout 4, it was micro-stuttering a bit. But that was with 8GB of recommended VRAM, poorly optimized shaders and textures from Bethesda, and 4K resolution.
  • laserit, Member Legendary, Posts: 7,591
    Torval said:
    laserit said:
    Ridelynn said:
    laserit said:
    Ridelynn said:
    What would you do with all that VRAM today anyway?

    Sure, more is better, but unless your running 4K with rediculous levels of AA and super high resolution texture packs, I can't think of any way to even touch more than 8G with today's titles - and even then, the GPUs themselves don't have enough power to push that many pixels anyway.

    You could find a ~select few~ cases where the 4G of HBM1 handicapped a Fury, but there were only a select few.

    There's this as well, which you can take or leave as you wish, as it's hardly comrehensive, but more or less mirrors my thoughts on the matter:
    http://www.tweaktown.com/tweakipedia/89/much-vram-need-1080p-1440p-4k/index.html
    8k is here and hopefully we won't need to use AA with it.

    I'm averaging 8g of vram with Prepar3d v4 @ 4k and that will go up.
    You are most definitely an outlier and not a typical gamer though. Not everyone runs Lockheed Martin software. 

    The typical gamer still hasn't moved from 1080, and won't for quite some time. We don't have anything that can drive 8K yet with a decent level of performance, and can just barely manage 4k with current technology. So why should AMD be faulted for not future-proofing a GPU today for 8K if no GPU from any manufacturer is  going to be able to satisfactorily drive it for the next few years?
    Going forward, pretty much any budget GPU is going to run 1080p at more than acceptable levels. Buying a mid grade (gtx1070 or equivalent) gpu to run @1080p is overkill and a waste of money.

    Running GTA V @2k on my 4770k/GTX 1080 @ very high (not quite maxed) settings, I'm pushing 4g+ Vram.

    If we're talking about moving forward... @4k you'll want a minimum of 8g Vram if your planning on running on high settings @ high frames, 10g-12g for extreme settings.  I can easily see 16g being minimal for 8k.

    A new 1080p TV is the budget model and a 4k TV is now the norm. I bought a 46" 4k Samsung for $500 Canadian.

    Its just evolution.
    I don't agree about the 1070 being overkill. I have a 970 and there are some things it can't handle at 1080 and give me a consistent 60 frames (Shadow of Mordor as an example). But most MMOs I can crank up and do fine.

    I agree that 1080p is standard mid-range now and low high end depending on the monitor, but I think there is still a place for the mid-high range cards in there. Also as hardware keeps improving that same performance can be delivered for less power, heat, and cost.

    4K in TV is the norm. That's what we'll be buying for our next TV. I've been looking at 50" 4K TVs which will be a huge step up from our 32" 720p Phillips. :lol: But 4K video playback is a lot different and less taxing than 4K being rendered dynamically in a game.

    In my opinion 8K is just arriving and I think it will take a few years before that is standard. There are other considerations than graphics hardware. There is bandwidth to consider which I think is one of the looming issues in the tech industry. All of our data is getting fatter and needing faster throughput. Throughput and storage access are
    I've never owned a 970 or a 1070, but a 4770K/GTX 1080 will run GTA V at 80+ frames consistently at 2K. The lowest frame in the benchmark was 58, and that was only for a split second, with settings pretty much maxed.

    I upgraded from a Titan X (non-Pascal) to the GTX 1080 on that system and the difference in performance was very significant. I'd imagine the difference between a 970 and a 1070 would be pretty big.

    "Be water my friend" - Bruce Lee

  • AmazingAvery, Age of Conan Advocate, Member Uncommon, Posts: 7,188
    edited July 2017
    Small video from Budapest event today, click for good subtitles.

    Edit: Article too on subject: https://www.pcinvasion.com/amd-vega-budapest-event-provides-answers-performance





