

Rumor: Nvidia’s Pascal Architecture Is In Trouble With Asynchronous Compute

blueturtle13 Member Legendary Posts: 12,363

According to a report published by Bitsandchips.it, Nvidia's upcoming Pascal architecture will not be significantly better at Async Compute than its predecessor (Maxwell). Needless to say, this one is 100% a rumor and should be taken with a grain of salt. The Pascal architecture will be landing sometime later this year and improves upon Maxwell with far better FP64 support, among a plethora of other things. The report further mentions that Nvidia is hoping to win this round on raw performance numbers alone.

[Image: Nvidia Pascal GTX 1080 / GTX 1070 / GTX 1060 GPUs (WCCFtech)]

Pascal architecture allegedly facing difficulty with asynchronous compute

Asynchronous Compute has been a deal sweetener for Radeon buyers ever since the DirectX 12 API hit the stage. AMD currently leads in all DirectX 12 benchmarks available on the market right now, and this is largely because its GPUs can process ASync concurrently. Interestingly, Nvidia GPUs perform much better with ASync turned off. This is probably because Nvidia has disabled ASync in its driver suite: its GPUs cannot process ASync concurrently at the hardware level and instead require context switching, which is expensive in terms of frame rate. Geforce GPUs rely instead on a technique called pre-emption, which is why frame rates become unpredictable when ASync is forced.
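To make "processing ASync concurrently" concrete, here is a minimal D3D12 sketch (C++, error handling omitted, function name ours): the API merely exposes independent queues, and whether the compute queue truly overlaps the graphics queue or is serialized behind it, as described above, is decided by the hardware and driver.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// An app "uses async compute" simply by submitting work to a separate
// compute queue alongside the direct (graphics) queue.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));

    // Work on the two queues carries no ordering guarantee unless the app
    // inserts fences; that independence is what concurrent hardware can
    // exploit and what context-switching hardware must serialize.
}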

[Image: Asynchronous Shaders]

On the other hand, Maxwell remains the only architecture on the discrete GPU market right now that supports DirectX 12 Feature Level 12_1 (Radeon cards only extend up to Feature Level 12_0). This allows existing Geforce cards to use advanced rendering techniques made available by Direct3D 12 that are not available to AMD users; VXGI/VXAO and Hybrid Ray Traced Shadows are examples of such features. So on both sides of the fence, whether you are in the red camp or the green camp, there is an upside and a downside. Pascal and Polaris, however, were supposed to bridge this gap and deliver full DirectX 12 compatibility on both sides. Before we let that train of thought run away, there is one thing we must keep in mind: the entire point of features like ASync is to maximize the use of a GPU's resources and thereby extract the maximum possible performance.
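The feature level gap is easy to probe from code. A minimal sketch, assuming an already created D3D12 device (the helper name is ours):

#include <d3d12.h>

// Ask the runtime for the highest supported feature level. Per the article,
// Maxwell reports 12_1 here while current Radeons report 12_0.
bool SupportsFeatureLevel12_1(ID3D12Device* device)
{
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS data = {};
    data.NumFeatureLevels = _countof(levels);
    data.pFeatureLevelsRequested = levels;
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                           &data, sizeof(data))))
        return false;
    return data.MaxSupportedFeatureLevel >= D3D_FEATURE_LEVEL_12_1;
}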


One reason we think this rumor might be true is that chip design isn't something that happens overnight. In fact, it takes an architecture many years to go from the drawing board to the shelves. Asynchronous Compute was hyped and became a major point of interest only in the last year, which is most definitely not enough time for Nvidia to do anything about it. If Asynchronous Compute wasn't a focus when the Pascal chips were initially designed, then there is nothing Nvidia can do about it this late in the game. The report further mentions that Nvidia also recently published its entire GameWorks SDK on GitHub, which could be seen as a move to make sure that all games that utilize its technology are fully optimized to leverage Nvidia GPU capabilities (No ASync + Bad Game Optimization = Bad Combo).

What we do know for a fact is that Nvidia has been focusing on its FP64, or double precision, performance, which was completely culled in Maxwell to offer superior value to gamers (whose workloads are mostly single precision). So whether or not a lack of ASync support in Pascal translates into a real-world loss depends on how the raw performance of Pascal graphics cards compares to ASync-enabled Radeon graphics cards. Even if this report is true, Nvidia could still win the next round by focusing on the raw gaming performance of its Pascal GPUs (techniques like ASync can only help a GPU utilize the maximum potential of its existing resources, not add more power out of nowhere).
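That closing point is easy to put numbers on. A toy model in C++, with every figure invented for illustration:

#include <cstdio>

int main()
{
    const double raw_tflops    = 6.0;   // hypothetical card
    const double util_no_async = 0.70;  // shaders partly idle in some passes
    const double util_async    = 0.85;  // idle bubbles filled with compute

    std::printf("effective throughput: %.2f vs %.2f TFLOPs\n",
                raw_tflops * util_no_async,  // 4.20
                raw_tflops * util_async);    // 5.10
    // A card with ~21% more raw throughput and no async lands in the same
    // place (7.3 * 0.70 = 5.1); that is the "win on raw performance" path.
    return 0;
}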





A turtle doesn't move when it sticks its neck out.

Comments

  • Malabooga Member Uncommon Posts: 2,977
    edited March 2016
    This is the thing:

    When you need 20% more powerful hardware to compete because your own hardware is less efficient...youre in trouble. You can argue that NVidia has enough money to sell cards at a loss, but thats pretty bad for them.

    2nd thing is that if they bring back FP64-capable chips, a huge chunk of their power efficiency is gone (the main reason they used somewhat less power is that they cut out portions of the chip, namely the FP64 double precision part). Its also why they still use Kepler in some of their professional GPUs, as Kepler isn't a gimped chip.

    And i said this same thing almost a year ago, as soon as Stardock went public with the info (in spite of being threatened and blackmailed by NVidia, who still claimed their cards can do async. In fact this could be another class action lawsuit against NVidia, just like the 3.5GB GTX 970)
  • Xyire Member Uncommon Posts: 152
    There is the point that dx12 isn't well adopted yet as its pretty new.  Looks like nvidia cards will suffer comparatively in dx12 games for now.  Of course, by the next gen card that presumably will have async capabilities, dx12 will be much more mainstream.  Not being the faster card on dx12 games for 1 card generation isn't going to hurt them - it will only matter if  it keeps up... especially if they are faster on dx11 this generation.

    I think most people don't buy nvidia cards cause they are the fastest cards anyway, they usually buy them because they trust the brand more.
  • Torval Member Legendary Posts: 19,986
    AMD has an opportunity to take the lead in the market. They have already shown they are the better platform for 4k at this point. Once they get on a 14/16nm process node and HBM they could really kick their power hunger problem and shine.

    Here's to hoping because it will only push Nvidia to get better. They've been in the pole position for too long and need a little incentive to try harder.
    Fedora - A modern, free, and open source Operating System. https://getfedora.org/

    traveller, interloper, anomaly, iteration


  • pingo Member Uncommon Posts: 608
    As someone who has always been an Nvidia guy, this could be good news if ATi gets a massive push. We need competition and a viable pricing war to give us benefits. Too many generations of Radeon cards have been lagging behind.


    Now I just wish they would also do mad gains in the CPU space. If AMD chips could somehow find a power gap to force some competition on Intel, that would be amazing.
  • Malabooga Member Uncommon Posts: 2,977
    edited March 2016
    Xyire said:
    There is the point that dx12 isn't well adopted yet as its pretty new.  Looks like nvidia cards will suffer comparatively in dx12 games for now.  Of course, by the next gen card that presumably will have async capabilities, dx12 will be much more mainstream.  Not being the faster card on dx12 games for 1 card generation isn't going to hurt them - it will only matter if  it keeps up... especially if they are faster on dx11 this generation.

    I think most people don't buy nvidia cards cause they are the fastest cards anyway, they usually buy them because they trust the brand more.
    Nvidia doesn't gain or lose anything with DX12; for now it performs the same on DX11 and DX12. AMD gains up to 20%. However, NVidia actually loses performance with async compute because the overhead of software emulation costs more than it gains.

    DX12 is gaining huge adoption, pretty much all AAA games i have heard of coming out in 2016 will be dx12. And MS wants it badly and MS>>>>NVidia. Also 50m consoles use API like DX12/Vulkan and async compute (thats why games work much better on consoles than on comparable PC hardware)

    Win10 is pretty much no.1 OS for gamers (surpassed win7 in march)

    http://store.steampowered.com/hwsurvey/directx/

    And Vulkan is the same as its based on Mantle and brings best parts (less CPU overhead and async compute) to the table - as DX12 does. NVidia wont have DX12 GPU until 2018. Thats a looooooooong time. AMD lost 20% of market in 1 generation of GPUs, basically in less than a year.

    People buy NVidia cards because of PR. Thats one area where NVidia is superior (and they have money to cheat). But that has also backfired on them, like gimpworks. You just cannot push nonsensical stuff on people. Natural selection will keep the good stuff and discard the rest. Pretty much the only gimpworks feature that has seen wider usage is HBAO. The rest has pretty much been discarded by devs once they saw how badly it performs on ANY hardware (khm *hairworks* khm) and they cannot do anything about it since it is a black box provided by NVidia.

    What dev wants to see its game drag itself at 1080p on a $600+ GPU because of 1 tick? Its bad for their business.
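    A toy timing model of the emulation-overhead claim in the post above; every millisecond figure is invented for illustration:

    #include <algorithm>
    #include <cstdio>

    int main()
    {
        const double gfx_ms = 12.0, compute_ms = 3.0, switch_ms = 0.8;

        double concurrent = std::max(gfx_ms, compute_ms);        // true hw async
        double serial     = gfx_ms + compute_ms;                  // async off
        double emulated   = gfx_ms + compute_ms + 2 * switch_ms;  // ctx switches

        std::printf("%.1f ms concurrent, %.1f ms serial, %.1f ms emulated\n",
                    concurrent, serial, emulated);
        // 12.0 vs 15.0 vs 16.6: hardware that cannot overlap the work only
        // adds switch cost when async is forced, so it benchmarks worse
        // than with async off.
        return 0;
    }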
  • Ridelynn Member Epic Posts: 7,060
    Honestly,

    I'm just not that worried about async compute or DX12. Nothing I play right now uses it. Once something does come along that I want to play that supports it, if it's important to me at the time, I'll take a look at what's on the market then.

    But right now, it just seems like DX12/Vulkan/Async/Mantle, whatever, has been "ready to go" for a long time now, since even before Win10. And a few titles support it, but not many. And of those, most don't run all that well (cough* ark *cough).

    Don't get me wrong, I giggle a bit because it's one area where AMD beats the snot out of nVidia, and I'm a fan of the underdog. But it just isn't that important right now, especially since Pascal, for all intents and purposes, is still firmly in the "Vaporware" category.
  • Leon1e Member Uncommon Posts: 791
    edited March 2016
    Idk what those guys at wccf tech are smoking but nvidia actually supports fewer features in Dx12 than AMD. In part because AMD actually has experience with this kind of thing (e.g. Mantle, which has HUGE weight in the Dx12 specification, and vulkan is basically mantle 2.0)

    Here's a table: http://i.imgur.com/aAqqZYo.png
  • H0urg1ass Member Epic Posts: 2,380
    I have a confession to make.  I'm a fanboy of a particular GPU manufacturer... whichever one is making the best card for the best price.
  • Xyire Member Uncommon Posts: 152
    Malabooga said:
    Xyire said:
    There is the point that dx12 isn't well adopted yet as its pretty new.  Looks like nvidia cards will suffer comparatively in dx12 games for now.  Of course, by the next gen card that presumably will have async capabilities, dx12 will be much more mainstream.  Not being the faster card on dx12 games for 1 card generation isn't going to hurt them - it will only matter if  it keeps up... especially if they are faster on dx11 this generation.

    I think most people don't buy nvidia cards cause they are the fastest cards anyway, they usually buy them because they trust the brand more.
    Nvidia doesn't gain or lose anything with DX12; for now it performs the same on DX11 and DX12. AMD gains up to 20%. However, NVidia actually loses performance with async compute because the overhead of software emulation costs more than it gains.

    DX12 is gaining huge adoption, pretty much all AAA games i have heard of coming out in 2016 will be dx12. And MS wants it badly and MS>>>>NVidia. Also 50m consoles use API like DX12/Vulkan and async compute (thats why games work much better on consoles than on comparable PC hardware)

    Win10 is pretty much no.1 OS for gamers (surpassed win7 in march)

    http://store.steampowered.com/hwsurvey/directx/

    And Vulkan is the same as its based on Mantle and brings best parts (less CPU overhead and async compute) to the table - as DX12 does. NVidia wont have DX12 GPU until 2018. Thats a looooooooong time. AMD lost 20% of market in 1 generation of GPUs, basically in less than a year.

    People buy NVidia cards because of PR. Thats one area where NVidia is superior (and they have money to cheat). But that has also backfired on them, like gimpworks. You just cannot push nonsensical stuff on people. Natural selection will keep the good stuff and discard the rest. Pretty much the only gimpworks feature that has seen wider usage is HBAO. The rest has pretty much been discarded by devs once they saw how badly it performs on ANY hardware (khm *hairworks* khm) and they cannot do anything about it since it is a black box provided by NVidia.

    What dev wants to see its game drag itself at 1080p on a $600+ GPU because of 1 tick? Its bad for their business.
    While I agree dx12 will get adopted widely, currently few games support it.  All new games may adopt dx12, but the vast majority of games out there will still be dx11 as most older games will slowly convert or choose not to convert at all (depending on the style of game).

    Transitions take time, dx12 won't be in the majority of games for a while.  I just don't think the average gamer buying a card will be affected enough by comparative slowness that's limited to dx12 to really put a dent in nvidia market share.  

    As for consoles, unless a new console is coming out soon, I doubt there will be hardware changes associated with this. So consoles won't affect market shares for a while.

    Overall I think that depending on what nvidia does in the future, this may not affect them much at all.  Of course if they're too slow to get a competitive async card up and running they could tank horridly, but I doubt that's likely.
  • Malabooga Member Uncommon Posts: 2,977
    edited March 2016
    H0urg1ass said:
    I have a confession to make.  I'm a fanboy of a particular GPU manufacturer... whichever one is making the best card for the best price.
    Thats OK if youre objective about it and dont move goalposts on every turn.

    For instance:

    Power efficiency was "most important GPU feature EVER"

    now that you have to heavily OC NVidia's GPUs it's the other way around (extreme case: 980 Ti and Titan X vs Fury X); when you have to put huge OCs/waterblocks on NVidia's GPUs (which adds a hefty premium cost AND makes them draw 400-500W) to match Fury, it becomes "its all about performance and who cares about power draw, only peasants care about it"

    Yah, it would be nice to have majority on objective side and with real information but its rarely so :)

    And i dont think people realize what it really means. How will NVidia fare if they have to sell 980 at price of 390 and 970 at price of 380?
  • Xyire Member Uncommon Posts: 152
    edited March 2016
    And i dont think people realize what it really means. How will NVidia fare if they have to sell 980 at price of 390 and 970 at price of 380?
    I'm pretty sure that's just not happening.  Nvidia will just quote dx11 performance and continue business as usual.  As you said previously, their marketing is strong.
  • Hrimnir Member Rare Posts: 2,413
    edited March 2016
    As usual a bunch of people hand wringing and waving their arms about over what is basically nothing.

    This article BTW is over 6 months old:

    http://www.extremetech.com/extreme/213519-asynchronous-shading-amd-nvidia-and-dx12-what-we-know-so-far

    From about half an hour of cursory research, this whole thing comes from Ashes of the Singularity, which even the freaking developer of the game admitted isn't the best test for this, on pre-release drivers, etc.

    I'll worry about it when there is more than one DX12 game that can verify this as being true.  Until then, it's meaningless tripe.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Xyire Member Uncommon Posts: 152
    Malabooga said:
    H0urg1ass said:
    I have a confession to make.  I'm a fanboy of a particular GPU manufacturer... whichever one is making the best card for the best price.
    Thats OK if youre objective about it and dont move goalposts on every turn.
    I think part of the problem here is that what people value in a card actually differs.  You may be hearing different stories from different people.  For instance, Amd cards tend to be faster and cheaper than nvidia.  They also tend to have more driver problems.  

    If you value fast and cheap, AMD is the best card out there.  If you value decent performance with no driver issues for more money, Nvidia's cards would be the best out there.

    Obviously in your example, the first person valued power consumption while the second total max speed.  I find it unlikely that those 2 statements came from a single individual.

    Disclaimer: my beliefs about amd being faster and cheaper with more driver problems come from my personal experience with amd and nvidia cards.
  • Malabooga Member Uncommon Posts: 2,977
    edited March 2016
    Lol, nope, you see, both have driver problems. Thats part of the PR BS thats floating around. Ive actually started a personal list of games i own and play that have NVidia driver related issues on my local forum, because im fed up listening about "super drivers". Yeah, no lol. If you dont want driver issues buy a console.

    And actually one guy on this site said he had changed over to Fermi because his ATI card was running hot.

    In my example its same group of people. Even on this site few people still say that. Especially funny when people talk about power efficiency and then link benchmarks with heavily OCed cards.
  • Malabooga Member Uncommon Posts: 2,977
    edited March 2016
    Hrimnir said:
    As usual a bunch of people hand wringing and waving their arms about over what is basically nothing.

    This article BTW is over 6 months old:

    http://www.extremetech.com/extreme/213519-asynchronous-shading-amd-nvidia-and-dx12-what-we-know-so-far

    From about half an hour of cursory research, this whole thing comes from Ashes of the Singularity, which even the freaking developer of the game admitted isn't the best test for this, on pre-release drivers, etc.

    I'll worry about it when there is more than one DX12 game that can verify this as being true.  Until then, it's meaningless tripe.
    And then 8 months later, the game releases in 1 week and NVidia has already released a game ready driver:

    https://www.guru3d.com/articles_pages/ashes_of_singularity_directx_12_benchmark_ii_review,7.html

    Yep, NVidia has BSed a lot back then, 8 months and.....nothing, DX12 games releasing.

    Have you been living under a rock?

    http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/
  • Xyire Member Uncommon Posts: 152
    Malabooga said:
    Lol, nope, you see, both have driver problems. Thats part of the PR BS thats floating around. Ive actually started a personal list of games i own and play that have NVidia driver related issues on my local forum, because im fed up listening about "super drivers". Yeah, no lol. If you dont want driver issues buy a console.

    And actually one guy on this site said he had changed over to Fermi because his ATI card was running hot.

    In my example its same group of people. Even on this site few people still say that. Especially funny when people talk about power efficiency and then link benchmarks with heavily OCed cards.
    As I said, my opinions are entirely based on my personal experience so your cry of pr bs holds no weight.  With the 7 Nvidia cards my friends and I have owned, we never once had a driver problem.  With the 5 AMD cards my friends and I have owned, I've had driver or overheating (caused by a driver update in 1 case) problems with EVERY card.

    Maybe my experience is not a representative sample, but my opinions are backed with first hand data.  Trying to say that this is PR BS is actually pushing a pro AMD agenda.  It's ok when you do it? I think everyone should stick to facts and not say dissenters are wrong because of their personal preferences.

    I think I was very fair to both parties in my statements.  
  • Malabooga Member Uncommon Posts: 2,977
    edited March 2016
    Pushing pro AMD agenda? rofl I have NVidia GPU currently.

    in 7 nvidia cards you didnt have a single problem? Riiiiiiiiiiiiiiiiiiiiiiiiiiiiight.

    You werent fair. Just go to nvidias forums to see how their drivers fare. Thats FACTS. Your personal preferences arent built on facts.

    Just the latest:

    http://www.theinquirer.net/inquirer/news/2450200/nvidia-geforce-36447-gpu-driver-is-borking-gamers-pcs

    http://techfrag.com/2016/03/09/nvidia-364-51-drivers-didnt-root-out-all-problems-crashes-and-display-glitches-persist/

    and "recommeded rollback to 362.00 as stable"

    facts

    On top of that pile is the irony that "game ready drivers" didn't do squat for performance in the "game ready" games they listed, so it was a pointless upgrade, a driver released for the sake of releasing drivers, pure PR.
  • Dahkoht Member Uncommon Posts: 479
    "This is the year AMD takes it to Nvidia"

    Said every year for a long time...... 

    Yet one's stock is worth over 30, while the other can't break 3




  • Quizzical Member Legendary Posts: 22,115
    Dahkoht said:
    "This is the year AMD takes it to Nvidia"

    Said every year for a long time...... 

    Yet one's stock is worth over 30, while the other can't break 3




    AMD's financial troubles are due to Bulldozer being a disaster and then its successors not being competitive.  If Zen is great, AMD will be fine, no matter how bad Polaris is.  And if Zen is a Bulldozer-scale disaster, AMD will be headed for bankruptcy even if Polaris is massively better than Pascal in every way you can think of and most of the ones you can't.

    Nvidia having a clear advantage over AMD in discrete GPUs is a recent phenomenon, really starting with Maxwell about a year and a half ago in desktops, and a little before that in laptops.  We're not that far removed from AMD being clearly ahead for a few years, starting at least by the launch of Cypress in 2009 if not RV770 in 2008, and not ending until Kepler arrived in 2012.  Every generation is a new chance to compete, and both Nvidia and AMD/ATI have won quite a few over the years.
  • Xyire Member Uncommon Posts: 152
    Malabooga said:
    Pushing pro AMD agenda? rofl I have NVidia GPU currently.

    in 7 nvidia cards you didnt have a single problem? Riiiiiiiiiiiiiiiiiiiiiiiiiiiiight.

    You werent fair. Just go to nvidias forums to see how their drivers fare. Thats FACTS. Your personal preferences arent built on facts.

    Just the latest:

    http://www.theinquirer.net/inquirer/news/2450200/nvidia-geforce-36447-gpu-driver-is-borking-gamers-pcs

    http://techfrag.com/2016/03/09/nvidia-364-51-drivers-didnt-root-out-all-problems-crashes-and-display-glitches-persist/

    and "recommeded rollback to 362.00 as stable"

    facts

    On top of that pile is the irony that "game ready drivers" didn't do squat for performance in the "game ready" games they listed, so it was a pointless upgrade, a driver released for the sake of releasing drivers, pure PR.
    Regardless of what you think, what I stated is indeed true.  I'm not parroting stuff I've heard... this is my actual experience.  Yours may differ but my experience is as valid as anyone else's.  You should check your attitude :)
  • Ridelynn Member Epic Posts: 7,060
    edited March 2016
    Xyire said:
    As I said, my opinions are entirely based on my personal experience so your cry of pr bs holds no weight.  With the 7 Nvidia cards my friends and I have owned, we never once had a driver problem.  With the 5 AMD cards my friends and I have owned, I've had driver or overheating (caused by a driver update in 1 case) problems with EVERY card.
    A couple of interesting things to note:

    Except in rare circumstances, the GPU manufacturer doesn't make the card. They only provide a reference design and the GPU itself, and leave it to third parties to actually manufacture the cards. So if you're comparing a "good" manufacturer of Type Green versus a "bad" manufacturer of Type Red - that really isn't nVidia versus AMD at that point, that's card manufacturer vs card manufacturer. "Overheating" is very commonly an overly aggressive overclock or improper installation (the fault of the user), or the fault of the manufacturer in a failing cooling system, which is often covered under warranty.

    Both AMD and nVidia have driver problems. Both have caused hardware issues with driver updates at some point in their past. Both have had periods where they have failed to provide timely updates. I pretty much discredit anyone on sight once they pull the "AMD driver" card, because that hasn't been true in so long that it isn't even slightly funny any more.

    It's also very easy to compare, say, an R9 390X versus a GTX 950 and just throw out a blanket statement that Team Red runs hotter or noisier than Team Green. That falls squarely in the "No sh#t Sherlock" category: a 200W card is gonna put out more heat than a 50W card, and that's true no matter who makes the chip. And you can't pull the "AMD always runs hotter/louder" card on that either, lest you forget Fermi.
  • Torval Member Legendary Posts: 19,986
    edited March 2016
    I love my GTX970. It's quiet, seems to run cool, and I like the lower TDP compared to its AMD peers. But to say that Nvidia hasn't had driver problems, especially since the 900 series, isn't very honest. I've seen some annoying driver issues. They've been resolved fairly quickly, but especially the early drivers, along with GeForce Experience, were pretty rough.

    While I mostly use Nvidia, I'm not a team green guy. I've had good and junk cards by both makers in the past. I would love to see AMD really push Nvidia this generation. It would only benefit us.

    I love that AMD is an early adopter for HBM. The sooner that configuration matures and drops in cost the better. HBM2 is going to be huge. I would love for my next card to be an AMD powerhouse.
    Fedora - A modern, free, and open source Operating System. https://getfedora.org/

    traveller, interloper, anomaly, iteration


  • wanderica Member Uncommon Posts: 366
    edited March 2016
    Don't forget too that DX12 puts a lot of the performance back in the hands of the developer (see the sketch at the end of this post).  DX12 benchmarks are going to be all over the map for a while.  We're already seeing it.  Tomb Raider benchmarks actually got WORSE on DX12, and that shouldn't happen.

    Last thing of note, and this is very recent news, is that nVidia has made the GameWorks source code public.  This is huge for AMD.  GameWorks has been a very large thorn in ATi's side for a while now, but with it going public, they (AMD) have no excuse.  I have no doubt that nVidia will figure Async Compute out in due time.  It looks like the GPU wars are heating up again, and I think that's a great thing.  Being stuck at 28 nm for so long forced both camps to push the boundaries of what they were capable of, and it's been cutthroat and weird for a while.  14 / 16 nm, and all that comes with it, will benefit us gamers greatly.
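    On the first point above: in DX12 even cross-queue ordering is the developer's job. A minimal sketch of the kind of synchronization DX11 drivers used to handle implicitly (C++; the helper name is ours):

    #include <d3d12.h>

    // Make the compute queue wait, GPU-side, until the graphics queue has
    // passed the given fence value. Get a fence value wrong and you ship the
    // stalls or corruption that driver-managed DX11 used to hide, which is
    // one way a DX12 build ends up slower than its DX11 build.
    void SyncQueues(ID3D12CommandQueue* gfx,
                    ID3D12CommandQueue* compute,
                    ID3D12Fence* fence, UINT64 value)
    {
        gfx->Signal(fence, value);    // graphics marks its progress
        compute->Wait(fence, value);  // compute holds until that point
    }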


  • H0urg1ass Member Epic Posts: 2,380
    Malabooga said:
    H0urg1ass said:
    I have a confession to make.  I'm a fanboy of a particular GPU manufacturer... whichever one is making the best card for the best price.
    Thats OK if youre objective about it and dont move goalposts on every turn.

    For instance:

    Power efficiency was "most important GPU feature EVER"

    now that you have to heavily OC NVidia's GPUs it's the other way around (extreme case: 980 Ti and Titan X vs Fury X); when you have to put huge OCs/waterblocks on NVidia's GPUs (which adds a hefty premium cost AND makes them draw 400-500W) to match Fury, it becomes "its all about performance and who cares about power draw, only peasants care about it"

    Yah, it would be nice to have majority on objective side and with real information but its rarely so :)

    And i dont think people realize what it really means. How will NVidia fare if they have to sell 980 at price of 390 and 970 at price of 380?
    What goalpost?  I check the Tom's Hardware and Overclockers sites and see what the new cards are benchmarking on the games I want to play.  Then I choose whichever one is best at those games for the price.

    Yes, I do OC my CPU's and GPU's, but I was gonna do that anyways whether or not it puts one card over another.  Why run a 4570K at 3.5 when you can run it at 4.8?  The performance difference is huge.

    I've swapped back and forth between manufacturers whenever it suited my needs.  This whole being locked into one brand is about as retarded as Team Jacob and Team Edward.  I'm on team me, and I pick what's best for my system.
  • Malabooga Member Uncommon Posts: 2,977
    edited March 2016
    H0urg1ass said:
    Malabooga said:
    H0urg1ass said:
    I have a confession to make.  I'm a fanboy of a particular GPU manufacturer... whichever one is making the best card for the best price.
    Thats OK if youre objective about it and dont move goalposts on every turn.

    For instance:

    Power efficiency was "most important GPU feature EVER"

    now that you have to heavily OC NVidia's GPUs it's the other way around (extreme case: 980 Ti and Titan X vs Fury X); when you have to put huge OCs/waterblocks on NVidia's GPUs (which adds a hefty premium cost AND makes them draw 400-500W) to match Fury, it becomes "its all about performance and who cares about power draw, only peasants care about it"

    Yah, it would be nice to have majority on objective side and with real information but its rarely so :)

    And i dont think people realize what it really means. How will NVidia fare if they have to sell 980 at price of 390 and 970 at price of 380?
    What goalpost?  I check the Tom's Hardware and Overclockers sites and see what the new cards are benchmarking on the games I want to play.  Then I choose whichever one is best at those games for the price.

    Yes, I do OC my CPU's and GPU's, but I was gonna do that anyways whether or not it puts one card over another.  Why run a 4570K at 3.5 when you can run it at 4.8?  The performance difference is huge.

    I've swapped back and forth between manufacturers whenever it suited my needs.  This whole being locked into one brand is about as retarded as Team Jacob and Team Edward.  I'm on team me, and I pick what's best for my system.
    I dont mean you, just people in general. There are actually people here that think OCing is bad and will make your computer explode :) So be extra careful and wear protective clothes!

    The number of times ive seen maxwell power draw used....and i have NEVER seen a disclaimer that your GPU will run much slower than in 99% of benchmarks out there, which all use heavily OCed cards that use 50% more power (and produce accompanying heat) than advertised. The ITX GTX970 throttles quite a bit for instance. If Maxwell were so great it would run without problems with such a cooling solution. But in reality it doesnt. AMD and NVidia cooling solutions are pretty much the same.

    Thats why i insist on objectivity and real information instead of the PR that is floating around. You can certainly undervolt/underclock an AMD card to use much less power (and run slower) than they do. Does that make them better than NVidia? lol (in fact you can undervolt AMD cards quite a bit without performance loss; just the other day we instructed one layman how to do it on his new R9 390 and he was able to push -100mV (-10%), which makes his 390 draw 190W without any performance loss. When you know that you need ~210W on a GTX970 to match that performance level in DX11....kinda puts it in perspective. DX12 is a whole other level). The 390 just having 4.5 GB more VRAM than the GTX970 is a flat +25W (see the back-of-envelope check at the end of this post).

    Would you now, when you have facts, champion "power efficiency"? Would anyone? And yet they do all the time.

    Ridelynn said:
     I pretty much discredit anyone on sight once they pull the "AMD driver" card, because that hasn't been true in so long that it isn't even slightly funny any more.

    This.

    And for AIBs, they got fully free rein over GPU production around Fermi; before that, distributors just put their stickers on GPUs and were allowed to do custom designs only on rare occasions, and if you wanted a 3rd party cooling solution you had to install it yourself (the legendary Accelero). But Fermi ran so hot that they said "OK guys, you can do what you want, just make it work...somehow" :)

    And they did, and since then cooling and heat have not been an issue unless the AIB screws it up (a few did on occasion, like Asus, XFX and Gainward)
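    A back-of-envelope check on the undervolting numbers in the post above, using the usual approximation that dynamic power scales with voltage squared at fixed clocks; the stock voltage and board power are assumptions for illustration:

    #include <cstdio>

    int main()
    {
        const double stock_watts = 230.0;  // assumed stock board power, R9 390
        const double v_stock     = 1.20;   // assumed stock core voltage (V)
        const double v_under     = 1.10;   // after the -100mV offset

        const double ratio = (v_under / v_stock) * (v_under / v_stock); // ~0.84
        std::printf("estimated draw after undervolt: %.0f W\n",
                    stock_watts * ratio);  // ~193 W
        // Close to the ~190W figure above: the quadratic voltage term is why
        // a modest undervolt buys a disproportionately large power saving.
        return 0;
    }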