Rumor: Nvidia’s Pascal Architecture Is In Trouble With Asynchronous Compute

The user and all related content have been deleted.

A turtle doesn't move when it sticks its neck out.

Comments

  • Malabooga, Member Uncommon, Posts: 2,977
    edited March 2016
    This is the thing:

    When you need 20% more powerful hardware to compete because your own hardware is less efficient... you're in trouble. You can argue that NVidia has enough money to sell cards at a loss, but that's pretty bad for them.

    The second thing is that if they bring back FP64-capable chips, a huge chunk of their power efficiency is gone (the main reason they used somewhat less power is that they cut out portions of the chip, namely the FP64 double-precision units). It's also why they still use Kepler in some of their professional GPUs, as Kepler isn't a gimped chip.

    And I said this same thing almost a year ago, as soon as Stardock went public with the info (in spite of being threatened and blackmailed by NVidia, who still claimed their cards can do async). In fact, this could be another class-action lawsuit against NVidia, just like the 3.5GB GTX 970.
  • Xyire, Member Uncommon, Posts: 152
    There is the point that DX12 isn't well adopted yet, as it's pretty new.  Looks like Nvidia cards will suffer comparatively in DX12 games for now.  Of course, by the next-gen card that presumably will have async capabilities, DX12 will be much more mainstream.  Not being the faster card in DX12 games for one card generation isn't going to hurt them - it will only matter if it keeps up... especially if they are faster on DX11 this generation.

    I think most people don't buy Nvidia cards because they are the fastest cards anyway; they usually buy them because they trust the brand more.
  • pingo, Member Uncommon, Posts: 608
    As someone who has always been an Nvidia guy, this could be good news if ATi gets a massive push. We need competition and a viable pricing war to give us benefits. Too many generations of Radeon cards have been lagging behind.


    Now I just wish they would also make mad gains in the CPU space. If AMD chips could somehow close the performance gap and force some competition on Intel, that would be amazing.
  • Malabooga, Member Uncommon, Posts: 2,977
    edited March 2016
    Xyire said:
    There is the point that DX12 isn't well adopted yet, as it's pretty new.  Looks like Nvidia cards will suffer comparatively in DX12 games for now.  Of course, by the next-gen card that presumably will have async capabilities, DX12 will be much more mainstream.  Not being the faster card in DX12 games for one card generation isn't going to hurt them - it will only matter if it keeps up... especially if they are faster on DX11 this generation.

    I think most people don't buy Nvidia cards because they are the fastest cards anyway; they usually buy them because they trust the brand more.
    Nvidia doesn't gain or lose anything with DX12; for now it's the same on DX11 and DX12. AMD gains up to 20%. However, NVidia actually loses performance with async compute, because the overhead of software emulation costs more than it gains.
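
    To make "async compute" concrete at the API level, here's a minimal C++/D3D12 sketch (illustrative only; the function name is made up and it assumes an already-created ID3D12Device). All a game does is create a second, compute-type queue next to the graphics queue and feed both; whether the GPU actually overlaps the two streams is up to the hardware and driver, and when it can't, the work gets serialized and the scheduling overhead is pure loss:

    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Create one graphics ("direct") queue and one compute queue.
    // Submitting independent command lists to both is all that
    // "async compute" means at the D3D12 API level.
    void CreateQueues(ID3D12Device* device,
                      ComPtr<ID3D12CommandQueue>& gfx,
                      ComPtr<ID3D12CommandQueue>& compute)
    {
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfx));

        D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
        cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
        device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&compute));
    }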

    DX12 is gaining huge adoption; pretty much all AAA games I have heard of coming out in 2016 will be DX12. And MS wants it badly, and MS >>>> NVidia. Also, 50m consoles use APIs like DX12/Vulkan and async compute (that's why games work much better on consoles than on comparable PC hardware).

    Win10 is pretty much the no. 1 OS for gamers (it surpassed Win7 in March):

    http://store.steampowered.com/hwsurvey/directx/

    And Vulkan is the same, as it's based on Mantle and brings the best parts (less CPU overhead and async compute) to the table, as DX12 does. NVidia won't have a DX12 GPU until 2018. That's a looooooooong time. AMD lost 20% of the market in one generation of GPUs, basically in less than a year.

    People buy NVidia cards because of PR. That's one area where NVidia is superior (and they have money to cheat). But that has also backfired on them, like gimpworks. You just cannot push nonsensical stuff on people; natural selection will keep the good stuff and discard the rest. Pretty much the only gimpworks feature that has seen wider usage is HBAO. The rest has pretty much been discarded by devs once they saw how badly it performs on ANY hardware (khm *hairworks* khm), and they cannot do anything about it since it is a black box provided by NVidia.

    What dev wants to see their game drag itself at 1080p on a $600+ GPU because of one tick box? It's bad for their business.
  • Ridelynn, Member Epic, Posts: 7,383
    Honestly,

    I'm just not that worried about async compute or DX12. Nothing I play right now uses it. Once something does come along that I want to play that supports it, if it's important to me at the time, I'll take a look at what's on the market then.

    But right now, it just seems like DX12/Vulkan/Async/Mantle whatever has been "ready to go" for a long time now, since even before Win10. And a few titles support it, but not many. And of those, most don't run it all that well (cough* ARK *cough).

    Don't get me wrong, I giggle a bit because it's one area where AMD beats the snot out of nVidia, and I'm a fan of the underdog. But it just isn't that important right now, especially since Pascal, for all intents and purposes, is still firmly in the "Vaporware" category.
  • Leon1e, Member Uncommon, Posts: 791
    edited March 2016
    Idk what those guys at WCCFTech are smoking, but Nvidia actually supports fewer features in DX12 than AMD. In part because AMD actually has experience with this kind of thing (e.g. Mantle, which has HUGE weight in the DX12 specification, and Vulkan is basically Mantle 2.0).

    Here's a table: http://i.imgur.com/aAqqZYo.png
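
    For anyone who wants to see where their own card lands on that table, a rough C++/D3D12 sketch (the function name is made up and it assumes an already-created ID3D12Device). These tier fields are a few of the spots where GCN and Maxwell report different values:

    #include <windows.h>
    #include <d3d12.h>
    #include <cstdio>

    // Query a few of the optional-feature tiers that the linked table summarizes.
    void PrintTiers(ID3D12Device* device)
    {
        D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
        if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                                  &opts, sizeof(opts))))
        {
            std::printf("Resource binding tier:      %d\n", opts.ResourceBindingTier);
            std::printf("Tiled resources tier:       %d\n", opts.TiledResourcesTier);
            std::printf("Conservative raster tier:   %d\n", opts.ConservativeRasterizationTier);
            std::printf("Rasterizer-ordered views:   %s\n", opts.ROVsSupported ? "yes" : "no");
        }
    }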
  • H0urg1ass, Member Epic, Posts: 2,380
    I have a confession to make.  I'm a fanboy of a particular GPU manufacturer... whichever one is making the best card for the best price.
  • Xyire, Member Uncommon, Posts: 152
    Malabooga said:
    Xyire said:
    There is the point that DX12 isn't well adopted yet, as it's pretty new.  Looks like Nvidia cards will suffer comparatively in DX12 games for now.  Of course, by the next-gen card that presumably will have async capabilities, DX12 will be much more mainstream.  Not being the faster card in DX12 games for one card generation isn't going to hurt them - it will only matter if it keeps up... especially if they are faster on DX11 this generation.

    I think most people don't buy Nvidia cards because they are the fastest cards anyway; they usually buy them because they trust the brand more.
    Nvidia doesn't gain or lose anything with DX12; for now it's the same on DX11 and DX12. AMD gains up to 20%. However, NVidia actually loses performance with async compute, because the overhead of software emulation costs more than it gains.

    DX12 is gaining huge adoption; pretty much all AAA games I have heard of coming out in 2016 will be DX12. And MS wants it badly, and MS >>>> NVidia. Also, 50m consoles use APIs like DX12/Vulkan and async compute (that's why games work much better on consoles than on comparable PC hardware).

    Win10 is pretty much the no. 1 OS for gamers (it surpassed Win7 in March):

    http://store.steampowered.com/hwsurvey/directx/

    And Vulkan is the same, as it's based on Mantle and brings the best parts (less CPU overhead and async compute) to the table, as DX12 does. NVidia won't have a DX12 GPU until 2018. That's a looooooooong time. AMD lost 20% of the market in one generation of GPUs, basically in less than a year.

    People buy NVidia cards because of PR. That's one area where NVidia is superior (and they have money to cheat). But that has also backfired on them, like gimpworks. You just cannot push nonsensical stuff on people; natural selection will keep the good stuff and discard the rest. Pretty much the only gimpworks feature that has seen wider usage is HBAO. The rest has pretty much been discarded by devs once they saw how badly it performs on ANY hardware (khm *hairworks* khm), and they cannot do anything about it since it is a black box provided by NVidia.

    What dev wants to see their game drag itself at 1080p on a $600+ GPU because of one tick box? It's bad for their business.
    While I agree DX12 will get adopted widely, currently few games support it.  All new games may adopt DX12, but the vast majority of games out there will still be DX11, as most older games will convert slowly or choose not to convert at all (depending on the style of game).

    Transitions take time; DX12 won't be in the majority of games for a while.  I just don't think the average gamer buying a card will be affected enough by comparative slowness that's limited to DX12 to really put a dent in Nvidia's market share.

    As for consoles, unless a new console is coming out soon, I doubt there will be hardware changes associated with this. So consoles won't affect market shares for a while.

    Overall I think that, depending on what Nvidia does in the future, this may not affect them much at all.  Of course, if they're too slow to get a competitive async card up and running they could tank horridly, but I doubt that will happen.
  • Malabooga, Member Uncommon, Posts: 2,977
    edited March 2016
    H0urg1ass said:
    I have a confession to make.  I'm a fanboy of a particular GPU manufacturer... whichever one is making the best card for the best price.
    That's OK if you're objective about it and don't move the goalposts at every turn.

    For instance:

    Power efficiency was "most important GPU feature EVER"

    Now, when you have to heavily OC NVidia's GPUs, it's the other way around (extreme case: the 980 Ti and Titan X vs. Fury X). When you have to put huge OCs/waterblocks on NVidia's GPUs (which adds a hefty premium cost AND makes them draw 400-500W) to match Fury, it becomes "it's all about performance, and who cares about power draw, only peasants care about it."

    Yeah, it would be nice to have the majority on the objective side, with real information, but it's rarely so :)

    And I don't think people realize what it really means. How will NVidia fare if they have to sell the 980 at the price of a 390 and the 970 at the price of a 380?
  • Xyire, Member Uncommon, Posts: 152
    edited March 2016
    And I don't think people realize what it really means. How will NVidia fare if they have to sell the 980 at the price of a 390 and the 970 at the price of a 380?
    I'm pretty sure that's just not happening.  Nvidia will just quote dx11 performance and continue business as usual.  As you said previously, their marketing is strong.
  • Hrimnir, Member Rare, Posts: 2,415
    edited March 2016
    As usual, a bunch of people hand-wringing and waving their arms about what is basically nothing.

    This article BTW is over 6 months old:

    http://www.extremetech.com/extreme/213519-asynchronous-shading-amd-nvidia-and-dx12-what-we-know-so-far

    From about half an hour of cursory research, this whole thing comes from Ashes of the Singularity, which even the freaking developer of the game admitted isn't the best test for this, on pre-release drivers, etc.

    I'll worry about it when there is more than one DX12 game that can verify this as being true.  Until then, it's meaningless tripe.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Xyire, Member Uncommon, Posts: 152
    Malabooga said:
    H0urg1ass said:
    I have a confession to make.  I'm a fanboy of a particular GPU manufacturer... whichever one is making the best card for the best price.
    That's OK if you're objective about it and don't move the goalposts at every turn.
    I think part of the problem here is that what people value in a card actually differs.  You may be hearing different stories from different people.  For instance, AMD cards tend to be faster and cheaper than Nvidia's.  They also tend to have more driver problems.

    If you value fast and cheap, AMD is the best card out there.  If you value decent performance with no driver issues for more money, Nvidia's cards would be the best out there.

    Obviously in your example, the first person valued power consumption while the second valued total max speed.  I find it unlikely that those 2 statements came from a single individual.

    Disclaimer: my beliefs about amd being faster and cheaper with more driver problems come from my personal experience with amd and nvidia cards.
  • Malabooga, Member Uncommon, Posts: 2,977
    edited March 2016
    Lol, nope, you see, both have driver problems. That's part of the PR BS that's floating around. I've actually started a personal list, on my local forum, of games I own and play that have NVidia driver-related issues, because I'm fed up with hearing about "super drivers". Yeah, no lol. If you don't want driver issues, buy a console.

    And actually one guy on this site said he had changed over to Fermi because his ATI card was running hot.

    In my example it's the same group of people. Even on this site a few people still say that. It's especially funny when people talk about power efficiency and then link benchmarks with heavily OCed cards.
  • Malabooga, Member Uncommon, Posts: 2,977
    edited March 2016
    Hrimnir said:
    As usual, a bunch of people hand-wringing and waving their arms about what is basically nothing.

    This article BTW is over 6 months old:

    http://www.extremetech.com/extreme/213519-asynchronous-shading-amd-nvidia-and-dx12-what-we-know-so-far

    From about half an hour of cursory research, this whole thing comes from Ashes of the Singularity, which even the freaking developer of the game admitted isn't the best test for this, on pre-release drivers, etc.

    I'll worry about it when there is more than one DX12 game that can verify this as being true.  Until then, it's meaningless tripe.
    And then, 8 months later, the game releases in one week and NVidia has already released a game-ready driver:

    https://www.guru3d.com/articles_pages/ashes_of_singularity_directx_12_benchmark_ii_review,7.html

    Yep, NVidia BSed a lot back then. Eight months and..... nothing, while DX12 games are releasing.

    Have you been living under a rock?

    http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/
  • Xyire, Member Uncommon, Posts: 152
    Malabooga said:
    Lol, nope, you see, both have driver problems. That's part of the PR BS that's floating around. I've actually started a personal list, on my local forum, of games I own and play that have NVidia driver-related issues, because I'm fed up with hearing about "super drivers". Yeah, no lol. If you don't want driver issues, buy a console.

    And actually one guy on this site said he had changed over to Fermi because his ATI card was running hot.

    In my example it's the same group of people. Even on this site a few people still say that. It's especially funny when people talk about power efficiency and then link benchmarks with heavily OCed cards.
    As I said, my opinions are entirely based on my personal experience, so your cry of PR BS holds no weight.  Across the 7 Nvidia cards my friends and I have owned, we never once had a driver problem.  With the 5 AMD cards my friends and I have owned, I've had driver or overheating (caused by a driver update in one case) problems with EVERY card.

    Maybe my experience is not a representative sample, but my opinions are backed with first-hand data.  Trying to say that this is PR BS is actually pushing a pro-AMD agenda.  It's OK when you do it? I think everyone should stick to facts and not say dissenters are wrong because of their personal preferences.

    I think I was very fair to both parties in my statements.  
  • Malabooga, Member Uncommon, Posts: 2,977
    edited March 2016
    Pushing a pro-AMD agenda? rofl, I have an NVidia GPU currently.

    In 7 Nvidia cards you didn't have a single problem? Riiiiiiiiiiiiiiiiiiiiiiiiiiiiight.

    You weren't fair. Just go to NVidia's forums to see how their drivers fare. Those are FACTS. Your personal preferences aren't built on facts.

    Just the latest:

    http://www.theinquirer.net/inquirer/news/2450200/nvidia-geforce-36447-gpu-driver-is-borking-gamers-pcs

    http://techfrag.com/2016/03/09/nvidia-364-51-drivers-didnt-root-out-all-problems-crashes-and-display-glitches-persist/

    and "recommeded rollback to 362.00 as stable"

    facts

    On top of that pile is the irony that the "game ready drivers" didn't do squat for performance in the "game ready" games they listed, so it was a pointless upgrade: a driver released for the sake of releasing drivers, pure PR.
  • Dahkoht, Member Uncommon, Posts: 479
    "This is the year AMD takes it to Nvidia"

    Said every year for a long time...... 

    Yet one's stock is worth over 30, while the other's can't break 3.
  • Quizzical, Member Legendary, Posts: 25,351
    Dahkoht said:
    "This is the year AMD takes it to Nvidia"

    Said every year for a long time...... 

    Yet one's stock is worth over 30, while the other's can't break 3.
    AMD's financial troubles are due to Bulldozer being a disaster and then its successors not being competitive.  If Zen is great, AMD will be fine, no matter how bad Polaris is.  And if Zen is a Bulldozer-scale disaster, AMD will be headed for bankruptcy even if Polaris is massively better than Pascal in every way you can think of and most of the ones you can't.

    Nvidia having a clear advantage over AMD in discrete GPUs is a recent phenomenon, really starting with Maxwell about a year and a half ago in desktops, and a little before that in laptops.  We're not that far removed from AMD being clearly ahead for a few years, starting at least by the launch of Cypress in 2009 if not RV770 in 2008, and not ending until Kepler arrived in 2012.  Every generation is a new chance to compete, and both Nvidia and AMD/ATI have won quite a few over the years.
  • Xyire, Member Uncommon, Posts: 152
    Malabooga said:
    Pushing a pro-AMD agenda? rofl, I have an NVidia GPU currently.

    In 7 Nvidia cards you didn't have a single problem? Riiiiiiiiiiiiiiiiiiiiiiiiiiiiight.

    You weren't fair. Just go to NVidia's forums to see how their drivers fare. Those are FACTS. Your personal preferences aren't built on facts.

    Just the latest:

    http://www.theinquirer.net/inquirer/news/2450200/nvidia-geforce-36447-gpu-driver-is-borking-gamers-pcs

    http://techfrag.com/2016/03/09/nvidia-364-51-drivers-didnt-root-out-all-problems-crashes-and-display-glitches-persist/

    and "recommeded rollback to 362.00 as stable"

    facts

    On top of that pile is the irony that the "game ready drivers" didn't do squat for performance in the "game ready" games they listed, so it was a pointless upgrade: a driver released for the sake of releasing drivers, pure PR.
    Regardless of what you think, what I stated is indeed true.  I'm not parroting stuff I've heard... this is my actual experience.  Yours may differ but my experience is as valid as anyone else's.  You should check your attitude :)
  • Ridelynn, Member Epic, Posts: 7,383
    edited March 2016
    Xyire said:
    As I said, my opinions are entirely based on my personal experience, so your cry of PR BS holds no weight.  Across the 7 Nvidia cards my friends and I have owned, we never once had a driver problem.  With the 5 AMD cards my friends and I have owned, I've had driver or overheating (caused by a driver update in one case) problems with EVERY card.
    A couple of interesting things to note:

    Except in rare circumstances, the GPU manufacturer doesn't make the card. They only provide a reference design and the GPU itself, and leave it to third parties to actually manufacture the cards. So if you're comparing a "good" manufacturer of Type Green versus a "bad" manufacturer of Type Red - that really isn't nVidia versus AMD at that point, that's card manufacturer vs card manufacturer. "Overheating" is very commonly an overly aggressive overclock or improper installation (the fault of the user), or the fault of the manufacturer with a failing cooling system, which is often covered under warranty.

    Both AMD and nVidia have driver problems. Both have caused hardware issues with driver updates at some point in their past. Both have had periods where they have failed to provide timely updates. I pretty much discredit anyone on sight once they pull the "AMD driver" card, because that hasn't been true in so long that it isn't even slightly funny any more.

    It's also very easy to compare, say, an R7 390X versus a GT 950 and just throw out a blanket statement that Team Red runs hotter or noisier than Team Green. That falls squarely in the "no sh#t, Sherlock" category: a 200W card is gonna put out more heat than a 50W card, and that's true no matter who makes the chip. And you can't pull the "AMD always runs hotter/louder" card on that either, lest you forget Fermi.
  • wanderica, Member Uncommon, Posts: 370
    edited March 2016
    Don't forget too that DX12 puts a lot of the performance back in the hands of the developer.  DX12 benchmarks are going to be all over the map for a while.  We're already seeing it.  Tomb Raider benchmarks actually got WORSE on DX12, and that shouldn't happen. 
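
    To make "in the hands of the developer" concrete: under DX11 the driver ordered and synchronized GPU work behind the scenes, while under DX12 the engine owns that with explicit fences. A minimal C++/D3D12 sketch follows (illustrative only; the function name is made up and it assumes the device and both queues already exist), and placing waits like this too conservatively is exactly how a DX12 port can come out slower than its DX11 version:

    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Make the compute queue wait until the graphics queue passes a marker.
    // Under DX11 the driver inserted this kind of sync for you; under DX12
    // it is the developer's job, and over-syncing throws performance away.
    void SyncComputeAfterGraphics(ID3D12Device* device,
                                  ID3D12CommandQueue* gfx,
                                  ID3D12CommandQueue* compute)
    {
        ComPtr<ID3D12Fence> fence;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
        gfx->Signal(fence.Get(), 1);     // GPU sets the fence to 1 when gfx work completes
        compute->Wait(fence.Get(), 1);   // compute queue stalls until the fence reaches 1
    }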

    Last thing of note, and this is very recent news: nVidia has made the GameWorks source code public.  This is huge for AMD.  GameWorks has been a very large thorn in ATi's side for a while now, but with it going public, they (AMD) have no excuse.  I have no doubt that nVidia will figure async compute out in due time.  It looks like the GPU wars are heating up again, and I think that's a great thing.  Being stuck at 28 nm for so long forced both camps to push the boundaries of what they were capable of, and it's been cutthroat and weird for a while.  14 / 16 nm, and all that comes with it, will benefit us gamers greatly.


  • H0urg1ass, Member Epic, Posts: 2,380
    Malabooga said:
    H0urg1ass said:
    I have a confession to make.  I'm a fanboy of a particular GPU manufacturer... whichever one is making the best card for the best price.
    That's OK if you're objective about it and don't move the goalposts at every turn.

    For instance:

    Power efficiency was "most important GPU feature EVER"

    Now, when you have to heavily OC NVidia's GPUs, it's the other way around (extreme case: the 980 Ti and Titan X vs. Fury X). When you have to put huge OCs/waterblocks on NVidia's GPUs (which adds a hefty premium cost AND makes them draw 400-500W) to match Fury, it becomes "it's all about performance, and who cares about power draw, only peasants care about it."

    Yeah, it would be nice to have the majority on the objective side, with real information, but it's rarely so :)

    And I don't think people realize what it really means. How will NVidia fare if they have to sell the 980 at the price of a 390 and the 970 at the price of a 380?
    What goalpost?  I check the Tom's Hardware and Overclockers sites and see what the new cards are benchmarking on the games I want to play.  Then I choose whichever one is best at those games for the price.

    Yes, I do OC my CPUs and GPUs, but I was gonna do that anyway, whether or not it puts one card over another.  Why run a 4570K at 3.5 when you can run it at 4.8?  The performance difference is huge.

    I've swapped back and forth between manufacturers whenever it suited my needs.  This whole being locked into one brand is about as retarded as Team Jacob and Team Edward.  I'm on team me, and I pick what's best for my system.
  • Malabooga, Member Uncommon, Posts: 2,977
    edited March 2016
    H0urg1ass said:
    Malabooga said:
    H0urg1ass said:
    I have a confession to make.  I'm a fanboy of a particular GPU manufacturer... whichever one is making the best card for the best price.
    That's OK if you're objective about it and don't move the goalposts at every turn.

    For instance:

    Power efficiency was "most important GPU feature EVER"

    Now, when you have to heavily OC NVidia's GPUs, it's the other way around (extreme case: the 980 Ti and Titan X vs. Fury X). When you have to put huge OCs/waterblocks on NVidia's GPUs (which adds a hefty premium cost AND makes them draw 400-500W) to match Fury, it becomes "it's all about performance, and who cares about power draw, only peasants care about it."

    Yeah, it would be nice to have the majority on the objective side, with real information, but it's rarely so :)

    And I don't think people realize what it really means. How will NVidia fare if they have to sell the 980 at the price of a 390 and the 970 at the price of a 380?
    What goalpost?  I check the Tom's Hardware and Overclockers sites and see what the new cards are benchmarking on the games I want to play.  Then I choose whichever one is best at those games for the price.

    Yes, I do OC my CPUs and GPUs, but I was gonna do that anyway, whether or not it puts one card over another.  Why run a 4570K at 3.5 when you can run it at 4.8?  The performance difference is huge.

    I've swapped back and forth between manufacturers whenever it suited my needs.  This whole being locked into one brand is about as retarded as Team Jacob and Team Edward.  I'm on team me, and I pick what's best for my system.
    I don't mean you, just people in general. There are actually people here who think OCing is bad and will make your computer explode :) So be extra careful and wear protective clothes!

    The number of times I've seen Maxwell power draw cited.... and I have NEVER seen a disclaimer that your GPU will run much slower than in 99% of the benchmarks out there, which all use heavily OCed cards that draw 50% more power (and produce the accompanying heat) than advertised. The ITX GTX 970 throttles quite a bit, for instance. If Maxwell were so great, it would run without problems with such a cooling solution. But in reality it doesn't. AMD and NVidia cooling solutions are pretty much the same.

    That's why I insist on objectivity and real information instead of the PR that is floating around. You can certainly undervolt/underclock an AMD card to use much less power (and run slower) than it does stock. Does that make it better than NVidia? lol (In fact, you can undervolt AMD cards quite a bit without performance loss; just the other day we walked one layman through it on his new R9 390, and he was able to push -100mV/-10%, which makes his 390 draw 190W without any performance loss. When you know that you need ~210W on a GTX 970 to match that performance level in DX11.... it kinda puts things in perspective. DX12 is a whole other level.) The 390 just having 4.5GB more VRAM than the GTX 970 is a flat +25W.
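
    Putting those numbers side by side (the 190W and ~210W figures are the anecdotal ones above, not lab measurements), a quick back-of-the-envelope in C++:

    #include <cstdio>

    int main()
    {
        // Anecdotal figures from the post above (same performance level in DX11):
        const double undervolted_390_watts = 190.0;
        const double gtx970_watts          = 210.0;

        // Performance per watt at equal performance, normalized to the GTX 970.
        std::printf("Undervolted R9 390 perf/W vs GTX 970: %.2fx\n",
                    gtx970_watts / undervolted_390_watts);  // prints ~1.11x
        return 0;
    }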

    Would you now, when you have facts, champion "power efficiency"? Would anyone? And yet they do all the time.

    Ridelynn said:
     I pretty much discredit anyone on sight once they pull the "AMD driver" card, because that hasn't been true in so long that it isn't even slightly funny any more.

    This.

    And as for AIBs, they got fully free rein over GPU production around Fermi. Before that, distributors just put their stickers on GPUs; they were allowed to do custom designs only on rare occasions, and if you wanted a third-party cooling solution you had to install it yourself (the legendary Accelero). But Fermi ran so hot that they said "OK guys, you can do what you want, just make it work... somehow" :)

    And they did, and since then cooling and heat have not been an issue unless an AIB screws it up (a few did on occasion, like Asus, XFX and Gainward).
  • Malabooga, Member Uncommon, Posts: 2,977
    edited March 2016
    wanderica said:
    Don't forget too that DX12 puts a lot of the performance back in the hands of the developer.  DX12 benchmarks are going to be all over the map for a while.  We're already seeing it.  Tomb Raider benchmarks actually got WORSE on DX12, and that shouldn't happen. 

    Last thing of note, and this is very recent news: nVidia has made the GameWorks source code public.  This is huge for AMD.  GameWorks has been a very large thorn in ATi's side for a while now, but with it going public, they (AMD) have no excuse.  I have no doubt that nVidia will figure async compute out in due time.  It looks like the GPU wars are heating up again, and I think that's a great thing.  Being stuck at 28 nm for so long forced both camps to push the boundaries of what they were capable of, and it's been cutthroat and weird for a while.  14 / 16 nm, and all that comes with it, will benefit us gamers greatly.
    ROTR is proof that if you don't put in DX12 features that actually make the game run better (async compute), it's only as good as previous DX versions were. Some fluff and that's it. It's ironic, really: consoles use async in ROTR and run much better than comparable PC hardware. OTOH, the 12.1 "features" can be run (and poorly at that) only on a $650+ GPU, which makes them irrelevant for 99% of people. Though the ROTR devs said they would bring async back to the PC version eventually, until that happens ROTR is just a DX11 game with a DX12 tag bolted on (just like GOWU).

    As far as GameWorks goes, it isn't as open as it seems at first; there are lots of clauses in the contract you have to sign to use it. We will have to wait and see how it works in practice. The community has AMD's GPUOpen as a completely open platform; if GameWorks stuff continues to run so poorly, it's bye-bye GameWorks.

    AMD CANNOT access the GameWorks source code or do anything with it. You got that wrong. As I said, it's questionable what exactly developers can do with it except look at it.

    And yes, it's the developer's responsibility to make the game work at its best. We see that when DX12/Vulkan is implemented correctly, it gives a nice boost to DX12-capable hardware for free. And I must confess that I like free performance boosts.
  • Quizzical, Member Legendary, Posts: 25,351
    H0urg1ass said:
    Malabooga said:
    H0urg1ass said:
    I have a confession to make.  I'm a fanboy of a particular GPU manufacturer... whichever one is making the best card for the best price.
    That's OK if you're objective about it and don't move the goalposts at every turn.

    For instance:

    Power efficiency was "most important GPU feature EVER"

    Now, when you have to heavily OC NVidia's GPUs, it's the other way around (extreme case: the 980 Ti and Titan X vs. Fury X). When you have to put huge OCs/waterblocks on NVidia's GPUs (which adds a hefty premium cost AND makes them draw 400-500W) to match Fury, it becomes "it's all about performance, and who cares about power draw, only peasants care about it."

    Yeah, it would be nice to have the majority on the objective side, with real information, but it's rarely so :)

    And I don't think people realize what it really means. How will NVidia fare if they have to sell the 980 at the price of a 390 and the 970 at the price of a 380?
    What goalpost?  I check the Tom's Hardware and Overclockers sites and see what the new cards are benchmarking on the games I want to play.  Then I choose whichever one is best at those games for the price.

    Yes, I do OC my CPUs and GPUs, but I was gonna do that anyway, whether or not it puts one card over another.  Why run a 4570K at 3.5 when you can run it at 4.8?  The performance difference is huge.

    I've swapped back and forth between manufacturers whenever it suited my needs.  This whole being locked into one brand is about as retarded as Team Jacob and Team Edward.  I'm on team me, and I pick what's best for my system.
    I don't think he's accusing you of anything.  What sometimes happens is:

    One generation, Nvidia is better at A and AMD is better at B.  Person says buy Nvidia because it's better at A.

    Next generation, Nvidia is better at B and AMD is better at A.  Same person as before says buy Nvidia because it's better at B.

    That constitutes moving the goalposts if there aren't some serious caveats.  There are a lot of possible caveats, of course.

    Even if we're just talking about energy efficiency, someone could have bought a GeForce GTX 480, regretted it because it ran so hot, and then discovered that energy efficiency matters before making his next purchase.

    I personally tend to make the case that the difference between 300 W and 200 W matters a lot more than the difference between 75 W and 50 W.  And also that energy efficiency in a laptop is a huge deal, but not that important in a desktop.

    I've been recommending Nvidia Maxwell cards for laptops for about the last two years, and for the same reason that I recommended AMD over Nvidia in laptops in 2010-2011:  energy efficiency.  If either Polaris or Pascal comes out substantially before the other for laptops, I'll presumably recommend that one for laptops until the other is available.