AMD Radeon RX 480 Graphics Card With Polaris 10 Leaked – 5.5 TFLOPs Compute, 8 GB GDDR5 Memory


Comments

  • Precusor Member UncommonPosts: 3,589
    edited June 2016
    Time to buy AMD GPUs.
  • Malabooga Member UncommonPosts: 2,977
    edited June 2016
    Quizzical said:
    I object to your misuse of the word "core" here.  One could argue that the GPU equivalent of a CPU core is a graphics processing cluster or a compute unit or a sub-slice or a SIMD unit or a shader (yes, I'm deliberately mixing terminology from different vendors), but it's definitely not an entire chip.

    They could theoretically put two GPU dies into a single multi-chip module the way AMD and Intel have done at times with CPUs (e.g., AMD Magny-Cours or Intel Core 2 Quad) with a ton of bandwidth to connect the two on an interposer.  But you'd need crazy amounts of bandwidth connecting the two GPU dies for it to work well.  You'd only do that if either you're pushed hard in that direction because yields on a single large die are terrible or the single die you want is larger than foundries can physically manufacture (around 600 mm^2).  Neither of those are likely to be the case in consoles, as you'd end up with a console that is way too expensive and burns way too much power.

    On your second paragraph, that's not true.  If you have one GPU handle each eye, that will likely scale better than normal CrossFire/SLI.  Maybe two of card X is then 1.7 times as good as one of card X, rather than only 1.4 times as good.  But it's still nowhere near twice as good.

    If you have one big GPU handle everything, it can do a lot of geometry computations once (loosely, entire vertex shader up through most of the tessellation evaluation shader) to see where some vertex is relative to some point, then do separate computations for each eye after that.  If you have two smaller GPUs, everything after you split the computations for each eye scales well, but everything before it has to be replicated on each GPU.
    Actually, that is exactly where a lot of people assume AMD is taking multi-GPU - several GPU chips on an interposer, as they have hinted at doing something like that. And I don't think you understand: two small chips are cheaper than one large chip of the same total area. Consoles want as much performance for as little cost as they can muster. A 600 mm^2 chip would be incredibly expensive, but two 300 mm^2 ones would be half or even a third of the price (not that any of that would be in a console - think two 150 mm^2 chips instead of one 200-230 mm^2 chip for the same price, for instance; the rough yield model below illustrates why). Also, it's much easier to cool two smaller chips than one big one.
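
    A rough, illustrative yield model behind the two-small-dies-are-cheaper claim (the wafer cost and defect density are assumptions picked for round numbers, not foundry data): a bigger die fits fewer times onto a wafer and is more likely to catch a defect, so cost grows faster than area.

        import math

        # Illustrative only: every number here is an assumption, not foundry data.
        WAFER_COST = 8000.0               # assumed price of one 300 mm wafer, USD
        WAFER_AREA = math.pi * 15.0 ** 2  # usable wafer area in cm^2 (~707)
        DEFECT_DENSITY = 0.2              # assumed defects per cm^2

        def cost_per_good_die(die_area_mm2):
            """Cost of one working die under a simple Poisson yield model."""
            area_cm2 = die_area_mm2 / 100.0
            dies_per_wafer = int(WAFER_AREA // area_cm2)       # ignores packing/edge loss
            yield_rate = math.exp(-DEFECT_DENSITY * area_cm2)  # fraction that work
            return WAFER_COST / (dies_per_wafer * yield_rate)

        one_big = cost_per_good_die(600)
        two_small = 2 * cost_per_good_die(300)
        print(f"1 x 600 mm^2: ${one_big:.0f}, 2 x 300 mm^2: ${two_small:.0f} "
              f"({two_small / one_big:.0%} of the big die's cost)")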

    CF already scales around 70% on average in this barely supported ecosystem. If there's more support (and deeper support) it could easily go to 80-90% on average. And that's where consoles come into play - if consoles have dual chips, engines and devs would support it 100%.

    But the ultimate trick is for the API to see a multi-chip setup as a single GPU with combined resources, with an option to split the workload separately if that's more beneficial (for VR, as an example).

    Two GPUs are mostly about latency, which is very important in VR (see the scaling sketch below).
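
    A back-of-the-envelope version of the scaling argument above (the fractions are invented to show the shape of the curve): if a share r of each frame's work has to be replicated on both GPUs and the rest splits cleanly, the pair delivers 2 / (1 + r) times one GPU.

        def dual_gpu_speedup(replicated_fraction):
            """Speedup of two GPUs over one when part of the work is duplicated.

            Each GPU does the replicated work in full plus half the splittable
            work, so a frame takes r + (1 - r) / 2 of the single-GPU time.
            """
            r = replicated_fraction
            return 1.0 / (r + (1.0 - r) / 2.0)

        for r in (0.0, 0.18, 0.43):  # illustrative replicated-work fractions
            print(f"replicated {r:.0%} -> {dual_gpu_speedup(r):.2f}x")
        # ~43% replication gives the ~1.4x of typical CF/SLI, ~18% gives the
        # ~1.7x a per-eye VR split might reach; only 0% reaches a full 2.0x.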
  • Malabooga Member UncommonPosts: 2,977
    edited June 2016
    acidblood said:
    filmoret said:
    Quizzical said:
    Malabooga said:
    filmoret said:
    Yeah, IDK exactly what is wrong, but the image on the left seems like they have foliage turned down, which is where you can really see the difference in quality. Another thing I don't understand is that with dual cards they only needed to run at 58%. Why not just run one card at 98% and it would match the 1080, which is also running at 98%?
    Well, I guess the point they wanted to make was that you can get the same performance for less with 2x480. Just look at how NVidia marketed the 1080 as "2x GTX 980", and that was "bad", as the GTX 980 costs $450 and the 1080 is $599/$699.

    But now AMD is showing that same GTX 980 performance for $200, or $400 for the pair. It also puts the 1070 in the spotlight - the 1070 is $400, and 2x480, as fast as a 1080, is also $400.
    One could argue that Nvidia was saying "buy one GTX 1080 instead of two GTX 980s", while AMD was saying "buy two RX 480s instead of one GTX 1080".  Advice of "buy one card instead of two" is not equivalent to "buy two cards instead of one".  One faster card is preferable to two slower cards.
    But if they get things working properly they could offer a third card, to make it even faster.  So people who cannot afford the $700 card can just buy one at a time and eventually end up with the equivalent of something much better.  Then I guess Nvidia could do the same thing and you could buy two of the $700 cards.  Man, this is giving me a headache now...

    What I'm thinking is you get the two cards, and later, when another card comes out, you can simply upgrade one of them.  So each upgrade will only cost you about $200 instead of $700 for each Nvidia upgrade.  With each upgrade you replace the oldest card, so you end up with two generations of cards, but it ends up being just as fast.  Then again, this probably doesn't work unless they get the interfacing drivers for such a thing.
    It generally doesn't work like that, in that you need two of the same (ideally identical) cards to use them in SLI / XF. Not sure if it's still an option, but SLI did have a thing where you could run one card for PhysX and the other for rendering; I ran that setup for a while, but honestly the benefit was pretty small.

    Not saying that buying one card now and one later is a bad option (I have done it in the past), but the other thing to consider is the size of the card and support from the motherboard / case / power supply. For example, technically I can fit 2 full-size graphics cards in my case, and my MB / PSU is compatible, but it would mean having to take out a hard drive and blocking the 1x slot... so a single-card solution is a better option in my case.

    Actually, with DX12 multiadapter you can use any combination of GPUs. Ironically, it turns out that AMD as the master card and NVidia as the slave card is the best possible option in terms of performance ;). Also, two cards of equal speed are recommended, as the slower card acts as an anchor for the faster one (so in a GTX 960 + R9 390 setup, the 960 would drag performance down to its level - the toy model below makes this concrete).

    Also, one amusing thing about it is that two NVidia cards don't require an SLI bridge to work in DX12 multiadapter, which just suggests SLI bridges are utter nonsense.

    And, of course, devs need to support it, just as with CF/SLI.

    Just google "Ashes of the Singularity DX12 multi adapter".
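
    A toy model of the "anchor" effect (the FPS figures are invented): AFR-style pairing paces to the slower card, while an explicit DX12-style split can size each card's share of the frame to its speed.

        def afr_fps(fast, slow):
            """Alternate-frame rendering: frames alternate, pacing to the slower card."""
            return 2 * min(fast, slow)

        def explicit_split_fps(fast, slow):
            """Explicit multiadapter: split each frame in proportion to card speed."""
            return fast + slow

        r9_390, gtx_960 = 60.0, 35.0  # assumed single-card FPS in some title
        print(f"AFR pair      : {afr_fps(r9_390, gtx_960):.0f} fps")             # 70 - anchored by the 960
        print(f"explicit split: {explicit_split_fps(r9_390, gtx_960):.0f} fps")  # 95 - the ideal case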
  • Malabooga Member UncommonPosts: 2,977
    edited June 2016
    Malabooga said:
    They ran at the same settings; where are you getting lower settings?

    Here is the Fury X on the same settings. You can also check videos of previous NVidia cards.



    What is actually becoming apparent is what some reviewers hinted at: either NVidia natively renders less detail in DX12, or the 1080 has some texture rendering problem, because it didn't render the scene correctly (it's actually the 1080 that seems to render less detail). The 480 was running details at Extreme.

    You also have these

    http://www.ashesofthesingularity.com/metaverse#/personas/b0db0294-8cab-4399-8815-f956a670b68f/match-details/0561a980-78ce-4e24-a7c3-2749a8e33aac

    http://www.ashesofthesingularity.com/metaverse#/personas/d5d8a436-6309-4622-b9f0-c9c141052acd/match-details/f00df630-16f2-49bf-a475-241209ef4128

    And there's also this:

    "Ashes of the singularity uses some form of procedual generation for its texture generation ( aswell as unit composition/behavior to prevent driver cheats) which means that every game session and bench run will have various differences in some details."

    https://www.reddit.com/r/Amd/comments/4lz6ay/anyone_else_noticed_this_at_the_amd_livestream/d3rc5hv





    I like that this has surfaced, because now that people have investigated, it turns out that the GTX 1080 is in fact rendering the game incorrectly: its output is different from previous AMD and NVidia cards, including the RX 480.

    both cards were running these settings (confirmed by AMD)

    [image: amd rx 480 crossfire vs nvidia gtx 1080 results]


    BUT (oh, this is quite a turnaround)

    Yet the 1080 has "different" visuals from any other card, like omitting some details (there are plenty of YouTube videos you can compare it to):

    https://www.reddit.com/r/Amd/comments/4m692q/concerning_the_aots_image_quality_controversy/

    "At present the GTX 1080 is incorrectly executing the terrain shaders responsible for populating the environment with the appropriate amount of snow. The GTX 1080 is doing less work to render AOTS than it otherwise would if the shader were being run properly. Snow is somewhat flat and boring in color compared to shiny rocks, which gives the illusion that less is being rendered, but this is an incorrect interpretation of how the terrain shaders are functioning in this title."

    Nvidia cheating in benchmarks with "driver optimizations" by lowering IQ for better results? Again?
  • filmoret Member EpicPosts: 4,906
    Malabooga said:
    Malabooga said:
    They ran at the same settings; where are you getting lower settings?

    Here is the Fury X on the same settings. You can also check videos of previous NVidia cards.



    What is actually becoming apparent is what some reviewers hinted at: either NVidia natively renders less detail in DX12, or the 1080 has some texture rendering problem, because it didn't render the scene correctly (it's actually the 1080 that seems to render less detail). The 480 was running details at Extreme.

    You also have these

    http://www.ashesofthesingularity.com/metaverse#/personas/b0db0294-8cab-4399-8815-f956a670b68f/match-details/0561a980-78ce-4e24-a7c3-2749a8e33aac

    http://www.ashesofthesingularity.com/metaverse#/personas/d5d8a436-6309-4622-b9f0-c9c141052acd/match-details/f00df630-16f2-49bf-a475-241209ef4128

    And there's also this:

    "Ashes of the singularity uses some form of procedual generation for its texture generation ( aswell as unit composition/behavior to prevent driver cheats) which means that every game session and bench run will have various differences in some details."

    https://www.reddit.com/r/Amd/comments/4lz6ay/anyone_else_noticed_this_at_the_amd_livestream/d3rc5hv





    I like that this has surfaced, because now that people have investigated, it turns out that the GTX 1080 is in fact rendering the game incorrectly: its output is different from previous AMD and NVidia cards, including the RX 480.

    both cards were running these settings (confirmed by AMD)

    [image: amd rx 480 crossfire vs nvidia gtx 1080 results]


    BUT (oh, this is quite a turnaround)

    Yet the 1080 has "different" visuals from any other card, like omitting some details (there are plenty of YouTube videos you can compare it to):

    https://www.reddit.com/r/Amd/comments/4m692q/concerning_the_aots_image_quality_controversy/

    "At present the GTX 1080 is incorrectly executing the terrain shaders responsible for populating the environment with the appropriate amount of snow. The GTX 1080 is doing less work to render AOTS than it otherwise would if the shader were being run properly. Snow is somewhat flat and boring in color compared to shiny rocks, which gives the illusion that less is being rendered, but this is an incorrect interpretation of how the terrain shaders are functioning in this title."

    Nvidia cheating in benchmarks with "driver optimizations" by lowering IQ for better results? Again?
    Many bugs yet to be worked out, apparently.  Considering the 1080 basically had a panic launch to save shareholders, it is to be expected that the card needs driver updates.  This dual-card option is looking like a good possibility for the future of computers, if they can get the software right, or if someone can write a universal program, which DX12 apparently has done - but one that works, and not like that poor guy who only had success with 3 games while other games were actually worse with dual cards.
  • heerobya Member UncommonPosts: 465
    edited June 2016
    Recore said:
    AMD just confirmed the price of the 480 will be $199. 

    Over 5 TFLOPS and made for VR. 

    And this is how the new PS4.5 and XB2 will have 4-5x the current power.

    They'll get cards such as this, custom for their boxes, for half this price.
  • Malabooga Member UncommonPosts: 2,977
    edited June 2016
    heerobya said:
    Recore said:
    AMD just confirmed the price of the 480 will be $199. 

    Over 5 TFLOPS and made for VR. 

    And this is how the new PS4.5 and XB2 will have 4-5x the current power.

    They'll get cards such as this, custom for their boxes, for half this price.
    Well, this is pretty much the chip that will go into the PS 4K. Whether it will be the full or a cut-down chip... we'll see.

    This has a few benefits for everyone:

    - economies of scale - since they intend to sell tens of millions of these, they can sell them cheaper
    - AAA games (and PS VR content) will be optimized specifically for this exact chip (the Neo is supposed to have 8 GB of memory)

    win-win for everyone

    The best part was the id Software guys...

    "We were not be able to be there at that lame launch, because we are hard at work improving Doom for hardware that matters, instead of a insignificant mainstream product."

    You mean for consoles - and it just happens that consoles have AMD's GCN architecture and all the goodies like async compute and intrinsic shaders.
  • filmoret Member EpicPosts: 4,906
    Malabooga said:
    heerobya said:
    Recore said:
    AMD just confirmed the price of the 480 will be $199. 

    Over 5 TFLOPS and made for VR. 

    And this is how the new PS4.5 and XB2 will have 4-5x the current power.

    They'll get cards such as this, custom for their boxes, for half this price.
    Well, this is pretty much the chip that will go into the PS 4K. Whether it will be the full or a cut-down chip... we'll see.

    This has a few benefits for everyone:

    - economies of scale - since they intend to sell tens of millions of these, they can sell them cheaper
    - AAA games (and PS VR content) will be optimized specifically for this exact chip (the Neo is supposed to have 8 GB of memory)

    win-win for everyone

    The best part was the id Software guys...

    "We were not be able to be there at that lame launch, because we are hard at work improving Doom for hardware that matters, instead of a insignificant mainstream product."

    You mean for consoles - and it just happens that consoles have AMD's GCN architecture and all the goodies like async compute and intrinsic shaders.
    Actually, they could sell the PS with just one of the chips and offer the other as an upgrade.  Ah, remember the days when a console came with 2 controllers and memory to save games?  They even came with the adapter so you could hook it up to the television.  Then the Playstation came out and you only got 1 controller, then had to pay for memory and pay for something to hook it up to the television.  This will give them another excuse to sell an add-on.
  • Mikeha Member EpicPosts: 9,196
    Ridelynn said:
    Recore said:
    Clock speed doesn't tell the whole story though.

    Otherwise, we'd all be using AMD FX-9590 CPUs rocking 5 GHz stock.

    It is not supposed to tell any story. 

    I am just posting the reported clock speed of the card. 
  • Quizzical Member LegendaryPosts: 25,355
    edited June 2016
    Malabooga said:
    acidblood said:
    filmoret said:
    Quizzical said:
    Malabooga said:
    filmoret said:
    Yeah, IDK exactly what is wrong, but the image on the left seems like they have foliage turned down, which is where you can really see the difference in quality. Another thing I don't understand is that with dual cards they only needed to run at 58%. Why not just run one card at 98% and it would match the 1080, which is also running at 98%?
    Well, I guess the point they wanted to make was that you can get the same performance for less with 2x480. Just look at how NVidia marketed the 1080 as "2x GTX 980", and that was "bad", as the GTX 980 costs $450 and the 1080 is $599/$699.

    But now AMD is showing that same GTX 980 performance for $200, or $400 for the pair. It also puts the 1070 in the spotlight - the 1070 is $400, and 2x480, as fast as a 1080, is also $400.
    One could argue that Nvidia was saying "buy one GTX 1080 instead of two GTX 980s", while AMD was saying "buy two RX 480s instead of one GTX 1080".  Advice of "buy one card instead of two" is not equivalent to "buy two cards instead of one".  One faster card is preferable to two slower cards.
    But if they get things working properly they could offer a third card, to make it even faster.  So people who cannot afford the $700 card can just buy one at a time and eventually end up with the equivalent of something much better.  Then I guess Nvidia could do the same thing and you could buy two of the $700 cards.  Man, this is giving me a headache now...

    What I'm thinking is you get the two cards, and later, when another card comes out, you can simply upgrade one of them.  So each upgrade will only cost you about $200 instead of $700 for each Nvidia upgrade.  With each upgrade you replace the oldest card, so you end up with two generations of cards, but it ends up being just as fast.  Then again, this probably doesn't work unless they get the interfacing drivers for such a thing.
    It generally doesn't work like that, in that you need two of the same (ideally identical) cards to use them in SLI / XF. Not sure if it's still an option, but SLI did have a thing where you could run one card for PhysX and the other for rendering; I ran that setup for a while, but honestly the benefit was pretty small.

    Not saying that buying one card now and one later is a bad option (I have done it in the past), but the other thing to consider is the size of the card and support from the motherboard / case / power supply. For example, technically I can fit 2 full-size graphics cards in my case, and my MB / PSU is compatible, but it would mean having to take out a hard drive and blocking the 1x slot... so a single-card solution is a better option in my case.

    Actually, with DX12 multiadapter you can use any combination of GPUs. Ironically, it turns out that AMD as the master card and NVidia as the slave card is the best possible option in terms of performance ;). Also, two cards of equal speed are recommended, as the slower card acts as an anchor for the faster one (so in a GTX 960 + R9 390 setup, the 960 would drag performance down to its level).

    Also, one amusing thing about it is that two NVidia cards don't require an SLI bridge to work in DX12 multiadapter, which just suggests SLI bridges are utter nonsense.

    And, of course, devs need to support it, just as with CF/SLI.

    Just google "Ashes of the Singularity DX12 multi adapter".
    I haven't looked into it in a while, but Nvidia used to require two of the same GPU for SLI.  AMD would allow two slightly different GPUs based on the same GPU for CrossFire, by running both of them at the specs of the slower GPU.  I don't know if that has changed in the last several years.

    DirectX 12 makes it possible to have multi-GPU scaling with wildly different GPUs if the game supports it.  But this means the game has to put a bunch of custom code in to support it.  Even for a developer who is inclined to do this, what makes sense to do varies by game.  Not by game engine, mind you, but by game--and different games with the same engine may need to take very different approaches to get optimal use of multiple GPUs.  I don't expect that to ever be common.

    In contrast, CrossFire and SLI are handled by driver magic.  Games don't have to do anything in particular to support them beyond refraining from doing certain things that will break them.  Per-game fixes are the responsibility of the people who write the drivers.
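
    To make the "custom code per game" point concrete, here is a minimal sketch (the adapter names and speed figures are hypothetical) of one policy a game might implement under explicit multi-GPU: splitting a frame's scanlines in proportion to each adapter's measured speed. Another game might split by eye, by render pass, or not at all, which is exactly why a driver can't do this generically.

        from dataclasses import dataclass

        @dataclass
        class Adapter:
            name: str
            relative_speed: float  # assumed throughput, e.g. from a startup benchmark

        def split_scanlines(height, adapters):
            """Assign each adapter a band of scanlines proportional to its speed."""
            total = sum(a.relative_speed for a in adapters)
            bands, start = [], 0
            for a in adapters:
                rows = round(height * a.relative_speed / total)
                end = min(start + rows, height)
                bands.append([a.name, start, end])
                start = end
            bands[-1][2] = height  # last band absorbs any rounding slack
            return bands

        for name, top, bottom in split_scanlines(1080, [Adapter("RX 480", 1.0),
                                                        Adapter("GTX 960", 0.55)]):
            print(f"{name}: rows {top}-{bottom - 1}")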
  • filmoret Member EpicPosts: 4,906
    Quizzical said:
    Malabooga said:
    acidblood said:
    filmoret said:
    Quizzical said:
    Malabooga said:
    filmoret said:
    Yeah, IDK exactly what is wrong, but the image on the left seems like they have foliage turned down, which is where you can really see the difference in quality. Another thing I don't understand is that with dual cards they only needed to run at 58%. Why not just run one card at 98% and it would match the 1080, which is also running at 98%?
    Well, I guess the point they wanted to make was that you can get the same performance for less with 2x480. Just look at how NVidia marketed the 1080 as "2x GTX 980", and that was "bad", as the GTX 980 costs $450 and the 1080 is $599/$699.

    But now AMD is showing that same GTX 980 performance for $200, or $400 for the pair. It also puts the 1070 in the spotlight - the 1070 is $400, and 2x480, as fast as a 1080, is also $400.
    One could argue that Nvidia was saying "buy one GTX 1080 instead of two GTX 980s", while AMD was saying "buy two RX 480s instead of one GTX 1080".  Advice of "buy one card instead of two" is not equivalent to "buy two cards instead of one".  One faster card is preferable to two slower cards.
    But if they get things working properly they could offer a third card, to make it even faster.  So people who cannot afford the $700 card can just buy one at a time and eventually end up with the equivalent of something much better.  Then I guess Nvidia could do the same thing and you could buy two of the $700 cards.  Man, this is giving me a headache now...

    What I'm thinking is you get the two cards, and later, when another card comes out, you can simply upgrade one of them.  So each upgrade will only cost you about $200 instead of $700 for each Nvidia upgrade.  With each upgrade you replace the oldest card, so you end up with two generations of cards, but it ends up being just as fast.  Then again, this probably doesn't work unless they get the interfacing drivers for such a thing.
    It generally doesn't work like that, in that you need two of the same (ideally identical) cards to use them in SLI / XF. Not sure if it's still an option, but SLI did have a thing where you could run one card for PhysX and the other for rendering; I ran that setup for a while, but honestly the benefit was pretty small.

    Not saying that buying one card now and one later is a bad option (I have done it in the past), but the other thing to consider is the size of the card and support from the motherboard / case / power supply. For example, technically I can fit 2 full-size graphics cards in my case, and my MB / PSU is compatible, but it would mean having to take out a hard drive and blocking the 1x slot... so a single-card solution is a better option in my case.

    Actually, with DX12 multiadapter you can use any combination of GPUs. Ironically, it turns out that AMD as the master card and NVidia as the slave card is the best possible option in terms of performance ;). Also, two cards of equal speed are recommended, as the slower card acts as an anchor for the faster one (so in a GTX 960 + R9 390 setup, the 960 would drag performance down to its level).

    Also, one amusing thing about it is that two NVidia cards don't require an SLI bridge to work in DX12 multiadapter, which just suggests SLI bridges are utter nonsense.

    And, of course, devs need to support it, just as with CF/SLI.

    Just google "Ashes of the Singularity DX12 multi adapter".
    I haven't looked into it in a while, but Nvidia used to require two of the same GPU for SLI.  AMD would allow two slightly different GPUs based on the same GPU for CrossFire, by running both of them at the specs of the slower GPU.  I don't know if that has changed in the last several years.

    DirectX 12 makes it possible to have multi-GPU scaling with wildly different GPUs if the game supports it.  But this means the game has to put a bunch of custom code in to support it.  Even for a developer who is inclined to do this, what makes sense to do varies by game.  Not by game engine, mind you, but by game--and different games with the same engine may need to take very different approaches to get optimal use of multiple GPUs.  I don't expect that to ever be common.

    In contrast, CrossFire and SLI are handled by driver magic.  Games don't have to do anything in particular to support them beyond refraining from doing certain things that will break them.  Per-game fixes are the responsibility of the people who write the drivers.
    Is there no way to make a universal program that does the work for the games?  So all the games would need to do is write in a certain language, and some program like DX12 would be responsible for adapting it to all the GPUs.  Of course, all the games would then be required to use the same code.  But the program could get a share and make a fortune, if it's even possible.
  • Quizzical Member LegendaryPosts: 25,355
    filmoret said:
    Quizzical said:
    Malabooga said:
    acidblood said:
    filmoret said:
    Quizzical said:
    Malabooga said:
    filmoret said:
    Yeah, IDK exactly what is wrong, but the image on the left seems like they have foliage turned down, which is where you can really see the difference in quality. Another thing I don't understand is that with dual cards they only needed to run at 58%. Why not just run one card at 98% and it would match the 1080, which is also running at 98%?
    Well, I guess the point they wanted to make was that you can get the same performance for less with 2x480. Just look at how NVidia marketed the 1080 as "2x GTX 980", and that was "bad", as the GTX 980 costs $450 and the 1080 is $599/$699.

    But now AMD is showing that same GTX 980 performance for $200, or $400 for the pair. It also puts the 1070 in the spotlight - the 1070 is $400, and 2x480, as fast as a 1080, is also $400.
    One could argue that Nvidia was saying "buy one GTX 1080 instead of two GTX 980s", while AMD was saying "buy two RX 480s instead of one GTX 1080".  Advice of "buy one card instead of two" is not equivalent to "buy two cards instead of one".  One faster card is preferable to two slower cards.
    But if they get things working properly they could offer a third card, to make it even faster.  So people who cannot afford the $700 card can just buy one at a time and eventually end up with the equivalent of something much better.  Then I guess Nvidia could do the same thing and you could buy two of the $700 cards.  Man, this is giving me a headache now...

    What I'm thinking is you get the two cards, and later, when another card comes out, you can simply upgrade one of them.  So each upgrade will only cost you about $200 instead of $700 for each Nvidia upgrade.  With each upgrade you replace the oldest card, so you end up with two generations of cards, but it ends up being just as fast.  Then again, this probably doesn't work unless they get the interfacing drivers for such a thing.
    It generally doesn't work like that, in that you need two of the same (ideally identical) cards to use them in SLI / XF. Not sure if it's still an option, but SLI did have a thing where you could run one card for PhysX and the other for rendering; I ran that setup for a while, but honestly the benefit was pretty small.

    Not saying that buying one card now and one later is a bad option (I have done it in the past), but the other thing to consider is the size of the card and support from the motherboard / case / power supply. For example, technically I can fit 2 full-size graphics cards in my case, and my MB / PSU is compatible, but it would mean having to take out a hard drive and blocking the 1x slot... so a single-card solution is a better option in my case.

    Actually, with DX12 multiadapter you can use any combination of GPUs. Ironically, it turns out that AMD as the master card and NVidia as the slave card is the best possible option in terms of performance ;). Also, two cards of equal speed are recommended, as the slower card acts as an anchor for the faster one (so in a GTX 960 + R9 390 setup, the 960 would drag performance down to its level).

    Also, one amusing thing about it is that two NVidia cards don't require an SLI bridge to work in DX12 multiadapter, which just suggests SLI bridges are utter nonsense.

    And, of course, devs need to support it, just as with CF/SLI.

    Just google "Ashes of the Singularity DX12 multi adapter".
    I haven't looked into it in a while, but Nvidia used to require two of the same GPU for SLI.  AMD would allow two slightly different GPUs based on the same GPU for CrossFire, by running both of them at the specs of the slower GPU.  I don't know if that has changed in the last several years.

    DirectX 12 makes it possible to have multi-GPU scaling with wildly different GPUs if the game supports it.  But this means the game has to put a bunch of custom code in to support it.  Even for a developer who is inclined to do this, what makes sense to do varies by game.  Not by game engine, mind you, but by game--and different games with the same engine may need to take very different approaches to get optimal use of multiple GPUs.  I don't expect that to ever be common.

    In contrast, CrossFire and SLI are handled by driver magic.  Games don't have to do anything in particular to support them beyond refraining from doing certain things that will break them.  Per-game fixes are the responsibility of the people who write the drivers.
    Is there no way to make a universal program that does the work for the games?  So all the games would need to do is write in a certain language, and some program like DX12 would be responsible for adapting it to all the GPUs.  Of course, all the games would then be required to use the same code.  But the program could get a share and make a fortune, if it's even possible.
    What do you think CrossFire and SLI are, if not ways to automatically handle multi-GPU scaling?  They have a number of advantages:

    1)  Alternate frame rendering eliminates the need for potentially arbitrarily much communication across GPUs within a frame.
    2)  The ability to implement them in drivers means you can do a lot more than an external program.  For example, if they thought it was useful, they could modify the shader compilers.  It also means no need to wait for APIs that do what you want.
    3)  The opportunity to place arbitrary hardware restrictions on what is allowed, so that you only have to handle the simplest cases of hardware configurations, not arbitrarily weird ones.

    And they still often give negative frame rate scaling.
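
    Rough numbers behind point 1 (every figure here is an assumption): with AFR, only finished frames cross between GPUs, while an intra-frame split also has to move shared intermediates (shadow maps and the like) mid-frame, where the round trip stalls rendering.

        W, H, FPS = 2560, 1440, 60
        frame = W * H * 4                     # one finished RGBA8 frame, in bytes
        shadow_atlas = 4096 * 4096 * 4        # an assumed shared 32-bit shadow-map atlas

        afr = frame * FPS                     # AFR: ship completed frames only
        split = (frame + shadow_atlas) * FPS  # intra-frame split: share intermediates too

        print(f"AFR  : {afr / 1e9:.1f} GB/s")    # ~0.9 GB/s - easy for PCIe 3.0 x16
        print(f"split: {split / 1e9:.1f} GB/s")  # ~4.9 GB/s - and needed mid-frame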
  • Thomas2006 Member RarePosts: 1,152
    Everyone keeps bringing up DirectX 12 multiadapter, but let's all be honest: we are more than likely 2-3 years away from the major engines having support for DX12 multiadapter. And by then all these cards coming out now will be mid- to low-end cards. Microsoft will be pushing DX12.2 or DX13 at us by that stage.

    So having a card ride on its ability to scale with multiadapter support is a waste of time.

    So far we have one game, AotS, that supports it, and the results vary greatly depending on so many factors that it's crazy to think about.

    Epic with UE4 has not supported multiadapter yet because their deferred rendering path just does not lend itself to multiadapter support. With a forward renderer it is easy to toss something like multiadapter in, but a deferred renderer, which both Unity and UE4 use, is a different story (see the arithmetic below). This is why I said I doubt you are going to see large support for multiadapter anytime soon. Both Unity and Epic seem uninterested in making their engines work with it (for whatever reasons they have), and thus a large portion of the gaming market will not pick it up.
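
    Some arithmetic behind the deferred-rendering objection (target count and sizes are assumed for illustration): a deferred engine writes a fat G-buffer before lighting, and splitting that work across adapters means merging those buffers every frame.

        W, H, FPS = 3840, 2160, 60     # 4K at 60 fps
        TARGETS = 5                    # e.g. albedo, normals, material, motion, depth
        BYTES_PER_PIXEL = 4            # 32 bits per target

        gbuffer = W * H * TARGETS * BYTES_PER_PIXEL
        print(f"G-buffer: {gbuffer / 1e6:.0f} MB per frame")              # ~166 MB
        print(f"merge cost: {gbuffer * FPS / 1e9:.1f} GB/s across GPUs")  # ~10 GB/s
        # That approaches PCIe 3.0 x16's ~16 GB/s one way, while a forward
        # renderer can hand each GPU an independent slice of final pixels.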
  • Malabooga Member UncommonPosts: 2,977
    edited June 2016
    Quizzical said:
    What do you think CrossFire and SLI are, if not ways to automatically handle multi-GPU scaling?  They have a number of advantages:

    1)  Alternate frame rendering eliminates the need for potentially arbitrarily much communication across GPUs within a frame.
    2)  The ability to implement them in drivers means you can do a lot more than an external program.  For example, if they thought it was useful, they could modify the shader compilers.  It also means no need to wait for APIs that do what you want.
    3)  The opportunity to place arbitrary hardware restrictions on what is allowed, so that you only have to handle the simplest cases of hardware configurations, not arbitrarily weird ones.

    And they still often give negative frame rate scaling.
    Plenty of games don't support CF/SLI, so nothing is automatic.

    1. AFR is largely inferior to SFR. In fact, one big reason for DX12 multiadapter is to ditch AFR.

    2. That's really a minor thing. And with low-level APIs... the game has to work without that.

    3. You can place arbitrary restrictions anyway if you want.

    And that's why it needs to go forward: as with CPUs, it's pretty much the future - both multiple GPUs on a single PCB and multiple PCBs.

    The benefits are obvious:

    AMD can offer GTX 1080 performance for 30% less money, or 25% more performance for the same money in the case of the GTX 1070. And that's only with 60-70% scaling (quick arithmetic below).
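
    The quick arithmetic behind the price claim, using launch list prices from earlier in the thread (whether the pair really matches a 1080 depends on that 60-70% scaling holding up):

        rx480, gtx1080_msrp, gtx1080_fe = 199, 599, 699  # launch USD list prices
        pair = 2 * rx480                                 # $398 for the CrossFire pair
        print(f"2x RX 480 = ${pair}")
        print(f"vs $599 GTX 1080 : {1 - pair / gtx1080_msrp:.0%} less")  # ~34% less
        print(f"vs $699 Founders : {1 - pair / gtx1080_fe:.0%} less")    # ~43% less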