
The future of multi-GPU rigs

Quizzical Member Legendary Posts: 25,351

The immediate impetus for this post was this article:

http://www.tomshardware.com/news/microsoft-directx12-amd-nvidia,28606.html

You can read it if you like, but the article gave the impression that the author had never done graphics programming and that a lot of what he said was guesswork.  So most of what I'm going to say isn't from that article.

The article claims that DirectX 12 will allow games to use both AMD and Nvidia cards together to share the rendering duties.  I not only find this highly plausible, but would be surprised if it isn't true.  And I expect the same for OpenGL 5.

There is, in fact, already an API out there that lets you use AMD and Nvidia video cards together in the same program to do stuff.  Intel graphics, too.  It's called OpenCL.  But it's not built for graphics.  It is, however, pretty obvious how to bring the same functionality from OpenCL into OpenGL, which is why I'd be surprised if it doesn't happen with OpenGL 5.

The way that OpenGL 4.x works today is that you have some primary GPU and you tell it: buffer this texture, buffer this vertex array, compile this shader, buffer this uniform, run this draw call, and so on.  DirectX does basically the same thing.  SLI and CrossFire are handled purely by driver magic, where the video drivers take the API calls and decide which GPU each one needs to be passed to--or in some cases, both.
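
To make that concrete, here's a minimal sketch (my own illustration, not anything from the article) of what issuing a draw looks like in OpenGL today.  The helper function and its parameters are made up; the point is that nothing in the code says which GPU does the work:

```c
/* A minimal sketch of the single-queue model described above: every call is
 * issued against "the" GPU the context is bound to, and the driver alone
 * decides how SLI/CrossFire distributes the work.  Assumes a loader such as
 * GLEW and that the program, VAO, and texture were set up earlier; error
 * checking is omitted. */
#include <GL/glew.h>

void draw_object(GLuint program, GLuint vao, GLuint texture, GLsizei vertex_count)
{
    glUseProgram(program);                        /* "compile this shader" happened earlier */
    glBindTexture(GL_TEXTURE_2D, texture);        /* "buffer this texture" happened earlier */
    glBindVertexArray(vao);                       /* "buffer this vertex array" too         */
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);  /* "run this draw call"                   */
}
```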

The way that OpenCL works, in contrast, is that you can ask for all of the OpenCL-compliant devices in the machine.  In a consumer rig, this would typically mean the CPU and whatever GPUs you have--both discrete and integrated if you have both.  And then you can tell it, run this API command on this device, run that API command on that device, and so forth.  OpenCL lacks the graphics-specific API commands of OpenGL, so if you want to use it for graphics, you're going to have to roll your own on a lot of things.  Some things such as hardware tessellation are completely unavailable.
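
Here's roughly what that looks like in practice: a minimal OpenCL host-code sketch that enumerates every device in the machine and gives each one its own command queue.  Error handling is omitted for brevity.

```c
/* Each vendor (AMD, Nvidia, Intel) exposes its own OpenCL platform, and
 * every device gets its own command queue that work can be submitted to
 * independently. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("Found device: %s\n", name);

            /* One context and one queue per device: from here on, you choose
             * which device each kernel launch or buffer copy goes to. */
            cl_context ctx = clCreateContext(NULL, 1, &devices[d], NULL, NULL, NULL);
            cl_command_queue queue = clCreateCommandQueue(ctx, devices[d], 0, NULL);

            /* ... enqueue kernels, buffer transfers, etc. on this queue ... */

            clReleaseCommandQueue(queue);
            clReleaseContext(ctx);
        }
    }
    return 0;
}
```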

So what happens if you mix the OpenCL queue system that lets you send different API calls to different devices with the graphics-specific features of OpenGL?  That's what I expect OpenGL 5 to demonstrate, and it sounds like DirectX 12 will, too.

Does this mean that adding a second GPU will double your performance?  No, not at all.  If you have two identical GPUs, it's not automatic that the new options of DirectX 12 and/or OpenGL 5 will actually give you better ways to use both GPUs than traditional SLI/CrossFire.  But it would give you a lot of options to take advantage of two different GPUs in a system, such as the integrated graphics that came with the CPU as well as a discrete card.  Or, if it becomes popular, the newer video card you just bought, together with the older card it replaced.

However, this is going to be very game-dependent.  If a game developer doesn't explicitly implement ways to take advantage of multiple GPUs, then the game won't take advantage of multiple GPUs at all.  This isn't something that can be done well by driver magic.  Remember the Lucid Hydra, which let you have two different GPUs in the same machine and use them together to render a game?  Sometimes using both GPUs was slower than just using one.

But there are a lot of things that a game developer could implement.  Probably the simplest is having one GPU render the game while the other does something else.  Some games have already done this with physics, mainly using Nvidia's PhysX.  That required both GPUs to be Nvidia, however, which basically killed it.  It might have had a chance if you could do physics on the integrated graphics while rendering the game on a discrete card, but Nvidia has no x86 license, and hence no high-performance integrated graphics for desktops or laptops.

In the game I've been working on, I've long pondered using OpenCL to generate textures from formulas, since that work is embarrassingly parallel.  That, like GPU physics, could be done today, though; it doesn't require new APIs.  And most (meaning, nearly all) games load textures from storage rather than generating them from formulas.
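
For what it's worth, a texture-generation kernel along those lines might look something like this in OpenCL C.  The radial gradient is just a placeholder formula, not anything from my game:

```c
/* Every work-item computes one texel from a formula, with no dependence on
 * any other texel--which is exactly what "embarrassingly parallel" means. */
__kernel void generate_texture(__global uchar4 *texels,
                               const uint width,
                               const uint height)
{
    const uint x = get_global_id(0);
    const uint y = get_global_id(1);
    if (x >= width || y >= height)
        return;

    /* Normalized distance from the center of the texture. */
    const float dx = (float)x / width  - 0.5f;
    const float dy = (float)y / height - 0.5f;
    const float d  = clamp(2.0f * sqrt(dx * dx + dy * dy), 0.0f, 1.0f);

    const uchar v = (uchar)(255.0f * (1.0f - d));
    texels[y * width + x] = (uchar4)(v, v, v, 255);
}
```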

So how would two GPUs work together to render a frame?  One alternative is split frame rendering.  One GPU handles the left side of the screen, while the other does the right.  Once they've both rendered their portion, you copy one GPU's framebuffer to the other and display it.

This notably wouldn't require comparably powered GPUs.  The player could be allowed to mark where the line between the GPUs would be.  If you have, for example, a GeForce GTX 980 and also a GeForce GTX 580, you could have the former render 70% of the screen and the latter 30%.
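
The arithmetic for a user-adjustable split is trivial.  A hypothetical sketch--the struct and function are made up for illustration, not an existing API:

```c
/* Turn a user-configurable split fraction into per-GPU scissor rectangles.
 * "gpu_share" is the fraction of the screen width given to the faster card. */
typedef struct { int x, y, width, height; } ScissorRect;

void split_frame(int screen_w, int screen_h, float gpu_share,
                 ScissorRect *fast_gpu, ScissorRect *slow_gpu)
{
    int split_x = (int)(screen_w * gpu_share + 0.5f);  /* e.g. 70% of 1920 = 1344 */

    /* Faster GPU renders the left portion... */
    fast_gpu->x = 0;
    fast_gpu->y = 0;
    fast_gpu->width  = split_x;
    fast_gpu->height = screen_h;

    /* ...and the slower GPU renders the rest. */
    slow_gpu->x = split_x;
    slow_gpu->y = 0;
    slow_gpu->width  = screen_w - split_x;
    slow_gpu->height = screen_h;
}
```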

Split frame rendering isn't new, though--and AMD and Nvidia moved away from it as the way to implement CrossFire and SLI for good reasons.  A lot of the work up to rasterization would have to be duplicated on both GPUs.  An intelligent programmer designing his game for this could filter a lot of objects as being off of the portion of the screen that that GPU will render, but anything near or on the boundary line would have to be sent to both.  This can't be done purely by driver magic, which is why it isn't how CrossFire or SLI work today.  Split frame rendering would also disable any features handled by driver magic that aren't common to both of the cards.  If the screen is split in half and the two halves do anti-aliasing differently, it's likely going to look bad.
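
The per-object filtering could be as simple as testing each object's projected screen-space bounds against the split line.  A sketch of the idea, assuming those bounds have already been computed (the struct and constants are made up for illustration):

```c
/* Decide which GPU(s) need a given draw call.  Objects that straddle the
 * split line have to be sent to both GPUs, which is the duplicated work
 * described above. */
typedef struct { float min_x, max_x; } ScreenBoundsX;

enum { SEND_TO_LEFT = 1, SEND_TO_RIGHT = 2 };

int classify_object(ScreenBoundsX b, float split_x)
{
    int targets = 0;
    if (b.min_x < split_x)  targets |= SEND_TO_LEFT;   /* overlaps left region  */
    if (b.max_x >= split_x) targets |= SEND_TO_RIGHT;  /* overlaps right region */
    return targets;  /* both bits set => draw on both GPUs */
}
```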

And while the article is hopeful that this will free up video memory, split frame rendering doesn't help much there.  All of the textures and vertex data would have to be on both cards, as the camera can rotate quickly.  If you try to only buffer things for one side or the other, you're probably going to get hitching when objects switch sides.

Another approach would be to have one GPU handle the near objects, while the other handles the far objects.  This would allow for a lot of texture and vertex data to only be on one video card, which does free up video memory.  It doesn't necessarily free up all that much on one of the cards, however, as the "far" card has to have nearly everything buffered.  But it could be useful if you know that one card is going to have a lot more video memory than the other.

This approach has drawbacks of its own, though.  Merging the frame buffers necessarily means that one GPU has to send its entire framebuffer for the entire screen to the other, not just part of it.  Furthermore, the depth buffer needs to be sent, too.  You could compress the depth buffer to a single bit per pixel before sending it, as the "back" GPU just needs to know if the "front" one drew anything at all, but compressing and decompressing the data is a performance hit, too.  Furthermore, this would make it very hard to balance the load between the GPUs.
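
The one-bit-per-pixel compression is basically a coverage mask.  A sketch, assuming the depth buffer has been read back and that untouched pixels still hold the clear value (the function and its parameters are made up for illustration):

```c
/* Pack the "front" GPU's depth buffer into one bit per pixel: the receiving
 * GPU only needs to know whether anything was drawn there at all.  The
 * caller supplies a "bits" buffer of at least (pixel_count + 7) / 8 bytes. */
#include <stdint.h>
#include <stddef.h>

void pack_coverage(const float *depth, size_t pixel_count,
                   float clear_depth, uint8_t *bits)
{
    for (size_t i = 0; i < pixel_count; ++i) {
        if (depth[i] != clear_depth)                 /* something was drawn here */
            bits[i / 8] |= (uint8_t)(1u << (i % 8));
        else
            bits[i / 8] &= (uint8_t)~(1u << (i % 8));
    }
}
```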

Yet another approach would be to pick objects and arbitrarily assign them to GPUs, so that each GPU gets some fixed percentage of things to draw.  This would balance the load between GPUs pretty well, and once both have rendered their frame, one GPU sends its entire frame buffer to the other for a final draw call to merge them.  This would also allow most texture and vertex data to only be present on one GPU, as the same GPU could render the same objects every frame.
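
Keeping the same objects on the same GPU every frame could be done with nothing more than a stable hash of an object ID.  A sketch with made-up names, using the 70/30 idea from the earlier GTX 980 / GTX 580 example:

```c
/* Assign objects to GPUs in a fixed proportion.  Hashing a stable object ID
 * keeps each object on the same GPU every frame, so its textures and vertex
 * data only ever need to live on that card. */
#include <stdint.h>

int choose_gpu(uint32_t object_id, float fast_gpu_share)
{
    /* Cheap multiplicative hash so consecutive IDs don't cluster on one GPU. */
    uint32_t h = object_id * 2654435761u;
    float u = (h & 0xFFFFFF) / (float)0x1000000;  /* roughly uniform in [0, 1) */
    return (u < fast_gpu_share) ? 0 : 1;          /* 0 = faster card           */
}

/* Usage: choose_gpu(id, 0.7f) sends about 70% of objects to GPU 0. */
```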

But this would require sending the entire depth buffer, in addition to the frame buffer, while making it impossible to compress either.  That's a lot of PCI Express bandwidth.  Furthermore, if there are post-processing effects, you'd have to wait for both GPUs to finish the "main" part of the frame, then one sends its data to the other, and one does post-processing completely on its own.  This would break a lot of anti-aliasing methods, too; post-processing forms of anti-aliasing would still be available, but one GPU would have to handle it entirely on its own.

The final approach that I'd like to mention is having one GPU render the "main" frame, while the other does only post-processing effects.  If a game is heavy enough on post-processing effects, this could be a decent balance between the GPUs.  Or if a game is 2/3 main rendering and 1/3 post-processing, a stronger GPU could handle one while a weaker one handles the other.  This would also allow a lot of stuff done by driver-magic to work even if only one GPU supports it.

However, because you'd effectively have different GPUs working on different frames at the same time, you're going to increase the time it takes to render a frame slightly.  Like CrossFire/SLI setups, you'd increase your frame rates, but your experience wouldn't be as good as a single faster card that can provide the higher frame rate all by itself.  It would mostly avoid the frame timing problems of CrossFire and SLI, though.
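
Some made-up numbers to illustrate the frame-time trade-off:

```c
/* Pipelining the main pass on one GPU and post-processing on the other
 * raises throughput to the slower of the two stages, but each frame still
 * pays for both stages (plus the framebuffer copy) before it reaches the
 * screen.  The millisecond figures are invented for the example. */
#include <stdio.h>

int main(void)
{
    const double main_pass_ms = 10.0;  /* hypothetical main render time       */
    const double post_ms      = 5.0;   /* hypothetical post-processing time   */
    const double copy_ms      = 1.5;   /* hypothetical PCIe framebuffer copy  */

    double single_gpu_frame = main_pass_ms + post_ms;
    double dual_throughput  = (main_pass_ms > post_ms) ? main_pass_ms : post_ms;
    double dual_latency     = main_pass_ms + copy_ms + post_ms;

    printf("Single GPU: %.1f ms/frame (%.0f fps), latency %.1f ms\n",
           single_gpu_frame, 1000.0 / single_gpu_frame, single_gpu_frame);
    printf("Two GPUs:   %.1f ms/frame (%.0f fps), latency %.1f ms\n",
           dual_throughput, 1000.0 / dual_throughput, dual_latency);
    return 0;
}
```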

It would also require sending the entire frame buffer from one GPU to the other.  Whether the depth buffer would be necessary depends on what you're doing with post-processing.

So game designers are soon going to have a lot of options for how to take advantage of multiple GPUs.  Which do I expect games to use?  For the most part, none of them.  Few games offer graphical features that most users will never be able to take advantage of.  And the optimal approach for a multi-GPU rig with two comparable GPUs is very different from the optimal approach where one GPU is much stronger than the other.

Comments

  • 13lake Member Uncommon Posts: 719
    The biggest boon out of DX12 and consequently OpenGL next is gonna be the ability to use all the video ram from all the crossfire/sli connected graphic cards in a computer.
  • Ridelynn Member Epic Posts: 7,383

    I would also be very surprised if a certain Green company doesn't do something to explicitly disable or degrade multi-vendor GPU support.

  • Hrimnir Member Rare Posts: 2,415

    The part of the article that worries me significantly is this:

    "There is a catch, however. Lots of the optimization work for the spreading of workloads is left to the developers – the game studios. The same went for older APIs, though, and DirectX 12 is intended to be much friendlier. For advanced uses it may be a bit tricky, but according to the source, implementing the SFR should be a relatively simple and painless process for most developers."

     

    MS has already stated that, because the complexity of coding for DX12 is so much higher than DX11.x and prior (and, especially because of consoles, many game designers/coders are well versed in DX9 and nothing else), they're banking on the "talented" people being the ones coding game engines, say "Unreal Engine 5" or some hypothetical like that, while the "less talented" people generally employed by game studios won't be able to muck things up with half-assed code and will instead just utilize a premade engine built by the "talented" people.

     

    Overall I am very excited for DX12, especially since it's not proprietary (yes, I know I've bitched about mantle and I don't mind gsync, but that's a little different).

     

     

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Quizzical Member Legendary Posts: 25,351
    Originally posted by 13lake
    The biggest boon out of DX12 and consequently OpenGL next is gonna be the ability to use all the video ram from all the crossfire/sli connected graphic cards in a computer.

    You can already use it all today.  The question is, what can you use it for?  If one GPU is trying to fetch textures, vertex data, or anything else that takes up most of the space in video memory from the other card's video memory, performance is going to suffer severely.  PCI Express just doesn't have the bandwidth to do that intelligently.  It will perform about as well as going to main system memory, and for the same reasons.
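
    Some ballpark figures (rough peaks, not measurements) behind that:

    ```c
    /* PCI Express 3.0 x16 moves roughly 16 GB/s in each direction, while a
     * 2015-era discrete card's local GDDR5 is on the order of 200+ GB/s. */
    #include <stdio.h>

    int main(void)
    {
        const double pcie3_x16_gbs   = 16.0;  /* approx. peak, one direction           */
        const double gddr5_local_gbs = 224.0; /* e.g. a 256-bit bus at 7 GHz effective */

        printf("Fetching over PCIe is roughly %.0fx slower than local video memory\n",
               gddr5_local_gbs / pcie3_x16_gbs);
        return 0;
    }
    ```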

  • Urobulus Member Uncommon Posts: 29
    Originally posted by Hrimnir
    Originally posted by Ridelynn

    I would also be very surprised if a certain Green company doesn't do something to explicitly disable or degrade multi-vendor GPU support from occuring.

    [mod edit]

     

    I've been buying ONLY nVidia cards for the past 15 years, and Ride is right... nVidia has been VERY greedy recently (just check the price of their GPUs over the past 2-3 years) and has had some very shady practices...

     

    So yeah point for Ridelynn here sorry...

     

    If it wasn't for their shitty driver I would have thrown my support behind ATi a long time ago just to be able to give the finger to nVidia...

  • Quizzical Member Legendary Posts: 25,351
    Originally posted by Hrimnir
    Originally posted by Ridelynn

    I would also be very surprised if a certain Green company doesn't do something to explicitly disable or degrade multi-vendor GPU support from occuring.

    [mod edit]

    Multi-vendor GPU support is available in OpenCL today.  Indeed, that's a major selling point of the API:  you write your GPU code once, then you take advantage of everything the system has to offer:  discrete GPUs, multiple CPU sockets, etc.  (The point of OpenCL on a CPU is largely to exploit SSE and AVX style parallel instructions.)  If Nvidia has sabotaged it somehow, no one has noticed.

    With PhysX, you had Nvidia software looking for Nvidia video cards.  Not so with OpenGL or DirectX.  If DirectX 12 says you have to do such and such to support the standard and Nvidia doesn't do it, then AMD would support DirectX 12 and Nvidia wouldn't.  Nvidia could justifiably claim that DirectX 11.1 and 11.2 didn't matter, but not so with DirectX 12.

  • Ridelynn Member Epic Posts: 7,383

    Not to mention that OpenCL's biggest "competitor" is CUDA.

  • Ridelynn Member Epic Posts: 7,383

     


    Originally posted by Hrimnir

    Originally posted by Ridelynn I would also be very surprised if a certain Green company doesn't do something to explicitly disable or degrade multi-vendor GPU support from occuring.
    [mod edit]

     

    You have to admit, when it comes to "proprietary" no one does it like nVidia.

    Let me count the ways:

    CUDA
    GSync
    PhysX
    TSXX
    MFAA
    Shield (to a large extent)
    VXGI
    ...and that's just what I can come up with off the top of my head. None of that plays well, if at all, with others.

    I'm not trying to drum up support for AMD, and I'm not trying to say that no company has a right to do anything proprietary, or that every tech has to be "open source".

    I'm just saying that nVidia has a very poor history of playing well in other people's sandboxes. They tend to support their own API/Tech/etc, they defend and develop it until well past a reasonable alternative is available, they very rarely license it out, and I question how good that is for nVidia.

  • Hrimnir Member Rare Posts: 2,415
    Originally posted by Quizzical
    Originally posted by Hrimnir
    Originally posted by Ridelynn

    I would also be very surprised if a certain Green company doesn't do something to explicitly disable or degrade multi-vendor GPU support from occuring.

    Seriously dude?  I think i have an ATi hat in a closet somewhere from way back when, want me to send it to you so you can fanboi it out even harder?

    Multi-vendor GPU support is available in OpenCL today.  Indeed, that's a major selling point of the API:  you write your GPU code once, then you take advantage of everything the system has to offer:  discrete GPUs, multiple CPU sockets, etc.  (The point of OpenCL on a CPU is largely to exploit SSE and AVX style parallel instructions.)  If Nvidia has sabotaged it somehow, no one has noticed.

    With PhysX, you had Nvidia software looking for Nvidia video cards.  Not so with OpenGL or DirectX.  If DirectX 12 says you have to do such and such to support the standard and Nvidia doesn't do it, then AMD would support DirectX 12 and Nvidia wouldn't.  Nvidia could justifiably claim that DirectX 11.1 and 11.2 didn't matter, but not so with DirectX 12.

    Except, just like with OpenGL, historically not a lot of companies include support for it in their engines.

    As for not supporting DX12, yes hypothetically nvidia could refuse to do that, but why?

    Every prominent game and game engine has historically been based primarily on DirectX; it would be about the stupidest business decision they could ever make, period.

    It's not the same as gsync, because that's an accessory you utilize in addition to your nvidia card.  It's not like they said you can ONLY use your nvidia card with ONLY a gsync monitor; you can still use it with any normal monitor, and you can *also* use it for gsync, so it's a win-win for them on that end.

    Not enabling the hypothetical "dual GPU from different makers" option only hurts them.  In that scenario they've still sold someone at least 1 card, so what benefit is there for them to disable it other than to be spiteful, which is never good business.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Hrimnir Member Rare Posts: 2,415
    Originally posted by Ridelynn

     


    Originally posted by Hrimnir

    Originally posted by Ridelynn I would also be very surprised if a certain Green company doesn't do something to explicitly disable or degrade multi-vendor GPU support from occuring.
    Seriously dude?  I think i have an ATi hat in a closet somewhere from way back when, want me to send it to you so you can fanboi it out even harder?

     

    You have to admit, when it comes to "proprietary" no one does it like nVidia.

    Let me count the ways:

    CUDA
    GSync
    PhysX
    TSXX
    MFAA
    Shield (to a large extent)
    VXGI
    ...and that's just what I can come up with off the top of my head. None of that plays well, if at all, with others.

    I'm not trying to drum up support for AMD, and I'm not trying to say that no company has a right to do anything proprietary, or that every tech has to be "open source".

    I'm just saying that nVidia has a very poor history of playing well in other people's sandboxes. They tend to support their own API/Tech/etc, they defend and develop it until well past a reasonable alternative is available, they very rarely license it out, and I question how good that is for nVidia.

    Bah, CUDA is not gaming related and is/was actually a BOON for researchers (I have friends who use it for all manner of things in their jobs).  So that's something that has only been beneficial and wasn't created in some way to screw over ATi.

    Gsync was also, again, developed before adaptive vsync, so it was just them innovating and finding ways to offer improvements to the experience for their customers, and why shouldn't it be proprietary?  They're not in the business of helping their competitor; this would be no different than BMW developing a proprietary tire with Michelin to come on their cars, or something like that.

    PhysX, from memory, was not an nvidia thing; nvidia just implemented it in their drivers.  I remember actual PhysX "cards" coming out well before nvidia implemented it in their drivers.

    PhysX is a proprietary realtime physics engine middleware SDK. PhysX was authored at NovodeX, an ETH Zurich spin-off. In 2004 NovodeX was acquired by Ageia, and in February 2008 Ageia was acquired by Nvidia.[1]

    So, that goes to show, this was just more "smart" business practice: they saw an opportunity to purchase a technology that might pan out to be a benefit to their customers (in this case it wasn't).  But again, another example of them NOT doing something specifically for the purpose of hurting ATi.

    TSXX?  I don't even know what that is, can't find anything on google or related to nvidia, so pls elaborate here?

    MFAA - ATi has their own proprietary AA also, can't remember the name, but that's a wash.

    Shield - ATi has no equivalent or, as far as I know, intentions to create an equivalent, so why does this matter?  IMO this is just nvidia being smart again and offering a product that allows their customers to utilize their existing nvidia product; once again, it's not like it was developed in response to something ATi did or planned on doing.

    VXGI I'm not familiar with, so I won't comment there.

     

    Overall, I'm not really seeing the argument here.  With the possible exception of GSYNC, nothing they've done has smacked of anything other than intelligent business practices to help leverage their own products.  Companies have been doing this kind of stuff for ages.  (Google and Apple, for example, do this kind of stuff constantly, and they're some of the most beloved companies in the world, which boggles my mind, but that's a discussion for another day.)

     

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • 13lake Member Uncommon Posts: 719

    All the underhanded stuff and disrespecting their customers is obviously working in their favour: Nvidia GPU market share has risen to 76%, up from 64.9% last year.

    And that's after their CEO issued a formal letter about the 970 fiasco and didn't even apologize.  He just used the old Silicon Valley catchphrase:

    "It's not a bug, it's a feature ! "

    http://www.guru3d.com/news-story/nvidia-ceo-blogs-about-gtx-970.html

    For the record, I admire the spin stories from him; they're usually perfect, and exactly what's needed at a given moment.  But this time I'm disappointed: I can think of a few ways that could have been worded differently to completely step over the fiasco, push it under a rug, and spin the story to excite cheers in the comments rather than comments like this:

    "There was no miscommunication between his wallet and the retailer's listing though."

     

    Oh, and I heard the new Titan Z is gonna go for $4,500 this time around; $3,000 is for plebs :)

  • Ridelynn Member Epic Posts: 7,383

    I don't think anyone proposed that nVidia wouldn't support DX12.

    I did offer the opinion that nVidia may not willingly support multi-vendor GPUs (possibly artificially disabling it at the driver level).

    nVidia already artificially disables SLI unless the motherboard manufacturer pays a licensing fee, and nVidia has more strict hardware requirements for allowing SLI to be licensed than AMD does.

    Those two items alone are good reason to question nVidia's willingness to support/condone/allow other-brand GPUs to play well with theirs, either.  And I wouldn't use the current OpenCL example as proof that they will, necessarily.  OpenCL right now is extremely limited in the scope of its use; something like DX12 support for this would not be, and would gather a whole lot more attention.

    As far as CUDA being a good thing or not for gamers - I didn't say it was good or bad.  It's just proprietary, along with everything else on that list.  If you want to use CUDA to develop, you're doing it on nVidia hardware.  Just another example of nVidia's closed-door approach.  I'm also not saying that is good or bad, just the way it is.

  • Leon1e Member Uncommon Posts: 791

    I used to have 2 GPUs 2 years ago but ditched them in favor of a single, more powerful GPU, because 2x GPU makes absolutely zero sense right now unless the game has some hardcore SFR renderer implementation (can't think of one right now), and it always seemed like too much of a waste.  And yeah, unless you are some multi-monitor UltraHD junkie, but I'm talking about average Full HD or QHD usage.

    Fundamental changes like this one could make me reconsider having more GPUs. Saw the article earlier last night myself and it got me excited about Dx 12 even more. 

    Thanks for the thorough explanation @Quiz, hopefully it sheds more light for the less tech-savvy people.  10/10 Great post!

  • SupaAPE Member Posts: 100

    Holy crap !

     

    I read the whole thread and missed out on the crucial point "The article claims that DirectX 12 will allow games to use both AMD and Nvidia cards together to share the rendering duties."

     

    I'm frothing at the mouth TBH.  The problem is I just have this feeling that this is too GOOD to be true.

     

    I mean, if you ever heard of or remember the hybrid PhysX solution for ATi cards, you'll remember that nvidia eventually broke that and stopped it working in newer drivers, IIRC.  Just saying that, judging from their past behavior, this seems surreal.  If it does happen, it'll be something I never imagined I'd see.

     

    Edit: Although I think this could work and makes sense from a business perspective. If people are allowed to mix nvidia & Ati, get good performance and possibly the benefits of company specific tech, I see this becoming a thing and every guy that's into hardware  doing it. You will see it all over forums and it will spread like wildfire. Got my fingers crossed :D

  • Leon1e Member Uncommon Posts: 791
    Originally posted by SupaAPE

    I mean, if you ever heard of or remember the hybrid PhysX solution for ATi cards, you'll remember that nvidia eventually broke that and stopped it working in newer drivers, IIRC.  Just saying that, judging from their past behavior, this seems surreal.  If it does happen, it'll be something I never imagined I'd see.

    If nVidia is to implement Dx12 fully, they won't have the option to say no. The developers will. 

    I mean sure, they probably have the option to gimp it somehow, but that would lead to a faulty Dx 12 driver, which, I hope, would bring their sales down, because the consumer is relatively smart if all the reviewers are raising red flags.

     

    Then again, some of you still have a GTX 970 and think you have a full 4GB VRAM GPU, but that is a lesson for another thread :D

  • Classicstar Member Uncommon Posts: 2,697

    Soon I'll replace my 290x with a 390x; I don't need xfire.

    There's no game on the market that a 290x can't already handle properly (I own 2x MSI 290x; one lies gathering dust because one card can handle it all).

    Hope to build full AMD system RYZEN/VEGA/AM4!!!

    MB:Asus V De Luxe z77
    CPU:Intell Icore7 3770k
    GPU: AMD Fury X(waiting for BIG VEGA 10 or 11 HBM2?(bit unclear now))
    MEMORY:Corsair PLAT.DDR3 1866MHZ 16GB
    PSU:Corsair AX1200i
    OS:Windows 10 64bit

  • Gravarg Member Uncommon Posts: 3,424
    I've had ATI, I've had Nvidia...they all do the same stuff really.  I've never seen much difference graphics-wise.  They look the same to me.  The one thing that I like about Nvidia over ATI is when it comes to drivers and support.  My Nvidia computers always auto-update and I only have to "ok" the install.  With ATI it never updated and I had to go to their website to update it.  The update size for ATI seemed bigger as well.  As far as performance and dependability they're about the same.  I've only ever had one gfx card go out on me.  It was a 10+ year old Geforce 4 lol.  By then it wasn't even my computer anymore, I had already gone through 5 computers lol.