
Initial Mantle thoughts are in

Ridelynn Member EpicPosts: 7,383

http://hardocp.com/article/2014/02/03/amd_mantle_performance_preview_in_battlefield_4/1

Now, this is just one game, on beta drivers, and this review only tests it on one system. But the forums there have a lot of users who have informally tested it, and the article references those results to some degree.

Basically, Mantle doesn't magically increase your GPU performance. It does help alleviate CPU bottlenecks.

High-end systems with very fast CPUs typically aren't very CPU-bottlenecked in the first place - so they won't see a lot (and on HardOCP's test rig they didn't see a huge gain) - although the ~15% they did see is impressive in and of itself.

Where you really see CPU bottlenecks is in lower-end CPU systems trying to push higher resolutions - such as APUs, and particularly APUs trying to drive 1080p, like in the consoles. PCs running similar hardware configurations (low-end CPUs and APUs) were reporting upwards of 55% gains in framerates.
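
To put rough numbers on that, here's a toy frame-time model (all figures invented purely for illustration, not taken from the article) of why cutting per-draw-call CPU overhead, which is what a thinner API like Mantle is claimed to do, barely moves a GPU-bound rig but transforms a CPU-bound one:

```cpp
// Toy frame-time model: a frame takes as long as the slower of CPU and GPU.
// All costs below are assumptions for illustration only.
#include <algorithm>
#include <cstdio>

double frame_ms(double per_draw_us, int draws, double other_cpu_ms, double gpu_ms) {
    double cpu_ms = other_cpu_ms + draws * per_draw_us / 1000.0;  // driver + game CPU work
    return std::max(cpu_ms, gpu_ms);                              // slower side sets the pace
}

int main() {
    const int draws = 5000;  // assumed draw calls per frame

    // Fast CPU + fast GPU: barely CPU-bound, so cheaper draw calls gain little.
    double fast_old = frame_ms(1.5, draws, 4.0, 10.0);   // ~11.5 ms
    double fast_new = frame_ms(0.4, draws, 4.0, 10.0);   // ~10.0 ms, now GPU-bound

    // Slow CPU (think APU at 1080p): heavily CPU-bound, so the same cut gains a lot.
    double slow_old = frame_ms(6.0, draws, 10.0, 25.0);  // ~40 ms
    double slow_new = frame_ms(1.5, draws, 10.0, 25.0);  // ~25 ms, finally GPU-bound

    printf("fast system: %.1f -> %.1f ms (~%.0f%% more fps)\n",
           fast_old, fast_new, 100.0 * (fast_old / fast_new - 1.0));
    printf("slow system: %.1f -> %.1f ms (~%.0f%% more fps)\n",
           slow_old, slow_new, 100.0 * (slow_old / slow_new - 1.0));
}
```

With those made-up numbers the fast rig picks up roughly 15% and the APU-class rig roughly 60%, which is the same shape as the informal reports, even if the real costs differ.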

Sounds like a home run for AMD, given that it targets their bread and butter hardware. Not exactly a home run for most PC gaming enthusiasts, but even 10-15% is equivalent to bumping up a hardware tier (or a new generation).

Comments

  • TheLizardbones Member CommonPosts: 10,910

    Too bad it isn't something that can get compiled into the driver, so any game using AMD's hardware gets a boost.  If it does offer significant performance boosts on the APU side of things, I can see a lot of console developers using it though.  Once they are using it for the consoles, there's no reason to not include it on the PC side of things, so that's cool too.  I think it would be awesome to be able to build a $400 to $600 game system using APUs and get really nice frame rates and graphics.  PC gaming is the one sector of the PC market that's actually growing, and finding a way to lower the entry level price and/or increase the entry level performance is only going to help.

     

    I can not remember winning or losing a single debate on the internet.

  • Quizzical Member LegendaryPosts: 25,355
    Originally posted by Ridelynn

    Where you really see CPU bottlenecks is in lower-end CPU systems trying to push higher resolutions.

    You have that backwards.  You're more likely to see a CPU bottleneck at lower monitor resolutions, not higher.  Higher resolutions add a lot of GPU work, but all that they really do CPU side is not let you cull so many things before passing them along to the GPU.  Extra monitor resolution isn't pure GPU load the way that some forms of anti-aliasing are, but it's much heavier on the GPU than the CPU.
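
    As a rough sketch of that split (the per-object and per-pixel costs below are invented for illustration): the CPU's per-frame work scales with how many objects it culls and submits, while the extra work from a higher resolution is mostly per-pixel and lands on the GPU.

```cpp
// Rough sketch: CPU cost scales with object count, GPU cost with pixel count.
// Per-object and per-pixel costs are invented for illustration only.
#include <cstdio>

int main() {
    const int objects = 20000;               // objects to cull and submit each frame
    const double cpu_ns_per_object = 500.0;  // assumed culling + draw-submission cost
    const double gpu_ns_per_pixel  = 4.0;    // assumed shading cost per pixel

    const int res[][2] = { {1280, 720}, {1920, 1080}, {2560, 1440} };
    for (const auto& r : res) {
        double cpu_ms = objects * cpu_ns_per_object / 1e6;            // unchanged by resolution
        double gpu_ms = double(r[0]) * r[1] * gpu_ns_per_pixel / 1e6; // grows with pixel count
        printf("%dx%d: CPU %.1f ms, GPU %.1f ms -> %s-bound\n",
               r[0], r[1], cpu_ms, gpu_ms, cpu_ms > gpu_ms ? "CPU" : "GPU");
    }
}
```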

    Of course you can improve performance by optimizing everything for a particular architecture rather than writing more generic code that runs on everything.  You can always do that.  But there's no real programming reason to do it for one architecture while ignoring all others, and you don't do it for all architectures because it's a nightmare to try to debug and maintain so many independent code paths.

  • ghstwolf Member Posts: 386
    Originally posted by Quizzical

    Of course you can improve performance by optimizing everything for a particular architecture rather than writing more generic code that runs on everything.  You can always do that.  But there's no real programming reason to do it for one architecture while ignoring all others, and you don't do it for all architectures because it's a nightmare to try to debug and maintain so many independent code paths.

    Usually I'd agree, but that would ignore the console influence.

    With so many games designed console-first with a PC port after, it wouldn't make much sense to strip Mantle out. This is especially true since programming for the two consoles (as I understand it) is already so close to being PC-ready.

  • Ridelynn Member EpicPosts: 7,383

    Well, Microsoft has said they won't support Mantle on the XBone. I don't know that they will stick to that - or they may, at a minimum, build similar capability into DirectX - if Mantle turns out to offer performance improvements significant enough to create a huge disparity with the PS4.

  • Ridelynn Member EpicPosts: 7,383


    Originally posted by Quizzical
    Originally posted by Ridelynn Where you really see CPU bottlenecks is in lower-end CPU systems trying to push higher resolutions.
    You have that backwards.  You're more likely to see a CPU bottleneck at lower monitor resolutions, not higher.  Higher resolutions add a lot of GPU work, but all that they really do CPU side is not let you cull so many things before passing them along to the GPU.  Extra monitor resolution isn't pure GPU load the way that some forms of anti-aliasing are, but it's much heavier on the GPU than the CPU.

    No, I stated what I intended to state.

    CPU bottlenecking is easier to spot at lower resolutions, I agree; but you yourself say that extra resolution adds to the CPU burden. It may be disproportionate, but if you have sufficient GPU capability in the first place and you're adding more burden to an already over-burdened CPU, then you've exacerbated the problem.

    And that's what Mantle is supposed to help alleviate - CPU burden - by shifting even more work to the GPU and making that disproportionality even larger, on the basis that the GPU is still underutilized because the CPU is the major bottleneck.

  • TheLizardbones Member CommonPosts: 10,910
    Originally posted by Ridelynn

    Well, Microsoft has said they won't support Mantle on the XBone. I don't know that they will stick to that - or they may, at a minimum, build similar capability into DirectX - if Mantle turns out to offer performance improvements significant enough to create a huge disparity with the PS4.

     

    Sony would probably have to get behind Mantle and really push it.  If developers can get a game running on the XB1 at acceptable resolutions and frame rates, then they'll be able to do it on the PS4 without Mantle.  Mantle isn't going to give them more features, just faster processing, which isn't anything to sneeze at, but what they already have is "enough" to do the job.

     

    I could definitely see PS4 exclusives using it though.  They'll already have more headroom in the GPU, so more headroom overall is something they would want to take advantage of to really push what the PS4 can do and show it off.

     

    I can not remember winning or losing a single debate on the internet.

  • Ridelynn Member EpicPosts: 7,383


    Originally posted by cura
    I'm no longer into programming and am not following all this stuff, but don't consoles already have this low-level access to the hardware that Mantle offers?

    The latest consoles (PS4/XBone) have hardware that can take advantage of it, yes.

    Mantle is the software layer specific to that hardware that lets developers take advantage of it.

  • Wizardry Member LegendaryPosts: 19,332

    DX11 works for all cards; Mantle is an API specific to AMD, and it sounds like it's specific to ATI cards or to drivers enhanced for a particular game. What I have seen over the years is that drivers get enhanced only for the very popular games, with no support for other games.

    I remember when ATI started up, they couldn't make a driver for the life of them, and now they are starting all over with a new API. I expect problems.

    I have jumped back and forth for years, and I have realized that Intel-based is always better than AMD-based; the only thing you get out of AMD is cheaper prices.

    I heard Microsoft is no longer advancing DirectX, so maybe AMD saw it as a good time to jump in.

    Personally, I would not go out of my way to buy this product/chip; I plan on sticking to all things Intel for a long time now.

    I fail to see how this can do anything at all to help lower-end machines; you still need to communicate between the CPU and GPU, and usually devs are weak enough that they utilize too much CPU and not enough GPU. If your overall system does not have the bandwidth, this API is not going to magically create it, so you're still bottlenecked.

    It also says it is meant to help lower-end machines? Well, it ONLY works on the latest hardware, so I hardly see that as being made for lower-end machines.

    Never forget 3 mile Island and never trust a government official or company spokesman.

  • Quizzical Member LegendaryPosts: 25,355
    Originally posted by ghstwolf
    Originally posted by Quizzical

    Of course you can improve performance by optimizing everything for a particular architecture rather than writing more generic code that runs on everything.  You can always do that.  But there's no real programming reason to do it for one architecture while ignoring all others, and you don't do it for all architectures because it's a nightmare to try to debug and maintain so many independent code paths.

    Usually I'd agree, but that would ignore the console influence.

    With so many games designed console-first with a PC port after, it wouldn't make much sense to strip Mantle out. This is especially true since programming for the two consoles (as I understand it) is already so close to being PC-ready.

    Mantle is not available for consoles.  Consoles have their own APIs.

  • Ridelynn Member EpicPosts: 7,383

    Mantle is specific to AMD GPUs that use the GCN architecture (basically the 7770 series and later, and Jaguar/Kaveri and later APUs).

    Microsoft is still advancing DirectX - I don't know where you heard otherwise.

    In an APU, the CPU and GPU are on the same die, so there is a good deal of communication bandwidth available there. Even in a typical PCIe 2.0/3.0 setup, there is sufficient bandwidth at single-monitor resolutions that it's not a bottleneck; your biggest bottleneck will be either GPU or CPU performance. Mantle helps in situations where you are CPU-bottlenecked by helping to offload work to the GPU. This won't apply to most enthusiast-class gaming PCs - they tend to be GPU-bottlenecked or fairly well balanced. But in budget PCs this is a big deal, and that applies to the current generation of consoles (should they choose to utilize Mantle).


    Originally posted by Wizardry
    DX11 works for all cards; Mantle is an API specific to AMD, and it sounds like it's specific to ATI cards or to drivers enhanced for a particular game. What I have seen over the years is that drivers get enhanced only for the very popular games, with no support for other games. I remember when ATI started up, they couldn't make a driver for the life of them, and now they are starting all over with a new API. I expect problems. I have jumped back and forth for years, and I have realized that Intel-based is always better than AMD-based; the only thing you get out of AMD is cheaper prices. I heard Microsoft is no longer advancing DirectX, so maybe AMD saw it as a good time to jump in. I fail to see how this can do anything at all to help lower-end machines; you still need to communicate between the CPU and GPU, and usually devs are weak enough that they utilize too much CPU and not enough GPU. If your overall system does not have the bandwidth, this API is not going to magically create it, so you're still bottlenecked.

  • Quizzical Member LegendaryPosts: 25,355
    Originally posted by Ridelynn

    In an APU, the CPU and GPU are on the same die, so there is a good deal of communication bandwidth available there. Even in a typical PCIe 2.0/3.0 setup, there is sufficient bandwidth at single-monitor resolutions that it's not a bottleneck; your biggest bottleneck will be either GPU or CPU performance. Mantle helps in situations where you are CPU-bottlenecked by helping to offload work to the GPU. This won't apply to most enthusiast-class gaming PCs - they tend to be GPU-bottlenecked or fairly well balanced. But in budget PCs this is a big deal, and that applies to the current generation of consoles (should they choose to utilize Mantle).

    How much PCI Express bandwidth you need depends very strongly on what you're doing.  If you're doing a lot of per-frame physics computations and sending vertex data to the video card to draw it immediately rather than buffering and reusing it, that's going to take a lot of PCI Express bandwidth.  If you're constantly having to load new textures, that's going to take a lot of PCI Express bandwidth.  If your big loads only happen at loading screens and otherwise you're just passing along uniforms and commands to switch textures and so forth, then PCI Express 2.0 x1 bandwidth might be enough for you.

    But the other thing about PCI Express bandwidth as a bottleneck is that use of it will be very bursty.  If 98% of the time, you're not using much bandwidth, but the other 2% of the time, you're passing textures and such to the video card, PCI Express bandwidth can easily be a bottleneck in that 2% of the time.  Furthermore, that 2% of the time can easily be clustered in a relatively small fraction of your frames, so that a number of frames take several milliseconds longer than they otherwise would.  More PCI Express bandwidth might mean that those frames that involve a lot of data loading take an extra 3 ms instead of an extra 5 ms, resulting in a smoother frame rate.
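
    Some hedged back-of-the-envelope arithmetic on both points (the sizes are assumptions, not measurements from any real game): streaming geometry every frame eats a big slice of the bus, and a one-off texture burst adds milliseconds to whichever frame it lands on, with more bandwidth shrinking the spike.

```cpp
// Back-of-the-envelope PCI Express arithmetic; all sizes are assumptions.
#include <cstdio>

int main() {
    const double fps = 60.0;

    // Point 1: re-sending vertex data every frame vs. buffering it once.
    const double verts = 2e6, bytes_per_vert = 32.0;
    double stream_gbs = verts * bytes_per_vert * fps / 1e9;
    printf("streaming 2M verts/frame: %.2f GB/s sustained\n", stream_gbs);
    printf("buffered once, ~64 KB of uniforms/commands per frame: %.4f GB/s\n",
           64.0 * 1024.0 * fps / 1e9);

    // Point 2: a bursty 48 MB texture upload hitting a single frame.
    const double buses_gbs[] = { 8.0, 16.0 };  // roughly PCIe 2.0 x16 vs 3.0 x16
    for (double bus_gbs : buses_gbs)
        printf("%4.1f GB/s bus: that frame takes ~%.1f ms longer\n",
               bus_gbs, 48.0 / 1024.0 / bus_gbs * 1000.0);
}
```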

    But the bigger deal of APUs and avoiding PCI Express bottlenecks is not that it can increase performance a bit or make frame rates a little smoother in existing games.  Rather, it's that you can do things that you wouldn't dare try with a discrete video card because it would cause a PCI Express bottleneck.  PCs are a long way off from trying this, but we'll probably see some PS4 games do some heavy physics effects that they won't dare try to port to PC.  Unified memory means that you can take a GPU-friendly chunk of code that is big enough to overwhelm a CPU but needs to be done before the normal graphics pipeline, have the GPU do it, then have the CPU do something with the results, all without having to worry about passing data back and forth between them.
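
    Here's a minimal sketch of that unified-memory idea, written against OpenCL rather than Mantle or any console API (the buffer size and kernel are arbitrary, and whether the mapping is truly zero-copy depends on the driver, though on an APU it can be): the CPU fills a buffer, the GPU crunches it in place, and the CPU reads the result without a PCI Express round trip.

```cpp
// Minimal OpenCL sketch of CPU/GPU sharing one buffer on unified memory.
// Error handling is omitted for brevity; this is an illustration, not a recipe.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_int err = 0;
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    const size_t n = 1 << 20;  // 1M floats, arbitrary size
    // CL_MEM_ALLOC_HOST_PTR asks the runtime for host-visible memory; on an APU
    // with unified memory the GPU can work on it in place, with no PCIe copy.
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                n * sizeof(float), nullptr, &err);

    const char* src =
        "__kernel void scale(__global float* d) {"
        "    size_t i = get_global_id(0);"
        "    d[i] = d[i] * 2.0f;"
        "}";
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, &err);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "scale", &err);

    // CPU writes the data through a mapping (a pointer, not a transfer)...
    float* p = (float*)clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_WRITE, 0,
                                          n * sizeof(float), 0, nullptr, nullptr, &err);
    for (size_t i = 0; i < n; ++i) p[i] = (float)i;
    clEnqueueUnmapMemObject(q, buf, p, 0, nullptr, nullptr);

    // ...the GPU runs the kernel on that same memory...
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);

    // ...and the CPU reads the result back, again without a bus round trip.
    p = (float*)clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_READ, 0,
                                   n * sizeof(float), 0, nullptr, nullptr, &err);
    printf("d[10] = %f (expected 20.0)\n", p[10]);
    clEnqueueUnmapMemObject(q, buf, p, 0, nullptr, nullptr);
    clFinish(q);
}
```

    On a discrete card the same pattern would force a copy across the bus in each direction every frame, which is exactly the kind of thing Quizzical is saying you wouldn't dare do.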

  • drbaltazar Member UncommonPosts: 7,856
    Math 101: I have an i5 2500K. At worst it's set at 7.9 microseconds, and one interrupt per physical socket (never mind this last part). So how much data do you think you'd need to max out that capability? Now imagine this: that is the capability shown off synthetically. Is it the real capability of the CPU? Hell no. Remember, you have only one interrupt, so all the buffer, cache, and delay settings were set at those values because your OS is set at one interrupt. Imagine if you were to set interrupts (MSI/MSI-X) to one per physical core instead of the default one per physical socket (consumer only, sorry industry): would the delay, cache, and buffer values still be accurate? They probably would not even be needed. Get my drift? Pretty much all the issues GPU makers find aren't at their end; the issues are at the OS end. Why do you think Intel and its allies released the invariant TSC? MS still had no fix that wouldn't crash games or other things. Now? Nothing in the OS can keep up with Intel's solution. Don't believe me? It's easy to check: copy something from one memory bank to a ramdisk and use the MS counter. You'll notice it looks like it hung. It didn't; Intel left it (the OS) in the dust, and this is what the problem is. It affects everything; it's not just the counter, it's all the OS's various delays, caches, buffers, etc. Sorry if I missed some stuff. It's basically bufferbloat, but in the computer instead of the internet (highly likely related, I bet).
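
    For what it's worth, here is one way to try the timing check described above on Windows with QueryPerformanceCounter (the 256 MB size is arbitrary, and this copies RAM to RAM rather than to a literal ramdisk):

```cpp
// Time a large memory copy with the Windows high-resolution counter.
// Buffer size is arbitrary; results depend heavily on the machine.
#include <windows.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);          // counter ticks per second

    std::vector<char> src(256u << 20, 1), dst(256u << 20);

    QueryPerformanceCounter(&t0);
    memcpy(dst.data(), src.data(), src.size());
    QueryPerformanceCounter(&t1);

    double ms = 1000.0 * double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
    printf("copied %zu MB in %.3f ms (%.2f GB/s)\n",
           src.size() >> 20, ms, (src.size() / 1e9) / (ms / 1000.0));
}
```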