

DX12

Ridelynn Member EpicPosts: 7,142

http://www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm

Looks like a big benefit when your game can take advantage of it. The interesting part - looks like until we get much, much faster GPUs (or a much, much faster way of feeding them), around 4 cores is still going to be about the optimum core count based on the GPU bottleneck.

Looks like Mantle just edges out DX12 on this particular benchmark, but it's very close - and both are significantly faster than DX11.

So going full circle with that - does it really surprise anyone that we are not seeing a huge push for faster CPUs?
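A quick toy model makes that core-count point concrete. Every number below (per-draw CPU cost, GPU frame time, draw count) is an invented illustration, not benchmark data: frame time is bounded by whichever side finishes last, so once the GPU is the limit, extra cores buy nothing.

```python
# Toy model: frame time is limited by the slower of CPU draw-call
# submission and GPU rendering. Every number here is an invented
# illustration, not measured data.

def frame_time_ms(draw_calls, cpu_us_per_call, cores, gpu_ms):
    # Assume submission parallelizes perfectly across cores, which the
    # new thin APIs permit but DX11 largely does not.
    cpu_ms = draw_calls * cpu_us_per_call / 1000 / cores
    return max(cpu_ms, gpu_ms)

# A Star Swarm-like scene: ~10,000 draws; suppose the GPU needs 25 ms.
for cores in (1, 2, 4, 8):
    t = frame_time_ms(10_000, cpu_us_per_call=10, cores=cores, gpu_ms=25)
    print(f"{cores} cores: {t:.1f} ms/frame")
```

With these made-up numbers the gains flatten right around 4 cores (100 → 50 → 25 → 25 ms), which is the GPU-bottleneck effect described above.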

Comments

  • Kiyoris Member RarePosts: 2,130
    insane performance gains, windows 10 will be awesome
  • Kiyoris Member RarePosts: 2,130
    Originally posted by Ridelynn

    So going full circle with that - does it really surprise anyone that we are not seeing a huge push for faster CPUs?

    Making CPUs faster has stalled for the moment.

    Intel is waiting for IMEC on what to do. IMEC has pushed to 7nm but said that's the end for silicon. They have built chips at sub 7nm with Germanium.

    Intel will bring out more 14nm chips, maybe even 11nm, but then they will have to try Germanium or something else.

    Once IMEC and ASML have decided if Germanium is viable, Intel will start experimenting with Germanium instead of Silicon. David Brunco from Intel was at IMEC a few months ago to see what the Germanium chips can do.

     

    The patents for germanium sub-7nm chip production were filed in 2013 by IMEC Belgium in cooperation with Globalfoundries Taiwan; at least two Intel names appear in the reference list, including David Brunco. http://patents.justia.com/patent/8828826

  • Kiyoris Member RarePosts: 2,130
    Also, while IMEC has made sub-14 nm chips based on silicon, some argue this can't be done at mass scale, so Intel might jump directly to germanium for its next CPU iteration. Going below 14nm with silicon might be too costly.
  • Quizzical Member LegendaryPosts: 22,626

    It's a synthetic corner case, designed to show off what the new API can do well that the old one couldn't.  One could easily have done something analogous showing comparably enormous gains for DirectX 11 over 10, or for DirectX 10 over 9.  It's actually pretty similar to benchmarks that compare PhysX running on an Nvidia GPU versus having stupid code that does the same thing slowly on a CPU running single-threaded and not even using SSE or AVX.  Or, say, this:

    http://www.tomshardware.com/reviews/clarkdale-aes-ni-encryption,2538-7.html

    The slide on the first page pretty much gives it away.  I can believe huge gains in the driver overhead.  But they're also claiming huge gains in the app logic--which neither knows nor cares what API you're using.

    That said, there will be real gains from greatly reducing video driver overhead.  Those gains, incidentally, are also available in OpenGL today via extensions.  No need to wait for Windows 10 or to restrict yourself to Windows 10, or any other sort of Windows, for that matter.
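    (Editor's note: the OpenGL route described here is commonly the "AZDO" extension set, e.g. GL_ARB_multi_draw_indirect, which collapses thousands of per-draw driver calls into a single submission. The sketch below is a toy of that batching idea only; the scene contents and counts are invented, and no actual GL is called.)

```python
# Toy of the batching idea behind extensions like
# GL_ARB_multi_draw_indirect: draws that share GPU state are grouped,
# so the driver is crossed once per batch rather than once per draw.
# Scene contents are invented for illustration; no real GL is called.
from itertools import groupby

# 10,000 draws, each tagged with the state (shader/material) it needs.
scene = [("rock_shader", i) for i in range(6000)] + \
        [("ship_shader", i) for i in range(4000)]

naive_api_calls = len(scene)  # classic path: one driver call per draw

# Sort by state, then issue one multi-draw per contiguous state group.
scene.sort(key=lambda draw: draw[0])
batched_api_calls = sum(
    1 for _state, _draws in groupby(scene, key=lambda draw: draw[0]))

print(naive_api_calls, batched_api_calls)  # prints: 10000 2
```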

  • zaberfangx Member UncommonPosts: 1,796

    I think glNext, which Valve has been working on, may end up with more developer support than DX12, simply because developers can ship their games on Windows and other OSes where DX12 can't go. DX12 may be a little faster, but it could see less support over time once studios factor in the money spent porting their games around.

     

    A lot of indie games came out strong last year, and I'm sure they'll move to glNext if it turns out to be easy to work with. That's really what it will come down to.

     

    But we will see.

     

     

  • Cleffy Member RarePosts: 6,296
    We could see these benefits if developers ever support it. It's 2014, and games are still being designed for DX 9.0c in 32-bit. It's an era where 95% of users can play 64-bit DX11 games, yet developers still choose to build their games on an older, less efficient API.
  • Hrimnir Member RarePosts: 2,414
    Originally posted by Quizzical

    It's a synthetic corner case, designed to show off what the new API can do well that the old one couldn't.  One could easily have done something analogous showing comparably enormous gains for DirectX 11 over 10, or for DirectX 10 over 9.  It's actually pretty similar to benchmarks that compare PhysX running on an Nvidia GPU versus having stupid code that does the same thing slowly on a CPU running single-threaded and not even using SSE or AVX.  Or, say, this:

    http://www.tomshardware.com/reviews/clarkdale-aes-ni-encryption,2538-7.html

    The slide on the first page pretty much gives it away.  I can believe huge gains in the driver overhead.  But they're also claiming huge gains in the app logic--which neither knows nor cares what API you're using.

    That said, there will be real gains from greatly reducing video driver overhead.  Those gains, incidentally, are also available in OpenGL today via extensions.  No need to wait for Windows 10 or to restrict yourself to Windows 10, or any other sort of Windows, for that matter.

    The only problem with OpenGL is the same problem OpenGL has always had.  Nobody develops "real" games for it.  Until developers start embracing it, DX12 is going to be the way to go.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • zaberfangx Member UncommonPosts: 1,796
    Originally posted by Hrimnir
    Originally posted by Quizzical

    It's a synthetic corner case, designed to show off what the new API can do well that the old one couldn't.  One could easily have done something analogous showing comparably enormous gains for DirectX 11 over 10, or for DirectX 10 over 9.  It's actually pretty similar to benchmarks that compare PhysX running on an Nvidia GPU versus having stupid code that does the same thing slowly on a CPU running single-threaded and not even using SSE or AVX.  Or, say, this:

    http://www.tomshardware.com/reviews/clarkdale-aes-ni-encryption,2538-7.html

    The slide on the first page pretty much gives it away.  I can believe huge gains in the driver overhead.  But they're also claiming huge gains in the app logic--which neither knows nor cares what API you're using.

    That said, there will be real gains from greatly reducing video driver overhead.  Those gains, incidentally, are also available in OpenGL today via extensions.  No need to wait for Windows 10 or to restrict yourself to Windows 10, or any other sort of Windows, for that matter.

    The only problem with OpenGL is the same problem OpenGL has always had.  Nobody develops "real" games for it.  Until developers start embracing it, DX12 is going to be the way to go.

    The problem is that it may take a while: DX12 won't see full use until video cards that support it are in more people's computers. Sure, people may see a boost on older video cards, but those don't support everything DX12 offers, and there aren't many options open to developers if they can't add support to their game while their target base still has older cards.

     

    Anyhow, it may be the same with glNext, but we'll see this year. I don't see many games supporting DX12 right off the bat unless it turns out to be easy to work with without too many problems.

     

     

  • Cleffy Member RarePosts: 6,296
    Pretty much all GCN- and Fermi-based cards will support DX12. From what I've heard, DX11.2 cards will jump right into DX12 once it's released.
  • justmemyselfandi Member UncommonPosts: 559
    Originally posted by Cleffy
    Pretty much all GCN- and Fermi-based cards will support DX12. From what I've heard, DX11.2 cards will jump right into DX12 once it's released.

    That's pretty much how I've understood it as well. Anyone with a fairly modern graphics card will be set from the get-go.

  • grndzro Member UncommonPosts: 1,158

    If Mantle is edging ahead of DX12 then Opengl-Next will trash DX12.

    Several of the OGL 4.3 extensions can work in tandem with Mantle in OGL-Next to provide a bigger boost than either OGL or Mantle alone. OGL-N will also be significantly easier to use than OGL, thanks to a lot of deprecated API trimming.

    With OGL-N it's a win/win for both AMD and Nvidia: AMD gets access to the nifty OGL 4.3 extensions and Nvidia gets access to Mantle, all in one easy-to-use cross-platform API.

  • Kiyoris Member RarePosts: 2,130
    Originally posted by grndzro

    If Mantle is edging ahead of DX12 then Opengl-Next will trash DX12.

    Several of the OGL 4.3 extensions can work in tandem with Mantle in OGL-Next to provide a bigger boost than either OGL or Mantle alone. OGL-N will also be significantly easier to use than OGL, thanks to a lot of deprecated API trimming.

    With OGL-N it's a win/win for both AMD and Nvidia: AMD gets access to the nifty OGL 4.3 extensions and Nvidia gets access to Mantle, all in one easy-to-use cross-platform API.

    DirectX is used by 99% of games because

    -You have a ton of incredibly powerful libraries within the Visual Studio environment

    -Xbox

    This isn't going to change any time soon.

    Neither AMD nor Nvidia cares whether you use OpenGL or DirectX; they just want to sell GPUs.

     

  • Nanfoodle Member LegendaryPosts: 9,061
    Tech is reaching a point where it can't advance as things stand. Heat and power are the ceiling we're hitting. Until we find a new way to build faster chips, there isn't much we can do other than brute-force things, like adding more sockets for more CPUs. That, or we need to find a new way to program and store data. Tech giants are already working on how to keep people upgrading when there's really no need to once you hit that ceiling.
  • ThumbtackJ Member UncommonPosts: 669
    Originally posted by Hrimnir
    Originally posted by Quizzical

    It's a synthetic corner case, designed to show off what the new API can do well that the old one couldn't.  One could easily have done something analogous showing comparably enormous gains for DirectX 11 over 10, or for DirectX 10 over 9.  It's actually pretty similar to benchmarks that compare PhysX running on an Nvidia GPU versus having stupid code that does the same thing slowly on a CPU running single-threaded and not even using SSE or AVX.  Or, say, this:

    http://www.tomshardware.com/reviews/clarkdale-aes-ni-encryption,2538-7.html

    The slide on the first page pretty much gives it away.  I can believe huge gains in the driver overhead.  But they're also claiming huge gains in the app logic--which neither knows nor cares what API you're using.

    That said, there will be real gains from greatly reducing video driver overhead.  Those gains, incidentally, are also available in OpenGL today via extensions.  No need to wait for Windows 10 or to restrict yourself to Windows 10, or any other sort of Windows, for that matter.

    Nobody develops "real" games for it.  Until developers start embracing it, DX12 is going to be the way to go.

    It would certainly seem like developers are starting to embrace it. Obviously Valve ported their titles to it (HL2, CSGO, L4D2, Portal 2, TF2, DOTA2), but there are a lot of great titles outside of Valve: Dying Light, Borderlands 2/TPS, War Thunder, Starbound, BoI Rebirth, Don't Starve, Civ5, Civ BE, Grim Fandango Remastered, Talos Principle, Kerbal Space Program, M&B Warband, Killing Floor, ETS2, XCom EU, Natural Selection 2, Wasteland 2, Witcher 2, CK2, Wargame, Metro 2033/LL Redux, Chivalry, etc.

     

    Unless of course by real games you mean the large marketed titles from EA, Ubisoft, and Activision. Then no, that's not going to happen for a while (if ever).

     

    Also, glNext is being presented by Valve at GDC, along with people like Johan Andersson (part of the Frostbite engine team at EA) and Niklas Smedberg (senior engine programmer for Epic Games). So hopefully this means big (and good) things are coming.

  • Ridelynn Member EpicPosts: 7,142


    Originally posted by Quizzical
    It's a synthetic corner case, designed to show off what the new API can do well that the old one couldn't.

    Yup. That's why I said "when it can take advantage of it".

    The real point to note is the comparison to Mantle - we have some real games now using Mantle that aren't written as corner cases, so you can draw some real-world conclusions extrapolating from that, and get a rough idea of what to expect with nVidia hardware.
