

PS4 Neo simulation with Polaris 10 chip - can it play 4k?

Malabooga Member UncommonPosts: 2,977
Well, I've said quite a while ago that the Polaris 10 chip could play games in 4K @ 30 FPS with console settings and console-level optimizations.

Digital Foundry made a test (on PC, so poorly optimized compared to consoles), and it's fairly close to that target. Even if it doesn't reach true 2160p, dynamic resolution will drop to 1800p to keep 30 FPS on target (like current-gen consoles use 900p/720p to hold 1080p).
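
To make the dynamic-resolution idea concrete, here is a minimal sketch of the kind of controller an engine might run; the resolution steps, thresholds, and function names are all hypothetical illustrations, not anything Sony or Digital Foundry has published.

```python
# Hypothetical sketch of a dynamic-resolution controller, not real console
# firmware: drop the render height one step whenever the last frame missed
# the 33.3 ms (30 FPS) budget, and raise it again when there is headroom.

RES_STEPS = [2160, 1800, 1620, 1440]  # render heights, native 4K downward
FRAME_BUDGET_MS = 1000.0 / 30.0       # ~33.3 ms per frame at 30 FPS

def next_resolution(current_height, last_frame_ms):
    """Pick the render height for the next frame from the last frame's cost."""
    i = RES_STEPS.index(current_height)
    if last_frame_ms > FRAME_BUDGET_MS and i < len(RES_STEPS) - 1:
        return RES_STEPS[i + 1]   # over budget: scale down
    if last_frame_ms < FRAME_BUDGET_MS * 0.85 and i > 0:
        return RES_STEPS[i - 1]   # comfortable headroom: scale back up
    return current_height

# A frame that takes 36 ms at 2160p forces a drop to 1800p:
print(next_resolution(2160, 36.0))  # 1800
```

This is exactly the pattern the current-gen 900p/720p fallbacks use, just with the step table shifted up to 4K territory.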



Comments

  • Ridelynn Member EpicPosts: 7,142
    I usually hate clicking videos (I have a shitty ISP and they take forever), but I did watch this one.

    A lot of people will argue that dynamic resolution isn't the same thing as native resolution. Even the current PS4 ~could~ drive a 4K display; that isn't a problem, you would just have a dynamic resolution of ... whatever the game needs to keep >=60/30 FPS.

    That was one of the benefits of the current PS4 over the XBone - it could play most titles in native 1080p without having to resort to dynamic resolution.

    That being said, it totally won't surprise me if they slap "does 4K!!!!" on the box and damn near every title is really at something much less and upscaled.
  • rojoArcueid Member EpicPosts: 10,490
    edited July 2016
    I'm OK with the PS4 Neo playing 4K (even scaled-resolution) games at 30 FPS. I will only upgrade to the Neo if it plays games in 1080p at 60 FPS - all of them. I have zero plans to buy a 4K TV until prices hit the US$300 mark for 40"+ (maybe on a Black Friday).




  • Malabooga Member UncommonPosts: 2,977
    edited July 2016
    Well, it will play all games at 60 FPS in 1080p, even with higher details than the PS4. All games will first be tuned for the PS4 at 30 or 60 FPS in 1080p, then for the Neo with higher details at 1080p/4K.

    Ridelynn said:
    I usually hate clicking videos (I have a shitty ISP and they take forever), but I did watch this one.

    A lot of people will argue that dynamic resolution isn't the same thing as native resolution. Even the current PS4 ~could~ drive a 4K display; that isn't a problem, you would just have a dynamic resolution of ... whatever the game needs to keep >=60/30 FPS.

    That was one of the benefits of the current PS4 over the XBone - it could play most titles in native 1080p without having to resort to dynamic resolution.

    That being said, it totally won't surprise me if they slap "does 4K!!!!" on the box and damn near every title is really at something much less and upscaled.

    Actually, the difference is quite negligible, and most of the games shown are very near 30 FPS at 4K even without console-level optimizations. And I bet you couldn't even tell the difference ;P

  • Ridelynn Member EpicPosts: 7,142
    Hmm..

    Just curious. I know everyone always says "Consoles are more optimized than PCs", and there are a few reasons that usually get bandied about.

    But has anyone actually seen a test that can objectively measure just how much more optimized the consoles are than PC?

    I know it's almost impossible to match the hardware specs exactly, but it would be interesting if we could say, more or less definitively, that a console is about x-y% faster than a PC based on optimization.

    With DX12/Vulkan, I bet that x-y% is a lot smaller than it was in the DX9 days for the previous consoles.
  • Quizzical Member LegendaryPosts: 22,626
    Ridelynn said:
    Hmm..

    Just curious. I know everyone always says "Consoles are more optimized than PCs", and there are a few reasons that usually get bandied about.

    But has anyone actually seen a test that can objectively measure just how much more optimized the consoles are than PC?

    I know it's almost impossible to match the hardware specs exactly, but it would be interesting if we could say, more or less definitively, that a console is about x-y% faster than a PC based on optimization.

    With DX12/Vulkan, I bet that x-y% is a lot smaller than it was in the DX9 days for the previous consoles.
    That's a good question.  I'm skeptical that the optimization difference would be all that large.

    Let's consider where hardware-specific optimizations come from.  Let's suppose that your code needs to run on ten different pieces of hardware.  You're probably going to mostly restrict yourself to using capabilities that all of them have, or rely on tools to automatically use capabilities that only some have while still running correctly on the rest.  On the other hand, if your code only needs to run on one piece of hardware, you can exploit the full capabilities of it without worrying that other hardware can't do the same thing.

    There are a variety of ways that this can happen, including but not limited to:

    1)  instructions available in some hardware but not others
    2)  caches available in some hardware but not others
    3)  larger cache or memory capacities in some hardware than others

    So do consoles have examples of this to exploit as compared to modern PC hardware?

    On the CPU side, the answer to that is pretty much "no", apart from a caveat I'll come back to later.  AMD's Jaguar cores (derived from their Bobcat line) are their small cores that strip out a lot of stuff that their larger cores have.  Yes, they screwed up Bulldozer and its derivatives in some ways, but I'd be very surprised if Jaguar had a bunch of useful instructions not available in modern AMD or Intel desktop CPUs.

    Now, the consoles do have 8 cores, while few desktop PCs do.  But they're eight very weak cores.  Code that runs well on 8 cores of X power will nearly always still run well on 4 cores of 2X power.  It's the other way around that is a problem.  And a Core 2 Quad (at least if you clock it around 3 GHz or so), Phenom II, or Bulldozer quad core has total CPU power about on par with the total CPU power on the PS4 or Xbox One.  Eight Jaguar cores can get that performance while using much less power than the older desktop quad cores, which is why they went that route.  I could see dual core CPUs or having four very weak cores being a problem, but that's not what you get in a modern desktop PC.
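
The cores-versus-clocks point can be sketched with some toy arithmetic; the work units, core counts, and the runtime() helper below are illustrative assumptions, not measurements of any real chip.

```python
# Illustrative numbers only: for a perfectly parallel workload, total
# throughput is cores * per-core speed, so 8 weak cores match 4 cores at
# twice the speed. A serial section (Amdahl's law) breaks that symmetry,
# which is why the reverse mapping (fast-core code onto weak cores) hurts.

def runtime(work, cores, per_core_speed, serial_fraction=0.0):
    """Time to finish 'work' units, with some fraction forced serial."""
    serial = work * serial_fraction / per_core_speed
    parallel = work * (1 - serial_fraction) / (cores * per_core_speed)
    return serial + parallel

work = 80.0  # arbitrary units
print(runtime(work, cores=8, per_core_speed=1.0))  # 10.0
print(runtime(work, cores=4, per_core_speed=2.0))  # 10.0 - same, fully parallel
# With 20% serial work, the 4 fast cores pull ahead:
print(runtime(work, 8, 1.0, 0.2))  # 16 + 8 = 24.0
print(runtime(work, 4, 2.0, 0.2))  # 8 + 8 = 16.0
```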

    So how about the GPU side?  As compared to AMD GCN GPUs for desktops, the answer is again "no".  They're the same architecture, with the same instructions, caches, and so forth.

    As compared to Nvidia GPUs, you could run into trouble in a variety of ways.  There are quite a few instructions that Nvidia Maxwell GPUs have and AMD GCN doesn't and vice versa.  But those aren't heavily used in gaming, or if they are, they have reasonable substitutes.  For example, all modern GPUs can do integer multiplication, but the number of bits of precision in the native instruction varies a lot by architecture.  If some game decided to use a 32-bit rotate instruction a ton, that would be big trouble on Kepler.  But what is a game going to do with rotate?  Try to secretly mine bitcoins in the background while the game is running?
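
As a concrete example of a "reasonable substitute": a 32-bit rotate can be emulated with two shifts and an OR on hardware that lacks a native rotate. The sketch below is in Python for clarity; the real substitution would happen at the ISA/compiler level.

```python
# Sketch: hardware without a native 32-bit rotate instruction can
# substitute two shifts and an OR - slower, but functionally identical.

MASK32 = 0xFFFFFFFF

def rotl32(x, n):
    """Rotate a 32-bit value left by n bits using only shifts and OR."""
    n &= 31
    return ((x << n) | (x >> (32 - n))) & MASK32

print(hex(rotl32(0x80000001, 1)))  # 0x3
```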

    Register and local memory capacities and local memory bandwidth could be a problem on Nvidia GPUs.  So could Nvidia's lack of a global data store that AMD GCN cards have.  But if console games were making a ton of use of this, why don't we see a rash of console ports that are huge, pro-AMD outliers?  Nvidia didn't pick their cache hierarchy out of a hat.  They have a pretty good idea of what games tend to use and tried to build enough for it--and for graphics purposes, at least, mostly succeeded.

    The one thing I see that could make a huge difference is that the Xbox One and PS4 both have the CPU and GPU share the same memory pool.  A console game could make a ton of use out of this, and then porting it to PC could completely choke for lack of PCI Express bandwidth, except on APUs, where it chokes for lack of GPU power.

    So do games bounce data back and forth between the CPU and GPU a ton?  For anything that goes into the graphics pipeline, the answer is pretty much "no".  For non-graphical GPU compute, your mileage may vary, but the answer will still commonly be "no".
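
A back-of-envelope calculation illustrates why bouncing data would hurt on PC; the per-frame payload below is a made-up figure, and the bandwidths (~16 GB/s for PCIe 3.0 x16, ~176 GB/s for the PS4's GDDR5) are rough peak values, not sustained measurements.

```python
# Back-of-envelope only, with assumed peak bandwidth figures: moving data
# between CPU and GPU over PCIe 3.0 x16 versus reading it in place from
# a shared GDDR5 pool like the PS4's.

def transfer_ms(megabytes, gb_per_s):
    """Milliseconds to move 'megabytes' at the given bandwidth in GB/s."""
    return megabytes / 1024.0 / gb_per_s * 1000.0

payload_mb = 256.0  # hypothetical data set bounced every frame
pcie = transfer_ms(payload_mb, 16.0)      # ~15.6 ms - half a 30 FPS budget
unified = transfer_ms(payload_mb, 176.0)  # ~1.4 ms in shared memory
print(f"PCIe: {pcie:.2f} ms, unified: {unified:.2f} ms")
```

At those assumed numbers, the PCIe copy alone eats roughly half of a 33 ms frame budget, which is why a console game built around shared memory could choke when ported naively.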

    Another factor besides bandwidth is capacity.  The Xbox One and PS4 both have 8 GB shared by the CPU and GPU.  You could conceivably have the GPU by itself use the majority of that--meaning, more than 4 GB.  Running out of memory can bring things to a crawl in a hurry.  But there are plenty of discrete GPUs with 8 GB just on the GPU.  How many games show a huge benefit to having 8 GB of video memory rather than 4 GB?  Not many.

    Now, taking a random PC game and trying to run that code on a console could get you into trouble.  If you assume that you've got a powerful CPU core somewhere and try to run that on eight weak cores, it can choke.  If you assume that you have equal access to your full global memory and try to run that on an Xbox One, you choke because most of the bandwidth is to a 32 MB ESRAM cache, not the full DDR3.

    There can sometimes be gains to be had by restricting your hardware in ways that make it simpler and cheaper and then coding around those restrictions.  I don't think the ESRAM did that; to the contrary, that's something Microsoft screwed up.  The eight weak CPU cores probably are a way that the consoles did this, as it does give you better CPU energy efficiency and die size efficiency than needing more powerful cores clocked higher.

    But specialized restrictions on hardware only gain you so much.  Doing a complete redesign to make that specialized hardware costs a ton of money, so you only do it if the gains are large enough to justify it.  That's why both the Xbox One and PS4 simply grabbed off the shelf CPU and GPU architectures that AMD had already made.  The GPU architecture is pretty much identical to what AMD sells for gaming desktops.  The CPU architecture was already done, but merely not appropriate to a gaming desktop.
  • Ridelynn Member EpicPosts: 7,142
    I wonder how much that has changed with the current generation of consoles, seeing as how the previous generation was mostly PPC architecture - with the Sony Cell CPU being pretty radically different from anything else available, even now.
  • Quizzical Member LegendaryPosts: 22,626
    Ridelynn said:
    I wonder how much that has changed with the current generation of consoles, seeing as how the previous generation was mostly PPC architecture - with the Sony Cell CPU being pretty radically different from anything else available, even now.
    Probably quite a bit.  GPU architectures from different vendors are considerably more similar to each other than they used to be.  And the CPUs in the PS3 and Xbox 360 weren't much like each other, nor like the x86 CPUs you'd get for desktops.  The video cards in the PS3 and Xbox 360 also predated the unified shaders era, which radically changed how video cards worked.  That and programmable shaders are arguably the two biggest changes to video cards since the move from 2D to 3D.
  • Ridelynn Member EpicPosts: 7,142
    edited August 2016
    So looking at the current generation of hardware, we have memory which is quite a bit different from a PC, and the underlying operating system (even though the XBone now runs Win10, I'm sure it's not the same Win10 you have on your PC).

    RAM usually has a minor impact on performance, but that's only ~usually~. The jump in performance from SDR through to DDR4 hasn't been that big, looking at it step by step with respect to gaming performance. Video RAM has a pretty major impact on GPU performance. It's all unified in the current generation of consoles, and more or less on par with discrete PC GPUs. 

    Another thing to consider: modern consoles do have 8 GB of unified memory. The previous generation only had 512 MB (the 360 had unified RAM plus 10 MB of eDRAM; the PS3 had a 256/256 MB split). So the previous generation could do quite a bit with a lot less RAM, although towards the end of their 10-year lifespan they were definitely running out of room.

    And yes, Windows on the PC has a good deal of "bloat", but really, unless you have an OEM PC that's loaded with every single little widget they install, a clean default installation of Win7/8/10 is not that bloated. I'm sure there is some measurable performance penalty going through APIs and drivers, whereas on a console the two can be combined, but I doubt it's a huge delta. And DX12/Vulkan claim to cut right through that in the first place, on the PC at least.
  • Malabooga Member UncommonPosts: 2,977
    You are talking about DX12/Vulkan. Of course, now that developers have tools that let them optimize on a console level on the desktop, the differences should be smaller IF they use those tools - but the games tested are still DX11 games, a high-level API.

    There were quite a few comparisons, and consoles run anywhere from 0-50% faster; you can pretty much approximate that with a bell curve.
  • Malabooga Member UncommonPosts: 2,977
    edited August 2016
    At maximum, the same level of performance gain as the consoles get. Not every game is equal. For instance, if there was a CPU bottleneck on PC due to the single-threaded nature of DX11, that would be gone (consoles have 8 small/weak CPU cores and do just fine with them because they can use all of them). Some games didn't have that problem, so in those cases consoles weren't doing any better.

    On the GPU side, there's now Async Compute, from which developers on consoles see up to a 30% performance increase, and I'd guess on PC you would usually see anywhere from 5-25% depending on how much it's used.
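
As a rough model of where those async compute percentages could come from: compute work that can overlap with graphics effectively runs in idle GPU bubbles. The frame times and overlap fraction below are invented for illustration, not taken from any real title.

```python
# Rough model only: without async compute, a frame's graphics and compute
# passes run back to back; with it, the overlappable portion of the
# compute work is absorbed into idle gaps in the graphics workload.

def frame_ms(gfx_ms, compute_ms, overlap_fraction):
    """Frame time when some fraction of compute overlaps with graphics."""
    hidden = compute_ms * overlap_fraction  # absorbed into idle gaps
    return gfx_ms + (compute_ms - hidden)

gfx, compute = 25.0, 8.0
before = frame_ms(gfx, compute, 0.0)  # 33.0 ms, fully serialized
after = frame_ms(gfx, compute, 0.8)   # 26.6 ms with 80% overlap
print(f"gain: {(before / after - 1) * 100:.0f}%")  # ~24%, inside the 5-25% range
```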

    Then there are shader intrinsic functions, which give developers complete control over the GPU and its resources.

    So generally, the gains from reduced CPU overhead will depend on which CPU you have. If you already have the fastest CPU, you will see less improvement than someone with a slower CPU. Same for the number of cores: CPUs with more cores will fare better. On the GPU side, 20-30% performance gains.

    This is a Mantle/DX11 comparison (the Mantle idea was partly implemented in DX12, and Vulkan is pretty much Mantle, as AMD donated Mantle to the Khronos Group and they used it as a base to build upon) showing how much a CPU-bound game can benefit from a low-level API (and this is just a wrapper, not native Mantle).

    Also, one thing to notice: even when GPU bound (so not much difference between Mantle and DX11 in average FPS), there's a big improvement in minimum FPS over DX11.
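
A toy frame-time trace shows why minimum FPS can improve a lot even when average FPS barely moves: a few long frames hardly shift the average but dominate the minimum. The numbers below are made up, not taken from the Mantle comparison itself.

```python
# Toy frame-time trace: smoothing out a couple of overhead spikes (as a
# low-overhead API can) barely moves average FPS but lifts minimum FPS.

def avg_fps(frame_times_ms):
    return 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

def min_fps(frame_times_ms):
    return 1000.0 / max(frame_times_ms)

spiky = [16.0] * 58 + [50.0, 60.0]  # mostly fast, two driver-overhead spikes
smooth = [17.0] * 60                # slightly slower per frame, even pacing

print(round(avg_fps(spiky)), round(min_fps(spiky)))    # 58 17
print(round(avg_fps(smooth)), round(min_fps(smooth)))  # 59 59
```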

