Will the PlayStation 4 and Xbox 720 push game programmers to finally thread their games properly?

Quizzical Member LegendaryPosts: 25,351

Rumors have it that both the PlayStation 4 and the Xbox 720, which should launch around the end of this year, will use eight AMD Jaguar cores.  By modern standards, that means rather poor single-threaded performance.  But eight cores means that in well-threaded programs, you get CPU performance somewhat comparable to an FX-4300 or two cores of a Core i7-3770K.  That's not a top-end gaming system by any stretch, but it's decently capable, and console budgets really don't allow top-end gaming systems.

There are sound reasons why Jaguar cores should be attractive to both Sony and Microsoft.  For starters, they're very small, so even eight of them don't take all that much die space.  They're also very low power, so eight Jaguar cores at full load might still only be 15-20 W of power consumption.  The comparables above, an FX-4300 or half of a Core i7-3770K, will use vastly more power than that to give the same performance.  Low power matters when you're trying to fit a console form factor.  They're also made by AMD, which is one of the two vendors that can offer modern, high-performance graphics (with Nvidia being the other).  That matters if you want to integrate a CPU and GPU into a single die, which saves greatly on cost.

Less well known is that Jaguar cores are designed to be relatively easy to move to a different process node.  Most CPU designs never move exactly the same chip to a different process node: if you're going to have to redo the chip anyway, you probably should make some changes to try to increase performance.  But consoles do need to move to new process nodes multiple times to save on production cost, and they have a fixed performance target, with no benefit to adding more performance when you do a die shrink.

But then comes the huge catch:  poor single-threaded performance.  The general rule is that more cores clocked lower wins if your workload scales well to many cores, but fewer cores clocked higher wins if you can't use the extra cores.  Games aren't that hard to scale well to use many CPU cores.  But a lot of games just don't do it, for a variety of reasons.  Will the necessity of scaling well to more cores in order to run well on the new consoles finally push more game designers to implement threading properly?  Some games already do, but some don't.

A skeptic might argue that we've gone down this road before with the Cell in the PlayStation 3, and that didn't work out very well.  But from the GFLOPS numbers claimed for the Cell chip and its claimed applications, it sure looks to me like that's only high performance in special cases, such as SIMD.  That's not general-purpose enough to allow games to fully exploit the power available.  While games can scale well to many CPU cores, they do need for different CPU cores to be able to do whatever they want without any dependence on what the other CPU cores are doing at the time, and without a ton of latency if you have to do things out of order.  I don't know exactly what Cell can do, but if it could do everything needed for games, we'd have been using them in desktops and laptops a long time ago.


Comments

  • Nadia Member UncommonPosts: 11,798
    Originally posted by Quizzical
    I don't know exactly what Cell can do, but if it could do everything needed for games, we'd have been using them in desktops and laptops a long time ago.

    I know the Cell was useful for making cheap supercomputers - but no idea about gaming

     

    How the PS3 Helped Build the World's Fastest Supercomputer

    http://www.popularmechanics.com/technology/engineering/4267979

    US Air Force connects 1,760 PlayStation 3's to build supercomputer

    http://phys.org/news/2010-12-air-playstation-3s-supercomputer.html

     

  • Ridelynn Member EpicPosts: 7,383

    The 360 Xenon was already a triple core, and the PS3 Cell CPU is technically a 9-core CPU.

    So no, it won't do anything for multithreading if the next gen has 8 cores.

  • ShakyMo Member CommonPosts: 7,207
    Well they ain't going to push graphics much.

    The Xbox has an entry-level gaming card, a 7770. The PS4 has something like a 7850 / 7870, so the PS4 should at least be putting all its games out in 1080p, but it's still behind current highish-end PCs.

    The Xbox specs are really weak, not much different to the Wii U. I expect Microsoft to be going for a more casual market and doing a load of stuff with Kinect. I think we will see quite a few games that are PS4 / PC only.
  • BadSpock Member UncommonPosts: 7,979

    Nothing is official yet Mo.

    Xbox next has most recently been rumored to have a "custom D3D11.1 gfx card running at 800 mhz."

    And we all know that optimization is a huge factor for the console as the engine/code can be more finely tuned to the specifics of the hardware, instead of having to be scalable to a wide range of hardware like the PC version.

    Latest rumor:

    • x64 Architecture
    • 8 CPU cores running at 1.6 gigahertz (GHz)
    • each CPU thread has its own 32 KB L1 instruction cache and 32 KB L1 data cache
    • each module of four CPU cores has a 2 MB L2 cache resulting in a total of 4 MB of L2 cache
    • each core has one fully independent hardware thread with no shared execution resources
    • each hardware thread can issue two instructions per clock
    GPU:
    • custom D3D11.1 class 800-MHz graphics processor
    • 12 shader cores providing a total of 768 threads
    • each thread can perform one scalar multiplication and addition operation (MADD) per clock cycle
    • at peak performance, the GPU can effectively issue 1.2 trillion floating-point operations per second
    • High-fidelity Natural User Interface (NUI) sensor is always present

    Storage and memory:
    • 8 gigabytes (GB) of DDR3 RAM (68 GB/s)
    • 32 MB of fast embedded SRAM (ESRAM) (102 GB/s)
    • from the GPU’s perspective the bandwidths of system memory and ESRAM are parallel providing combined peak bandwidth of 170 GB/sec.
    • Hard drive is always present
    • 50 GB 6x Blu-ray Disc drive
    Networking:
    • Gigabit Ethernet, Wi-Fi
    • Wi-Fi Direct


    Hardware accelerators:
     

    • Move engines
    • Image, video, and audio codecs
    • Kinect multichannel echo cancellation (MEC) hardware
    • Cryptography engines for encryption and decryption, and hashing

     

  • Quizzical Member LegendaryPosts: 25,351
    Originally posted by Ridelynn

    The 360 Xenon was already a triple core, and the PS3 Cell CPU is technically a 9-core CPU.

    So no, it won't do anything for multithreading if the next gen has 8 cores.

    There's a big difference between three cores and eight, especially when it's eight slow cores.

    I don't know the details of the Cell architecture, but I have the impression that it's much harder to properly thread code for than a normal multi-core CPU as you'd find with x86 or ARM or whatever.

    For an example of why this could be, let's consider GPU chips.  A Radeon HD 7970 has 2048 shaders, but they can't do 2048 completely independent things.  Rather, the architecture is very SIMD-heavy, as are all other recent graphics architectures.  Shaders are broken up into groups, and every shader in the same group can execute the same instruction at the same time, but they can't execute different instructions at the same time.  For example, they could all do a 32-bit floating point addition at the same time, but you can't have one add while another multiplies, or even have one do floating point addition while another does integer addition.
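
    Here's a minimal sketch of what that restriction means in practice (purely illustrative C++, with an invented 8-lane group; real GPUs differ in the details): when lanes in the same group want to take different sides of an if/else, the hardware effectively runs both sides for everyone and uses a mask to keep the right results, so a divergent branch costs roughly the sum of both paths.

        // Toy model of one SIMD group ("wavefront"/"warp") of 8 lanes executing in
        // lockstep.  Every lane steps through every instruction; a per-lane mask
        // decides whose results are kept.  Illustration only, not real GPU code.
        #include <array>
        #include <cstdio>

        int main() {
            constexpr int LANES = 8;
            std::array<float, LANES> x = {1, -2, 3, -4, 5, -6, 7, -8};
            std::array<float, LANES> out{};

            // The "shader" source is conceptually:  if (x > 0) out = x * 2; else out = -x;
            std::array<bool, LANES> mask{};
            for (int i = 0; i < LANES; ++i) mask[i] = (x[i] > 0);

            // "If" side: executed by ALL lanes, results kept only where the mask is true.
            for (int i = 0; i < LANES; ++i) { float r = x[i] * 2.0f; if (mask[i]) out[i] = r; }

            // "Else" side: also executed by ALL lanes, kept only where the mask is false.
            for (int i = 0; i < LANES; ++i) { float r = -x[i]; if (!mask[i]) out[i] = r; }

            for (int i = 0; i < LANES; ++i) std::printf("%g ", out[i]);  // 2 2 6 4 10 6 14 8
            std::printf("\n");
        }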

    For video cards, this works fine.  The way that OpenGL (and probably DirectX) is set up is that you write a program that takes in arbitrary vertex data and processes it in a particular way.  Every vertex in your model runs through exactly the same program, but merely starts with different data.  So you take your starting values, multiply by this, add that, and so forth.  Later programmable pipeline stages take in data as output from a previous stage, but they still have a bunch of data that they run through exactly the same program.  When your models have a bunch of vertices, a bunch of triangles, a bunch of pixels, and so forth, and your program uses little to no branching (if/else), you can exploit this pretty well.
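
    As a concrete picture of "same program, different starting data," here's a tiny C++ sketch of what a vertex-shader-like stage amounts to (my own illustration; a real shader would be written in GLSL or HLSL and run on the GPU): one short function applied uniformly to every vertex, with every iteration independent of the others.

        // One "vertex shader": the same short function applied to every vertex,
        // differing only in the data it starts with.  Illustration only.
        #include <cstdio>
        #include <vector>

        struct Vertex { float x, y, z; };

        Vertex shadeVertex(const Vertex& in, float scale, float offsetX) {
            // "multiply by this, add that" -- the same instructions for every vertex
            return { in.x * scale + offsetX, in.y * scale, in.z * scale };
        }

        int main() {
            std::vector<Vertex> model = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {1, 1, 1} };
            std::vector<Vertex> out;
            out.reserve(model.size());

            // The GPU effectively runs this loop with thousands of vertices in flight
            // at once, which works precisely because every iteration is independent.
            for (const Vertex& v : model)
                out.push_back(shadeVertex(v, 2.0f, 0.5f));

            for (const Vertex& v : out)
                std::printf("(%g, %g, %g)\n", v.x, v.y, v.z);
        }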

    But to try to cram CPU computations for games into something so restrictive would be much harder.  You're probably going to need pretty extensive branching, so you don't really get that many easy cases where you have a long string of instructions that get executed in the same order a bunch of times in a row.  GLSL doesn't even offer recursion, switch statements, or multidimensional arrays, because the situations where they would be useful for CPU code will completely kill your performance if you try to run them on a GPU.

    The easy way to thread CPU computations for games on a typical desktop processor is with a producer-consumer queue, where different threads may execute much of the same code, but due to branching, different cores will tend to be in different places in the code.  For a typical desktop processor, you don't lose any performance if different cores need to execute different instructions at the same time.  Can Cell do that?  If not, then it's not at all similar to an 8-core processor that is easy to thread for games.  And if it isn't crippled in any way as compared to "normal" multi-core processors, then I'd question why it didn't get much broader use.
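
    To make the producer-consumer idea concrete, here's a minimal sketch of such a queue in C++11 (the names and structure are my own, not from any particular engine): one thread pushes work items, worker threads pop whatever comes next, and no worker cares what instructions the other workers happen to be executing at the time.

        // Minimal producer-consumer work queue: the producer pushes tasks, workers
        // pop and run them independently of one another.  Sketch only.
        #include <algorithm>
        #include <condition_variable>
        #include <cstdio>
        #include <functional>
        #include <mutex>
        #include <queue>
        #include <thread>
        #include <vector>

        class WorkQueue {
            std::queue<std::function<void()>> tasks;
            std::mutex m;
            std::condition_variable cv;
            bool done = false;
        public:
            void push(std::function<void()> t) {
                { std::lock_guard<std::mutex> lock(m); tasks.push(std::move(t)); }
                cv.notify_one();
            }
            void shutdown() {
                { std::lock_guard<std::mutex> lock(m); done = true; }
                cv.notify_all();
            }
            void workerLoop() {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock(m);
                        cv.wait(lock, [this] { return done || !tasks.empty(); });
                        if (tasks.empty()) return;           // shut down and drained
                        task = std::move(tasks.front());
                        tasks.pop();
                    }
                    task();  // different workers may be in completely different code here
                }
            }
        };

        int main() {
            WorkQueue q;
            std::vector<std::thread> workers;
            unsigned n = std::max(1u, std::thread::hardware_concurrency());
            for (unsigned i = 0; i < n; ++i)
                workers.emplace_back([&q] { q.workerLoop(); });

            for (int i = 0; i < 100; ++i)   // the "producer": e.g. one task per game object
                q.push([i] { std::printf("updated object %d\n", i); });

            q.shutdown();
            for (auto& w : workers) w.join();
        }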

  • Quizzical Member LegendaryPosts: 25,351
    Originally posted by ShakyMo
    Well they ain't going to push graphics much.

    The Xbox has an entry-level gaming card, a 7770. The PS4 has something like a 7850 / 7870, so the PS4 should at least be putting all its games out in 1080p, but it's still behind current highish-end PCs.

    The Xbox specs are really weak, not much different to the Wii U. I expect Microsoft to be going for a more casual market and doing a load of stuff with Kinect. I think we will see quite a few games that are PS4 / PC only.

    Rumors put the Xbox 720 specs way ahead of the Wii U.  On the CPU side, it's eight cores versus three, and the three cores are awfully slow, too.  I don't know how many shaders the GPU in the Wii U has, but the Xbox 720 should have several times the memory bandwidth of the Wii U.  That would certainly let it feed a much more powerful GPU.  While I don't know what GPU the Wii U has, my guess is that it's a lot closer to a Radeon HD 6450 than a 7770.

  • Quizzical Member LegendaryPosts: 25,351
    Originally posted by BadSpock

    Latest rumor:

    • x64 Architecture
    • 8 CPU cores running at 1.6 gigahertz (GHz)
    • each CPU thread has its own 32 KB L1 instruction cache and 32 KB L1 data cache
    • each module of four CPU cores has a 2 MB L2 cache resulting in a total of 4 MB of L2 cache
    • each core has one fully independent hardware thread with no shared execution resources
    • each hardware thread can issue two instructions per clock
    GPU:
    • custom D3D11.1 class 800-MHz graphics processor
    • 12 shader cores providing a total of 768 threads
    • each thread can perform one scalar multiplication and addition operation (MADD) per clock cycle
    • at peak performance, the GPU can effectively issue 1.2 trillion floating-point operations per second

    Or you could just say 8 Jaguar cores at 1.6 GHz and 12 GCN CUs at 800 MHz, which is what those specs would basically have to mean.  Well, the GPU could be some next generation derivative architecture of GCN.

    AMD has said that by the end of this year, they expect their embedded semi-custom APUs to account for 20% of their revenue--which would come to over $1 billion annually.  It's hard to imagine what that could be other than consoles.  Nothing besides AMD Jaguar cores fits the CPU description, and nothing besides AMD GCN fits the GPU description.

  • BadSpock Member UncommonPosts: 7,979
    Originally posted by Quizzical
    Originally posted by BadSpock

    Latest rumor:

    • x64 Architecture
    • 8 CPU cores running at 1.6 gigahertz (GHz)
    • each CPU thread has its own 32 KB L1 instruction cache and 32 KB L1 data cache
    • each module of four CPU cores has a 2 MB L2 cache resulting in a total of 4 MB of L2 cache
    • each core has one fully independent hardware thread with no shared execution resources
    • each hardware thread can issue two instructions per clock
    GPU:
    • custom D3D11.1 class 800-MHz graphics processor
    • 12 shader cores providing a total of 768 threads
    • each thread can perform one scalar multiplication and addition operation (MADD) per clock cycle
    • at peak performance, the GPU can effectively issue 1.2 trillion floating-point operations per second

    Or you could just say 8 Jaguar cores at 1.6 GHz and 12 GCN CUs at 800 MHz, which is what those specs would basically have to mean.  Well, the GPU could be some next generation derivative architecture of GCN.

    AMD has said that by the end of this year, they expect their embedded semi-custom APUs to account for 20% of their revenue--which would come to over $1 billion annually.  It's hard to imagine what that could be other than consoles.  Nothing besides AMD Jaguar cores fits the CPU description, and nothing besides AMD GCN fits the GPU description.

    I am so thankful for you Quiz :)

    So based on said rumor and description, what kind of power can we expect?

    Compare the "8 Jaguar cores at 1.6 GHz and 12 GCN CUs at 800 MHz" to a desktop equivalent for us?

  • DeniZg Member UncommonPosts: 697
    I wonder how MS plans to implement backwards compatibility, with such a discrepancy between the old and new CPU architectures (old 3 x 3.2 GHz cores vs. new 8 x 1.6 GHz cores)?
  • Quizzical Member LegendaryPosts: 25,351

    An AMD FX-8350 severely underclocked to 1.6 GHz together with a Radeon HD 7770 at stock speeds would be in the right ballpark.

    The 7770 has 10 GCN CUs at 1 GHz, which means about 4% more GPU performance (at least as far as shaders and TMUs are concerned) than the rumored Xbox 720, or whatever Microsoft will call it.  The memory system will be very different, and the ROPs could also be rather different, since those aren't tied to how many shaders you have.
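
    To show where those numbers come from (assuming the usual GCN figures of 64 shaders per CU and 2 floating-point operations per shader per clock), the arithmetic is simple:

        // Quick arithmetic behind the "about 4% more" figure and the rumored
        // 1.2 TFLOPS number, assuming 64 shaders per GCN CU and 2 ops/clock (FMA).
        #include <cstdio>

        int main() {
            const double hd7770  = 10 * 64 * 2 * 1.0e9;  // 10 CUs at 1.0 GHz
            const double durango = 12 * 64 * 2 * 0.8e9;  // rumored 12 CUs at 800 MHz

            std::printf("HD 7770: %.2f TFLOPS, rumored Xbox GPU: %.2f TFLOPS\n",
                        hd7770 / 1e12, durango / 1e12);
            std::printf("HD 7770 shader throughput is about %.0f%% higher\n",
                        (hd7770 / durango - 1.0) * 100.0);
        }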

    As for the CPU, you're looking at eight slow cores.  Jaguar cores are actually the successor to Bobcat cores in AMD Z-, C-, and E- series APUs, but will be substantially faster on a per clock cycle basis.  There aren't any chips with more than two Bobcat cores, which is why I had to compare it to Piledriver cores that are meant to clock much higher.  The reason to clock the CPU cores around 1.6 GHz is that that's about what they can handle with good yields.

    It sounds like the initial top bin of Kabini (which will use Jaguar cores) will be a quad core clocked around 1.7 GHz.  Consoles have to take lower specs than you'd use on the top bin for a desktop or laptop part, as consoles can't have lower bins.  For a desktop or laptop chip, if you set your top bin to be something that only 1/3 of your chips can meet, then you can still sell most of the other chips as lower bins (clocked lower, some pieces disabled, etc.).  For a console, you have one bin, and any chips that can't meet the specs of that one bin get tossed in the garbage, so you make sure that most of your chips can meet that one bin.
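
    A toy calculation (every number here is invented, purely to illustrate the single-bin problem): with one bin, every die that misses the clock target is thrown away, so the cost per usable chip climbs quickly as the target rises.

        // Toy yield/binning math: cost per usable chip = wafer cost / (dies * pass rate).
        // All numbers are made up for illustration.
        #include <cstdio>

        int main() {
            const double waferCost    = 5000.0;  // hypothetical cost per wafer
            const double diesPerWafer = 400.0;   // hypothetical working dies per wafer

            struct Bin { double ghz, passRate; };           // invented pass-rate curve
            const Bin bins[] = { {1.6, 0.95}, {1.8, 0.75}, {2.0, 0.40} };

            for (const Bin& b : bins)
                std::printf("%.1f GHz target: %2.0f%% of dies pass, cost per usable chip = $%.2f\n",
                            b.ghz, b.passRate * 100.0, waferCost / (diesPerWafer * b.passRate));
        }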

  • Quizzical Member LegendaryPosts: 25,351
    Originally posted by DeniZg
    I wonder how MS plans to implement backwards compatibility, with such a discrepancy between the old and new CPU architectures (old 3 x 3.2 GHz cores vs. new 8 x 1.6 GHz cores)?

    Maybe they don't.  Even if they wanted to, it's not clear how even faster x86 cores would be able to handle games written for PowerPC.

  • charlespayne Member UncommonPosts: 381
    Thing is, it's just rumors, and Xbox and PlayStation might not even go for what everyone is thinking they'll use.
  • desdecardo Member UncommonPosts: 7
    Originally posted by DeniZg
    I wonder how MS plans to implement backwards compatibility, with such a discrepancy between the old and new CPU architectures (old 3 x 3.2 GHz cores vs. new 8 x 1.6 GHz cores)?

    I'm in the same boat with this.  I wasn't too pleased that backwards compatibility wasn't that great on the 360.  It really took the developers of the games to patch them for the 360.  With a lot of game developers going under, and a library of games on my shelf, I'm worried about this.

  • Loktofeit Member RarePosts: 14,247

    Code gets tighter and more efficient when the hardware presents more limitations. If the hardware does more work or presents fewer limitations, the code is not as optimized.

    Examples:

    On old versions of DOS, the executable (.COM file) needed to fit within 64k of space. As programs advanced, more concern was put into optimizing the code to keep the binary within that window. When memory allowed for larger executables (.EXE file), programs that did exactly the same work surpassed that size barrier.

    Compilers and Interpreters are a great example to use.

    Write a simple Hello World program in ASM and it's probably about 20 bytes.

    Write it in Borland C++ 3.1 and it's about 1k

    Write it in Turbo Pascal and it's about 4k.

    Move forward a few years and you have VB3, a language that could only exist with the faster 386-and-up processors and the 1.44 MB diskettes, because not only is it interpreted (measurably slower than a compiled program at the time) but it also requires a 300k library just to make the same Hello World program the others made. If floppies were still 360k or 720k and processors were still chugging along at 4-12 MHz, VB3 would either have not been as successful or it would have come out of the gate a far more optimized language.

    (sorry for the outdated references, but my coding days ended 15 years ago)

     

    Software will always be less optimized and more bloated when it has the opportunity to be. Most of the time, it only becomes more efficient when hardware limitations need to be overcome.

    There isn't a "right" or "wrong" way to play, if you want to use a screwdriver to put nails into wood, have at it, simply don't complain when the guy next to you with the hammer is doing it much better and easier. - Allein
    "Graphics are often supplied by Engines that (some) MMORPG's are built in" - Spuffyre

  • WalterWhite Member UncommonPosts: 411
    Originally posted by desdecardo
    Originally posted by DeniZg
    I wonder how MS plans to implement backwards compatibility, with such a discrepancy between the old and new CPU architectures (old 3 x 3.2 GHz cores vs. new 8 x 1.6 GHz cores)?

    I'm in the same boat with this.  I wasn't too pleased that backwards compatibility wasn't that great on the 360.  It really took the developers of the games to patch them for the 360.  With a lot of game developers going under, and a library of games on my shelf, I'm worried about this.

    I'm not sure if they will be looking at backwards compatibility, due to the fact they are introducing a system that will lock the game you bought and played on your new console to that console, thus not allowing the sale of pre-owned games.

    If they find a way around it, that would be amazing but also hard to do.

    Hopefully they see sense and scrap this idea of locking the games into the console as it will kill off console gaming imo.

     

  • sea.shell Member Posts: 63

    Think of all the old consoles with their own format.

    Nintendo: NES / SNES / Nintendo 64
    Sega: Sega Genesis (Megadrive) / Sega CD / Dreamcast
    Atari: Atari Jaguar


    Then there are those "mobile" devices.
    Sega X
    Gameboy
    ...
    ...
    ..


    I doubt that, if the new consoles are incompatible, it will end up being as much of an issue as people make out.
    I myself bought a lot of classic SNES games for the PlayStation - again - and there are always also the emulators for old console games on PC :)


    If incompatibility means we will finally, after 15 years of stagnation, see some little revolution like back then, going from 8-bit to 16/32-bit and then the jump to polygons and CDs -

    then we all benefit from it.

    Playing: EVE Online
    Wants to play: ArcheAge, Lineage Eternal: Twilight Resistance / Star Citizen / FFXIV AAR / Neverwinter

    Used to play for 5+ years: Lineage 2, Lord of the Rings Online and Ragnarok Online

    Utter disappointing MMO experience for 1 - 3 Months:
    WAR / AoC / SWTOR / RIFT / AION / STO / TSW / GW2 / GW / Vanguard / Planetside2

  • Phry Member LegendaryPosts: 11,004

    Must admit to being more than a little puzzled: why would they choose to base the 'next gen' consoles on processors that are only capable of running at 1.6 GHz? To me it just doesn't make sense. We're already seeing multicore processors for PCs running at 3.0+ GHz, and we're not even talking high end, but mid-range PCs. As for cores, 8 isn't really that awesome a number anymore; it's already been exceeded.

    It looks more like the 'power gap' between PCs and consoles just gets wider and wider, when even a mid-range PC is capable of far more than a console that hasn't even hit the streets yet.

  • Ridelynn Member EpicPosts: 7,383


    Originally posted by Quizzical
    Originally posted by DeniZg
    I wonder how MS plans to implement backwards compatibility, with such a discrepancy between the old and new CPU architectures (old 3 x 3.2 GHz cores vs. new 8 x 1.6 GHz cores)?
    Maybe they don't.  Even if they wanted to, it's not clear how even faster x86 cores would be able to handle games written for PowerPC.

    Not that it applies directly to video games, but this isn't unheard of.

    Rosetta

    When Apple switched from PPC to Intel processors a few years ago, they kept support for all their PowerPC-based binaries via an emulator called Rosetta that ran transparently in the background.

    Now, I don't know how well a Xenon or Cell CPU compares to, say, a G5/PowerPC 970 (aside from the fact that all 3 are PowerPC based), but for the most part the switch wasn't that bad. Intel had poorer FPU performance, and lacked some specialized instructions (AltiVec, mainly) that were on the PPC, but general code ran fairly well, usually within about a 25-50% margin, comparing a similarly clocked dual-core G5 to a dual-core Core Duo (not Core2). The wide margin was due to the differences in FPU/specialized instructions (which Apple had promoted widely during their PPC years, and put a lot of optimization into in their software). Note: this isn't comparing optimized x86 code to optimized PPC code, this is comparing PPC code running natively on a PPC to PPC code running emulated on an x86.
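
    As a very rough picture of where that emulation overhead comes from (this is a toy interpreter for a made-up instruction set, nothing like Apple's actual translator, which recompiled blocks of code): every guest instruction costs a fetch, a decode, and a dispatch on the host before any real work happens.

        // Toy interpreter for an invented 3-register guest ISA, illustrating the
        // per-instruction fetch/decode/dispatch overhead of emulation.
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        enum Op : uint8_t { LOADI, ADD, MUL, HALT };
        struct Instr { Op op; uint8_t dst, a, b; int32_t imm; };

        int32_t run(const std::vector<Instr>& program) {
            int32_t reg[3] = {0, 0, 0};
            size_t pc = 0;
            for (;;) {
                const Instr& in = program[pc++];              // fetch
                switch (in.op) {                              // decode + dispatch
                    case LOADI: reg[in.dst] = in.imm;                break;
                    case ADD:   reg[in.dst] = reg[in.a] + reg[in.b]; break;
                    case MUL:   reg[in.dst] = reg[in.a] * reg[in.b]; break;
                    case HALT:  return reg[0];
                }
            }
        }

        int main() {
            // Guest program: r0 = (2 + 3) * 7
            std::vector<Instr> prog = {
                {LOADI, 0, 0, 0, 2}, {LOADI, 1, 0, 0, 3}, {ADD, 0, 0, 1, 0},
                {LOADI, 1, 0, 0, 7}, {MUL, 0, 0, 1, 0},   {HALT, 0, 0, 0, 0},
            };
            std::printf("%d\n", run(prog));  // prints 35
        }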

    So, would that margin be "good enough" for games? For some, but not all. That, plus the fact that the bulk of game time is spent on the video card - and the video architecture isn't changing that much, just the CPU portion - means it wouldn't be outside the realm of possibility to see an emulator for the Gen 3 consoles, but I wouldn't count on it.

    I wouldn't be surprised if we see the more recent model that is leaned on by Nintendo and Sony (mainly for their legacy titles), where there is no or limited blanket backwards compatibility, but we see those titles re-released as compatible downloads (and yes, you'd have to pay for them all over again...), which are just the ROM wrapped up in an optimized emulator for that title.

  • Ridelynn Member EpicPosts: 7,383


    Originally posted by Phry
    must admit to being more than a little puzzled, why would they choose to base the 'next gen' consoles on processors that are only capable of running at 1.6ghz, to me it just doesnt make sense, we're already seeing multicore processors for PC's running at 3.0+ ghz  and we're not even talking high end, but the mid range PC's as for Cores, 8 isnt really that awesome a number anymore, its already been exceeded.

    It looks more like the 'power gap' between PC's and Consoles just gets wider and wider, when even a mid range PC is capable of far more than a console that hasnt even hit the streets yet


    Most consumer software, and video games in general, today are not CPU constrained. It doesn't take a faster CPU to deliver HD graphics, or more polygons, or higher resolution textures. That all leans on the GPU.

    Nintendo took a lot of flak because the CPU in the Wii U was pretty poor comparatively (3-core PPC 750 at 1.24 GHz) - and that's more than enough for games that are coming out today. The CPU doesn't run much code - it handles user input, (sometimes) physics, AI, networking... and ringleads the GPU computations, but it's not the typical bottleneck in a game, particularly in a console game.

    The PS3 is technically a 9-core chip (1 PPE with two hardware threads plus 8 SPEs); one SPE is disabled for yield, one is reserved for the OS, and the rest go to the game. We haven't seen anything come even remotely close to stressing the capability of that, besides FoldingAtHome and some specialized software written for the original PS3's with Linux installed (FBI, Air Force, etc).

    Now, I won't argue that having more CPU power is a bad thing: it's not - you can't leverage it if you don't have it in the first place, so maybe by having this horsepower available developers will come up with some novel uses for it (and, as Quiz was indicating in his OP, we may see this trickle back into PC gaming, where we've had many cores available for a long while now).

    I think that, as far as gaming goes, we won't. We've had multi-core CPUs available for a long time now, particularly in the PS3, and we still seem to be stuck in a low-core (rarely does a game reach past dual cores) DX9.0c world. Unless game engines start to really push multiple cores (and they may, if they have them widely available in the consoles) for items other than graphics - maybe we'll see new input that stresses general computing (higher-definition Kinect-style input to augment (not replace) traditional input devices, better speech recognition, etc), or maybe we'll get real AI (the type that adapts and learns, not just follows rote static strategies), who knows... we haven't had anything terribly riveting since physics hit it big (although MS would like to think Kinect is that revolutionary), but I don't see anything terribly exciting coming anytime soon.

    -----

    All of the above, plus
    These things are designed to sit in your living room, underneath your TV. Power draw is a big concern, both while in use and while on standby (Console Power Report), as is the noise the consoles make (Tips on how to quiet the cooling fan on a 360). That is why we are seeing design spec rumors with "only" 1.6 GHz CPUs and 7770-level graphics. Sure, you could fit more powerful hardware in there, but if you had a box the size of an ATX mid-tower, all the console folks would say "WTF, how can I fit that under my TV", or "It's nice and small, but sounds like a garbage disposal ate a badger because the fans are so loud", and Greenpeace would complain about how many coal power plants it takes to keep all those millions of units running for extended CoD/BF4 fights.

  • JayFiveAlive Member UncommonPosts: 601
    Originally posted by Phry

    must admit to being more than a little puzzled, why would they choose to base the 'next gen' consoles on processors that are only capable of running at 1.6ghz, to me it just doesnt make sense, we're already seeing multicore processors for PC's running at 3.0+ ghz  and we're not even talking high end, but the mid range PC's as for Cores, 8 isnt really that awesome a number anymore, its already been exceeded.

    It looks more like the 'power gap' between PC's and Consoles just gets wider and wider, when even a mid range PC is capable of far more than a console that hasnt even hit the streets yet

    It's not a fair comparison though. Consoles run their GUI and a game with usually slightly modified hardware to suck extra performance out of it. PCs are running a ton of things all the time, have drivers that need to talk to other things, etc etc... I'd be surprised if the true power of even a 3-4 year old graphics card has ever been reached/used in the PC consumer market. Consoles tend to use more of their hardware's potential. Games can look great after like 5-6-7-8 years on a console. Imagine getting that out of a PC and still playing a very good quality without being choppy/slowdowns or needing to reduce graphics quality.

     

    I know my answer isn't necessarily perfect, but it's the general idea anyway haha.

  • BadSpock Member UncommonPosts: 7,979

    Well yeah, I don't think anyone was saying or could logically say that you can make an apples-to-apples comparison between PC and console hardware based solely on the specs/speed.

    I have heard on the rumor mills that the SDK for the new Xbox is the most amazing, fluid, easy to work with, and powerful SDK *unnamed large company* has ever worked with.

    Which translates roughly to better optimization and "cleaner" code as well as ease of development and integration for more advanced features.

    The rumors state/know that the next Xbox at least will have a full DirectX 11.1 capable custom AMD video card, it's all 64 bit etc.

    I'd say it's a pretty good bet to guess that the next generation of games will be all 11.1 with all the bells and whistles that brings (like tessellation), and that the games are going to be taking full advantage of the 8 cores (probably 7, because 1 is for the OS like on the PS3) and all the multi-threading etc.

    Just because the PS3 didn't, and because the modern PC doesn't - it doesn't mean the next gen won't.

    With Intel and AMD both pushing the # of cores higher and even rumors of Intel bringing out 8-15 core CPU's and such it's not hard to imagine games and game developers are going to start pushing performance to multi-core systems.

    In fact I bet the Xbox Next/720/Infinity whatever SDK is designed for it specifically.

  • Quizzical Member LegendaryPosts: 25,351
    Originally posted by Ridelynn

     


    Originally posted by Phry
    must admit to being more than a little puzzled, why would they choose to base the 'next gen' consoles on processors that are only capable of running at 1.6ghz, to me it just doesnt make sense, we're already seeing multicore processors for PC's running at 3.0+ ghz  and we're not even talking high end, but the mid range PC's as for Cores, 8 isnt really that awesome a number anymore, its already been exceeded.

     

    It looks more like the 'power gap' between PC's and Consoles just gets wider and wider, when even a mid range PC is capable of far more than a console that hasnt even hit the streets yet


     

    Most consumer software, and video games in general, today are not CPU constrained. It doesn't take a faster CPU to deliver HD graphics, or more polygons, or higher resolution textures. That all leans on the GPU.

    Nintendo took a lot of flak because the CPU in the Wii U was pretty poor comparatively (3-core PPC 750 at 1.24 GHz) - and that's more than enough for games that are coming out today. The CPU doesn't run much code - it handles user input, (sometimes) physics, AI, networking... and ringleads the GPU computations, but it's not the typical bottleneck in a game, particularly in a console game.

    The PS3 is technically a 9-core chip (1 PPE with two hardware threads plus 8 SPEs); one SPE is disabled for yield, one is reserved for the OS, and the rest go to the game. We haven't seen anything come even remotely close to stressing the capability of that, besides FoldingAtHome and some specialized software written for the original PS3's with Linux installed (FBI, Air Force, etc).

    Now, I won't argue that having more CPU power is a bad thing: it's not - you can't leverage it if you don't have it in the first place, so maybe by having this horsepower available developers will come up with some novel uses for it (and, as Quiz was indicating in his OP, we may see this trickle back into PC gaming, where we've had many cores available for a long while now).

    I think that, as far as gaming goes, we won't. We've had multi-core CPUs available for a long time now, particularly in the PS3, and we still seem to be stuck in a low-core (rarely does a game reach past dual cores) DX9.0c world. Unless game engines start to really push multiple cores (and they may, if they have them widely available in the consoles) for items other than graphics - maybe we'll see new input that stresses general computing (higher-definition Kinect-style input to augment (not replace) traditional input devices, better speech recognition, etc), or maybe we'll get real AI (the type that adapts and learns, not just follows rote static strategies), who knows... we haven't had anything terribly riveting since physics hit it big (although MS would like to think Kinect is that revolutionary), but I don't see anything terribly exciting coming anytime soon.

    -----

    All of the above, plus
    These things are designed to sit in your living room, underneath your TV. Power draw is a big concern, both while in use and while on standby (Console Power Report), as is the noise the consoles make (Tips on how to quiet the cooling fan on a 360). That is why we are seeing design spec rumors with "only" 1.6 GHz CPUs and 7770-level graphics. Sure, you could fit more powerful hardware in there, but if you had a box the size of an ATX mid-tower, all the console folks would say "WTF, how can I fit that under my TV", or "It's nice and small, but sounds like a garbage disposal ate a badger because the fans are so loud", and Greenpeace would complain about how many coal power plants it takes to keep all those millions of units running for extended CoD/BF4 fights.

    How much CPU power a game needs varies wildly from game to game.  GPUs are built to do certain things well, which loosely amounts to having a handful of very short functions (~10 lines of code, not counting declaring variables) run an enormous number of times with only the inputs to the functions changing and relatively little communication with the CPU.  (If you need the CPU to send your video card 1 byte of data per 100 floating-point operations that the GPU has to execute, you'll likely be bottlenecked by the PCI Express bus, not the GPU.)  Code that fits those constraints can run well on the GPU.  Anything that doesn't has to run on the CPU.
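
    Putting rough numbers on that parenthetical (the figures are approximate and only for illustration: roughly 3.8 TFLOPS peak for a Radeon HD 7970 and roughly 8 GB/s for PCI Express 2.0 x16):

        // Back-of-the-envelope check of the PCI Express bottleneck claim.
        // Both hardware figures are approximate and only meant for illustration.
        #include <cstdio>

        int main() {
            const double gpuFlops   = 3.8e12;       // ~peak single-precision ops/second, HD 7970
            const double busBytes   = 8.0e9;        // ~PCIe 2.0 x16 bandwidth, bytes/second
            const double bytesPerOp = 1.0 / 100.0;  // 1 byte of CPU data per 100 GPU operations

            const double needed   = gpuFlops * bytesPerOp;  // ~38 GB/s required
            const double fraction = busBytes / needed;      // ~0.21 of peak achievable

            std::printf("bandwidth needed to keep the GPU fed: %.0f GB/s\n", needed / 1e9);
            std::printf("the bus limits the GPU to about %.0f%% of its peak\n", fraction * 100.0);
        }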

    And what can't be run on the GPU depends to some degree on what graphics API you're using.  If you want to do particle effects in OpenGL 4 or DirectX 11, for example, you can tell the video card that you want 2000 particles and it can generate them from some token amount of input data.  Or it could be 2000 particles one frame, 3000 the next, 1500 the frame after that, or whatever, and the GPU can just do it with minimal involvement from the CPU.  If you want to do that in OpenGL 3.2 (likely also DirectX 10, but I'm not certain), you can kind of do it with geometry shaders, but it's a lot more restrictive.

    If you want to do it with an older API such as DirectX 9.0c, every single primitive must be sent to the video card by the processor.  The reason a DirectX 9.0c game that wants to have rainfall doesn't show thousands of raindrops on the screen at a time is that it basically can't without carrying a huge performance hit.  Instead, you get 10' long rain spears that look like they should one-shot you if they hit you.  With newer APIs, having an enormous number of raindrops is pretty easy to do.

    But there are still a lot of things that simply can't run well on a GPU.  You can do minor branching and looping on a GPU, but if you need to do more complex branching or very long loops, that has to use the CPU.  (Well, you may sometimes be able to technically do it on the GPU, but it will kill your performance.)  If you need to have access to change a lot of data all at once, that needs to be done on the CPU.  If you need to be able to see a lot of data at once, then unless it fits neatly into a simple lookup table (or a few of them) so that you can use textures, it has to be done on the CPU.  It's not at all obvious just from playing a game exactly how the internal code works.

    -----

    My focus in making this thread wasn't hoping that games would be able to figure out how to use more CPU power.  Rather, it was hoping that games that do happen to need a lot of CPU power would thread their code to be able to use many CPU cores.  When Guild Wars 2 players with an FX-6100 are complaining of a CPU bottleneck and the game doesn't scale past three cores, ArenaNet is doing something wrong.

    One decent test of how well a game is threaded is to say, let's suppose that you had a hypothetical CPU with 64 Jaguar cores (or Piledriver cores or Ivy Bridge cores or ARM Cortex A15 cores or whatever).  How low would you be able to clock those cores and still get a steady 60 frames per second?  Ignore that a given CPU architecture can't go below some fixed clock speed and still function, and ignore the engineering problems of feeding memory to that many CPU cores.

    If your game would run great on a 1 GHz CPU with lots of CPU cores, then you're in good shape: either your threading model is very good, or the game is so light on the CPU that even purely single-threaded code is fast enough.  If you would need 2 GHz, that's not that bad, but it would be nice if it were better.  If you need 3 GHz, that's downright mediocre.  In the PC market, if you need 3 GHz, you can make your system requirements higher and implicitly blame customers for having slow computers.  If the Xbox 720 and PlayStation 4 only offer you 1.6 GHz, you'd better thread your game better or no one will be able to run it well.
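
    One way to put numbers on that thought experiment is Amdahl's law (my framing, not anything official about the consoles): if a fraction p of the per-frame CPU work scales across n cores, the clock speed you need for 60 frames per second is roughly 60 * W * ((1 - p) + p / n), where W is the cycles of work per frame.  The serial fraction is what keeps the required clock from falling as you add cores.

        // Amdahl's-law sketch of the "how low could you clock it" test.
        // W (cycles of CPU work per frame) is an invented number for illustration.
        #include <cstdio>

        int main() {
            const double W   = 5.0e7;  // hypothetical: 50 million cycles of work per frame
            const double fps = 60.0;

            for (double p : {0.5, 0.9, 0.99}) {            // fraction of work that threads
                std::printf("parallel fraction %.0f%%:\n", p * 100.0);
                for (int n : {1, 2, 4, 8, 64}) {           // number of cores
                    double fMin = fps * W * ((1.0 - p) + p / n);
                    std::printf("  %2d cores -> need about %.2f GHz\n", n, fMin / 1e9);
                }
            }
        }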

    -----

    The Cell processor in the PS3 has one PPE that looks like it's basically a typical CPU core, but the rest of its "CPU cores" are SPEs that are much more restricted in their functionality.  I'm not sure exactly how restricted they are, but I suspect that it's enough to make fairly simple game threading models simply not work.

    If you had a CPU that could do 1 PFLOPS of 16-bit floating-point addition but couldn't do anything else, the fact that games would be unable to push it wouldn't be because 1 PFLOPS isn't enough performance.  It would be because the needed versatility simply wasn't there.  Making a game perform decently on such a restricted chip would be nearly impossible.

  • Quizzical Member LegendaryPosts: 25,351
    Originally posted by JayFiveAlive
    Originally posted by Phry

    must admit to being more than a little puzzled, why would they choose to base the 'next gen' consoles on processors that are only capable of running at 1.6ghz, to me it just doesnt make sense, we're already seeing multicore processors for PC's running at 3.0+ ghz  and we're not even talking high end, but the mid range PC's as for Cores, 8 isnt really that awesome a number anymore, its already been exceeded.

    It looks more like the 'power gap' between PC's and Consoles just gets wider and wider, when even a mid range PC is capable of far more than a console that hasnt even hit the streets yet

    It's not a fair comparison though. Consoles run their GUI and a game with usually slightly modified hardware to suck extra performance out of it. PCs are running a ton of things all the time, have drivers that need to talk to other things, etc etc... I'd be surprised if the true power of even a 3-4 year old graphics card has ever been reached/used in the PC consumer market. Consoles tend to use more of their hardware's potential. Games can look great after like 5-6-7-8 years on a console. Imagine getting that out of a PC and still playing a very good quality without being choppy/slowdowns or needing to reduce graphics quality.

     

    I know my answer isn't necessarily perfect, but it's the general idea anyway haha.

    Open Task Manager and see just how much CPU power those background processes are using.  If everything except the game you're playing and the system idle process adds up to more than about 3%, then you should consider closing some programs--and if it's not obvious which programs to close, then you may have a bloatware (or worse, malware) problem.

  • JayFiveAlive Member UncommonPosts: 601
    Originally posted by Quizzical
    Originally posted by JayFiveAlive
    Originally posted by Phry

    must admit to being more than a little puzzled, why would they choose to base the 'next gen' consoles on processors that are only capable of running at 1.6ghz, to me it just doesnt make sense, we're already seeing multicore processors for PC's running at 3.0+ ghz  and we're not even talking high end, but the mid range PC's as for Cores, 8 isnt really that awesome a number anymore, its already been exceeded.

    It looks more like the 'power gap' between PC's and Consoles just gets wider and wider, when even a mid range PC is capable of far more than a console that hasnt even hit the streets yet

    It's not a fair comparison though. Consoles run their GUI and a game with usually slightly modified hardware to suck extra performance out of it. PCs are running a ton of things all the time, have drivers that need to talk to other things, etc etc... I'd be surprised if the true power of even a 3-4 year old graphics card has ever been reached/used in the PC consumer market. Consoles tend to use more of their hardware's potential. Games can look great after like 5-6-7-8 years on a console. Imagine getting that out of a PC and still playing a very good quality without being choppy/slowdowns or needing to reduce graphics quality.

     

    I know my answer isn't necessarily perfect, but it's the general idea anyway haha.

    Open Task Manager and see just how much CPU power those background processes are using.  If everything except the game you're playing and the system idle process adds up to more than about 3%, then you should consider closing some programs--and if it's not obvious which programs to close, then you may have a bloatware (or worse, malware) problem.

    lol, yes, but you get my point I hope :P
