Have we reached the end of single-threaded CPU performance?


Comments

  • Ridelynn Member Epic Posts: 7,383


    Originally posted by Scalpless
    Yes, I'm not claiming it's anything new. However, the fact that new console games will be optimized to utilize eight cores may influence Intel's CPUs. So far, people have been ignoring those eight-core CPUs, because they're a bit crappy compared to Intel's offerings performance-wise. Most people ignore i7s, too, because they're just not worth the price at the moment. Them being available or not doesn't make much of a difference.

    The PS3 has an 8-core CPU...

    The AMD CPUs aren't "crappy" - they are slower per core, but traditional software design has emphasized IPC rather than core count.

    So I will agree, having more mid-core count CPUs in gaming may affect the software. However, you were claiming it would influence the hardware, and I don't see any way that it could do so.

  • Quizzical Member Legendary Posts: 25,350
    Originally posted by Scalpless

    From the practical point of view, the new generation of consoles will influence consumer CPUs greatly. They've got eight 1.6 GHz cores, if I remember correctly. Most multiplatform games will be built with that in mind. Time will tell how that'll influence their performance on standard four-core CPUs. Maybe we'll see lots of games optimised for eight cores, or maybe hyperthreading will finally get used. We'll probably not see many games optimised exclusively for 2+ GHz cores.

    If the average gamer doesn't need the newest Fancybridge CPU, he's less likely to buy it, and CPU producers are less likely to focus on it.

    While the Xbox One and PS4 have eight CPU cores, they're eight slow cores, and will usually lose in performance to, say, a quad core A10-6800K, even in programs that scale flawlessly to eight cores.

    The Xbox One CPU is supposedly going to be clocked at 1.75 GHz, and I'd expect the PS4 to be around there, too.  AMD has already released products clocking Jaguar cores at anywhere from 1 GHz to 2 GHz.  Throttling Jaguar back to 1 GHz makes sense when you're trying to make a 3.9 W SoC for a tablet, but a game console can readily dissipate 100 W.

    But Jaguar cores make a ton of sense for a game console.  One general principle is that more cores clocked lower beat fewer cores clocked higher if you're running software that scales well to more cores.  While 8 Jaguar cores at 1.75 GHz won't be quite as fast as the CPU in an A10-6800K, it probably won't use half the power of the A10-6800K, either--ignoring the GPU on both for the comparison.  Getting a potent gaming CPU that only uses maybe 20 W under a fairly heavy gaming load is great for a console.  AMD Jaguar cores are really the best option there even on pure CPU performance; add to that that AMD is one of only two graphics vendors with proven high-performance graphics to make an SoC and it's no wonder that both Microsoft and Sony went with AMD.

  • Scalpless Member Uncommon Posts: 1,426
    Originally posted by Ridelynn

    The PS3 has an 8-core CPU...

    The AMD CPUs aren't "crappy" - they are slower per core, but traditional software design has emphasized IPC rather than core count.

    So I will agree, having more mid-core count CPUs in gaming may affect the software. However, you were claiming it would influence the hardware, and I don't see any way that it could do so.

    The PS3 has a Cell processor, which has a completely different architecture. It has nine "elements", but it's not comparable to the CPUs commonly used in desktops.

    AMD CPUs perform worse in benchmarks than Intel CPUs of comparable prices. That's why they're worse. IPC isn't a good indicator of how fast a CPU is in practice.

    As for the last part, hardware is made for software, so what influences one influences the other.

  • Quizzical Member Legendary Posts: 25,350
    Originally posted by Scalpless

    AMD CPUs perform worse in benchmarks than Intel CPUs of comparable prices. That's why they're worse. IPC isn't a good indicator of how fast a CPU is in practice.

    As for the last part, hardware is made for software, so what influences one influences the other.

    I think you are confused.  IPC = Instructions Per Cycle, loosely, performance per core per clock cycle.  Were it not for Intel's IPC advantage over AMD in their higher end CPUs, basically their whole CPU line would be junk.

    As for benchmarks, it really depends on whether you're looking at programs that scale well to many CPU cores or not.  For purely single-threaded performance, Intel usually wins in any given price range unless you're looking at Atom.  For programs that scale well to many CPU cores, AMD usually wins at a given price point.

    But the question isn't just who wins, but also by how much.  If a Core i3 barely beats an FX-6350 that costs the same in single-threaded performance, while the FX-6350 completely slaughters the Core i3 in programs that scale well to many CPU cores, I say the FX-6350 is the easy choice for gamers.  Were they close in programs that scaled to six cores and not at all close in single-threaded programs, then I'd favor the Core i3 more.

  • Ridelynn Member Epic Posts: 7,383

    The Cell processor is based on the PowerPC architecture.

    PowerPC has had versions of OS X and Linux, and even had a version of Windows for a long time. The original PS3 even had its own Linux distribution for a long while, which ran on the Cell CPU.

    It may not be instruction-set compatible, but it's safe to say that Cell is certainly analogous to a "desktop CPU".

  • Scalpless Member Uncommon Posts: 1,426
    Originally posted by Quizzical
    Originally posted by Scalpless

    AMD CPUs perform worse in benchmarks than Intel CPUs of comparable prices. That's why they're worse. IPC isn't a good indicator of how fast a CPU is in practice.

    I think you are confused.  IPC = Instructions Per Cycle, loosely, performance per core per clock cycle.  Were it not for Intel's IPC advantage over AMD in their higher end CPUs, basically their whole CPU line would be junk.

    Yes, I didn't think that part of my post through. What I meant was that the clearest way to settle the whole AMD vs. Intel debate is to look at some benchmarks of CPUs in comparable price ranges. In the end, those are what really matter. IPC is something most people haven't even heard of, so it doesn't influence their decision-making, but it's true that it's probably Intel's main advantage currently.

    Originally posted by Quizzical

    As for benchmarks, it really depends on whether you're looking at programs that scale well to many CPU cores or not.  For purely single-threaded performance, Intel usually wins in any given price range unless you're looking at Atom.  For programs that scale well to many CPU cores, AMD usually wins at a given price point.

    But the question isn't just who wins, but also by how much.  If a Core i3 barely beats an FX-6350 that costs the same in single-threaded performance, while the FX-6350 completely slaughters the Core i3 in programs that scale well to many CPU cores, I say the FX-6350 is the easy choice for gamers.  Were they close in programs that scaled to six cores and not at all close in single-threaded programs, then I'd favor the Core i3 more.

    I'd agree if more programs scaled well to many CPU cores. Right now that seems limited to non-gaming benchmarks and professional software like rendering and video encoding, while games tend to run slightly better on Intel. Rendering and video encoding are things few people care about. Usually, when they buy a desktop i5 or i7, they want it to run games. If games also start running better on Bulldozer-like architecture, Intel processors will become less appealing. This may happen in the PS4/XBOne generation and it'd be interesting to see the results, which would probably include an even larger focus on multithreading.

  • Ridelynn Member Epic Posts: 7,383


    Originally posted by Scalpless
    I'd agree if more programs scaled well to many CPU cores. Right now that seems limited to non-gaming benchmarks and professional software like rendering and video encoding, while games tend to run slightly better on Intel. Rendering and video encoding are things few people care about. Usually, when they buy a desktop i5 or i7, they want it to run games. If games also start running better on Bulldozer-like architecture, Intel processors will become less appealing. This may happen in the PS4/XBOne generation and it'd be interesting to see the results, which would probably include an even larger focus on multithreading.

    I think you're making some poor assumptions here:

    First off, just running an executable through a Bulldozer-esque optimization pass won't all of a sudden make programs run better on existing CPUs. Compilers already contain a lot of optimizations for a lot of different CPUs, Bulldozer included. The hardware is what it is, and you can't squeeze blood from a turnip - you may get a percent or two (or maybe more if it's something extremely specific), but you won't all of a sudden see an FX-6350 beating an i5-4570 because you switched some optimizations. To use an analogy: you won't see a Honda Civic all of a sudden beating a Lamborghini in a race just because you put premium gas in the tank.

    Now, you do make a point: if you start optimizing for 6 cores rather than 2 or 4 cores, you may see an FX-6350 beat an i5-4570. But that's difficult to do. Not every algorithm scales well, and while it's challenging enough to target a specific number of cores, writing a program that scales efficiently for "n" cores is very difficult in the general case (there's a rough sketch of this further down). You can get some specific tasks that scale well, and a lot of them that don't.

    But that's not quite the same thing as saying "optimize for Bulldozer rather than Intel" - because that's not what you're doing. It's an entirely different subject. To continue the analogy, let's say we change the name of the race from "fastest car" to "longest interval between gas stops" - now, all of a sudden, the Honda Civic doesn't look nearly so bad. Not because we changed anything about the car, just because we changed what we were trying to do with it.
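    To make the scaling point concrete, here's a rough Python sketch (purely illustrative, not from any real engine) of why some work splits cleanly across cores and some just can't:

        from multiprocessing import Pool

        def shade_tile(tile):
            # Independent chunks of work like this scale close to linearly
            # with core count - each tile needs nothing from the others.
            return sum(i * i for i in range(tile * 100000, (tile + 1) * 100000))

        def simulate_step(state):
            # Work like this is serial by nature - every step needs the
            # result of the step before it, so extra cores sit idle.
            return state * 1.000001 + 1.0

        if __name__ == "__main__":
            with Pool(processes=8) as pool:      # spread tiles over up to 8 cores
                shaded = pool.map(shade_tile, range(64))

            state = 0.0
            for _ in range(10000):               # has to run one step at a time
                state = simulate_step(state)

    A real game is a mix of both kinds of work, which is why "scales to n cores" is never as simple as it sounds.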

    And as far as the assumption that most people buy a computer for gaming rather than rendering... I agree that ~more~ people probably buy a computer for gaming than rendering, but I would say ~most~ people buy a computer for Microsoft Office and Facebook and Angry Birds... that's the reason Intel GPUs are still the most popular GPUs in the world: most people don't game on their PCs (or if they do, it's extremely casual gaming).

    Intel CPUs win at most gaming benchmarks right now because they have prioritized IPC, which is what most older programs require to run efficiently - whereas AMD has prioritized "good enough" IPC along with core counts, power use in the server realm, and superior integrated GPUs in the consumer realm. And that's the crux of the entire original post: has Intel stopped making gains in IPC because they don't have to (AMD hasn't been serious competition in single-threaded performance since the original AMD Athlon/Pentium 4 days), or because they have hit a brick wall with regard to physics and there isn't much more they can do to squeeze IPC gains out of the existing manufacturing technology?

  • Quizzical Member Legendary Posts: 25,350

    One major reason why Intel CPUs look better in gaming benchmarks that you'll see on tech sites is that the sites go out of their way to find games that will run poorly on capable hardware, which gives a big bias toward showcasing badly-coded games.  In one sense, one could sensibly say that performance in games that run well on everything shouldn't affect your hardware decisions.  Even so, badly coded games tend to have much bigger problems as a result of being badly coded than merely poor CPU core scaling.

    Thinking of core scaling as "a program scales to n cores" can be a useful shorthand, but it's important to understand that that's not really the underlying reality.  It's not a case of perfect scaling to n cores with no additional gain beyond that.  Suppose that you could disable arbitrary numbers of cores of an FX-8350.  You do testing and you find that if you get x performance with one core, you get 1.8x performance with two cores, 3x performance with four cores, 4x performance with six cores, and 4.5x performance with eight cores.  How many cores does the program scale to?  That sort of scaling is actually pretty common for a variety of reasons.
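    That kind of curve is pretty much what Amdahl's law predicts. A quick back-of-the-envelope Python sketch (the 10% serial fraction is just a guess chosen to roughly match the numbers above, not a measurement):

        def amdahl_speedup(cores, serial_fraction):
            # Amdahl's law: overall speedup is capped by the part of the
            # program that can only run on one core at a time.
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

        for cores in (1, 2, 4, 6, 8):
            print(cores, round(amdahl_speedup(cores, 0.10), 2))
        # prints roughly 1.0, 1.82, 3.08, 4.0, 4.71

    The hypothetical 4.5x at eight cores coming in a bit under the ~4.7x this predicts is partly down to the effects described in the next paragraph.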

    Furthermore, if a program scaled perfectly to arbitrarily many cores, then that program on an eight-core FX-8350 might give six times the performance with eight threads that you'd see with only one thread, not eight.  The CPU will clock lower when you're pushing more cores, and shared resources within the CPU itself will also decrease the per-core performance when you're using more cores.

  • Iselin Member Legendary Posts: 18,719
    Originally posted by Quizzical

    One major reason why Intel CPUs look better in gaming benchmarks that you'll see on tech sites is that the sites go out of their way to find games that will run poorly on capable hardware, which gives a big bias toward showcasing badly-coded games.  In one sense, one could sensibly say that performance in games that run well on everything shouldn't affect your hardware decisions.  Even so, badly coded games tend to have much bigger problems as a result of being badly coded than merely poor CPU core scaling.

    That's a very good point, but badly coded games (and other programs) are not all lumped in one particular type of game or application category. If you're going to get any useful information from looking at performance comparisons in different real-world applications for yourself, you need to focus on the games and applications you use or would use. Neither transcoding nor zip benchmarks do very much for me, since neither is something I spend much time doing.

    You also have to beware of games that are specifically optimized for certain hardware. This is much more so an issue with graphics cards than CPUs but taking a look at which games/apps a particular manufacturer bundles free with their product for special promotions should give you a clue about which performance comparisons you should take with a grain of salt.

    Besides, software vendors--particularly MMO vendors--don't necessarily code for the cutting-edge hardware available. It's in their best interest to provide good performance for the lowest common denominator in their target demographic. Even a program like Photoshop, which has been able to take advantage of multiple cores since the mid-90s, when "multiple cores" meant exotic motherboards with 2 or more single-core CPU sockets, shows diminishing returns in its latest version once you go beyond 2 cores and almost zero advantage beyond 4.

    The whole multicore vs. faster cores thing is a bit of a circular chicken-and-egg discussion. Chip manufacturers started focusing on multi-core parallel performance when the technology reached a point where it was becoming increasingly difficult to realize significant gains from core clock boosts. My question now is--and I don't know the answer to it--have they kept just increasing core count and tweaking the die out of habit, or because the technology still isn't there to focus once again on performance gains through speed?

    The emphasis AMD puts on speed compared to Intel, and the relative ease with which Intel chips--even the Haswells--can be overclocked by 33% or more with just a good heat-pipe-and-fan cooler (the kind of percentage boost that used to be reserved for hardcore exotic-cooling overclocks back in the single-core days), make me think that they could once again focus on core clocks instead of even more cores that hardly anything uses.

    Anyway...good post Quiz. Interesting discussion.

     

    EDIT: just to add a link to an article of some relevance from last year about nanotubes replacing silicon:  http://bits.blogs.nytimes.com/2012/10/28/i-b-m-reports-nanotube-chip-breakthrough/?_r=0

     

    "Social media gives legions of idiots the right to speak when they once only spoke at a bar after a glass of wine, without harming the community ... but now they have the same right to speak as a Nobel Prize winner. It's the invasion of the idiots”

    ― Umberto Eco

    “Microtransactions? In a single player role-playing game? Are you nuts?” 
    ― CD PROJEKT RED

  • Ridelynn Member Epic Posts: 7,383


    Originally posted by Iselin
    My question now is--and I don't know the answer to it--have they kept just increasing core count and tweaking the die out of habit or because the technology still isn't there to focus once again on performance gains through speed?
     

    Because they are in the business of selling chips - and if you don't have ~something~ new to put on the side of the box, how else are you going to get all those people to upgrade?

  • Quizzical Member Legendary Posts: 25,350
    Originally posted by Iselin
    Originally posted by Quizzical

    One major reason why Intel CPUs look better in gaming benchmarks that you'll see on tech sites is that the sites go out of their way to find games that will run poorly on capable hardware, which gives a big bias toward showcasing badly-coded games.  In one sense, one could sensibly say that performance in games that run well on everything shouldn't affect your hardware decisions.  Even so, badly coded games tend to have much bigger problems as a result of being badly coded than merely poor CPU core scaling.

    That's a very good point, but badly coded games (and other programs) are not all lumped in one particular type of game or application category. If you're going to get any useful information from looking at performance comparisons in different real-world applications for yourself, you need to focus on the games and applications you use or would use. Neither transcoding nor zip benchmarks do very much for me, since neither is something I spend much time doing.

    You also have to beware of games that are specifically optimized for certain hardware. This is much more so an issue with graphics cards than CPUs but taking a look at which games/apps a particular manufacturer bundles free with their product for special promotions should give you a clue about which performance comparisons you should take with a grain of salt.

    Besides, software vendors--particularly MMO vendors--don't necessarily code for the cutting-edge hardware available. It's in their best interest to provide good performance for the lowest common denominator in their target demographic. Even a program like Photoshop, which has been able to take advantage of multiple cores since the mid-90s, when "multiple cores" meant exotic motherboards with 2 or more single-core CPU sockets, shows diminishing returns in its latest version once you go beyond 2 cores and almost zero advantage beyond 4.

    The whole multicore vs. faster cores thing is a bit of a circular chicken-and-egg discussion. Chip manufacturers started focusing on multi-core parallel performance when the technology reached a point where it was becoming increasingly difficult to realize significant gains from core clock boosts. My question now is--and I don't know the answer to it--have they kept just increasing core count and tweaking the die out of habit, or because the technology still isn't there to focus once again on performance gains through speed?

    The emphasis AMD puts on speed compared to Intel, and the relative ease with which Intel chips--even the Haswells--can be overclocked by 33% or more with just a good heat-pipe-and-fan cooler (the kind of percentage boost that used to be reserved for hardcore exotic-cooling overclocks back in the single-core days), make me think that they could once again focus on core clocks instead of even more cores that hardly anything uses.

    Anyway...good post Quiz. Interesting discussion.

     

    EDIT: just to add a link to an article of some relevance from last year about nanotubes replacing silicon:  http://bits.blogs.nytimes.com/2012/10/28/i-b-m-reports-nanotube-chip-breakthrough/?_r=0

     

    Suppose that you can choose between processor A and processor B today.  Both are plenty fast enough for everything that you'll run today.  But processor A will still be plenty fast enough for everything you do five years from today, while processor B will struggle greatly with some things that you want to do in five years.  Which should you buy today?

    Now, obviously, that depends on other factors, such as the price tag.  But if they're the same price, then processor A is surely the better choice.

    Now suppose that it's late 2008 and you're looking to buy a new CPU.  The options that you find most interesting are the Core 2 Duo E8600 and the Core i7-920.  Both are conveniently the same price at a little under $300, which is about as much as you can afford. The former is faster in single-threaded programs, as it's clocked substantially higher.  The latter wins by a huge margin in programs that scale well to many cores, as it has four cores and also hyperthreading.  But few programs that you use in 2008 scale well to more than two cores, so the Core 2 Duo benchmarks better in most of the programs you run.  As a bonus, the Core 2 Duo is also more energy efficient.  Both chips are great overclockers, in case you later see a need to go that route.  Which CPU do you buy?

    If you bought the Core i7-920, then you'd still have a decent CPU today.  It's somewhat dated, but still functional, and can still run just about anything decently today.  If you bought a Core 2 Duo E8600, it would struggle greatly with some recent games.  You'd probably have replaced it by now on the basis that it wasn't performing well enough.  Not that many games got much use out of more than two cores in 2008, but a whole lot sure do today.  Game designers know that many of their customers have at least four CPU cores (about half on the Steam Hardware Survey), so if your game can put a third and fourth core to good use, a large fraction of your customer base will benefit greatly.

    In hindsight, the Core i7-920 was the far better buy, rather than the Core 2 Duo E8600--even though the Core 2 Duo benchmarked better in a large majority of programs (including most games) back in 2008.

    Come 2020, how many people do you think will be running computers with more than four CPU cores?  How many games do you think will see considerable benefit from more than four CPU cores?  I'd be willing to bet "a lot" on both counts.  And with CPU improvements slowing down, it's likely that some CPUs available today will still be decent then.  So I wouldn't scoff at the notion of buying more than four CPU cores today.

    Now, even in programs that scale well to many cores, an FX-6300 is only roughly equal to a Core i5-4670K, and an FX-8350 is only roughly equal to a Core i7-4770K.  So it's not automatic that more cores is better.  And an Intel six core system costs so much that it might well be cheaper to buy a quad core today and replace it entirely with a 6- or 8-core system four years from now than to buy a Sandy Bridge-E or Ivy Bridge-E six core system today.

    But if the choice is between an FX-6300 or a Core i3 dual core, I think that someone who buys the Core i3 today will end up looking even worse five years from now than someone who chose the Core 2 Duo E8600 over the Core i7-920 in late 2008.

    -----

    Performance per core = (performance per clock cycle per core) * (clock speed), and that first factor is loosely what IPC measures.  So if you want faster per-core performance, you need either higher IPC or higher clock speeds.  The problem is that higher IPC is just plain hard to do:  how do you make a CPU core do a lot more per clock cycle on a single thread?  Meanwhile, physics gets in the way if you try for higher clock speeds, as power consumption quickly gets out of hand.  That's why single-threaded CPU performance has stalled.
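    As a toy illustration of that formula (the numbers here are made up for the example, not real benchmark figures):

        # Per-core throughput ~ IPC * clock, ignoring memory, turbo, cache, etc.
        def per_core_perf(ipc, clock_ghz):
            # Billions of instructions per second for one core.
            return ipc * clock_ghz

        cpu_a = per_core_perf(ipc=2.0, clock_ghz=3.5)   # high-IPC design
        cpu_b = per_core_perf(ipc=1.4, clock_ghz=4.2)   # high-clock design
        print(cpu_a, cpu_b)   # 7.0 vs 5.88 - the IPC design wins despite the lower clock

        # The catch with chasing clocks: dynamic power scales roughly with
        # capacitance * voltage^2 * frequency, and reaching higher frequencies
        # usually means raising voltage too, so power climbs much faster than
        # performance does.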

  • drbaltazar Member Uncommon Posts: 7,856
    The best company to ask is Sony (EverQuest 2); that's the last game that was made to scale with CPU speed. I sure would like to see a 427 GHz processor rendering EverQuest 2. But let's be real: if the GPU makers (AMD and Nvidia) were to add a hardware timer and substitute it for the CPU's, gamers would gain a lot. The CPU timer can't handle what the GPU is capable of, but the GPU can handle what the CPU is capable of. HPET at the GPU hardware level - now that would be way better!