AMD talks about Carrizo

Quizzical (Member, Legendary, Posts: 25,347)

http://anandtech.com/show/8995/amd-at-isscc-2015-carrizo-and-excavator-details

Basically, expect 5% IPC improvements, but big power savings in a lot of ways on the same process node as Kaveri.  Apparently Carrizo is due to arrive in the second quarter of this year.

The 5% IPC improvement is enough to keep pace with Intel's typical yearly improvements.  Unfortunately for AMD, "keep pace with" means "remain about 30% behind".  Zen can't come soon enough in that regard.
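
A quick sketch of that arithmetic, with illustrative numbers only (the 30% is the rough gap described above, and the competition is assumed to stand completely still, which it won't):

```python
# Rough arithmetic (illustrative numbers, not anyone's published roadmap):
# how many 5% IPC bumps does it take to close a ~30% deficit if the
# competition stands completely still?
amd, competitor = 1.0, 1.3
generations = 0
while amd < competitor:
    amd *= 1.05
    generations += 1
print(generations)  # 6 generations even then; if the competition also
                    # gains ~5% per year, the gap never closes at all
```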

But large power savings at idle will extend battery life considerably.  I don't know if AMD will catch Haswell/Broadwell there, but at least getting into the same ballpark is good enough.  The difference between 0.1 W and 0.2 W matters a lot less than the difference between 1 W and 2 W--or the difference between 5 W and 10 W.
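
A back-of-the-envelope illustration of why the absolute idle draw matters more than the ratio, assuming a hypothetical 50 Wh battery and ignoring the display and everything else in the platform:

```python
# Idle draw vs. battery life, assuming a hypothetical 50 Wh laptop battery
# and ignoring the display and the rest of the platform.
battery_wh = 50.0
for idle_watts in (0.1, 0.2, 1.0, 2.0, 5.0, 10.0):
    hours = battery_wh / idle_watts
    print(f"{idle_watts:4.1f} W idle -> {hours:6.1f} hours")
```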

Large power savings at load mean you can do things while using less power, or you can clock the chip more aggressively.  This could be a big deal for games, allowing considerable CPU turbo while the GPU is in use.

Carrizo is also an SoC, which saves a lot on both cost and power.  It might be the reason why Kaveri didn't get used much in laptops:  Carrizo is much better for laptops and not that many months behind.

But it also explains why Carrizo isn't coming to desktops:  it would be a bad desktop chip.  You don't make an SoC with 6 SATA ports and 8 USB ports.  Laptops don't need that many ports.  But if your entire system has only 2 SATA ports, 4 USB ports, and minimal PCI Express connectivity, it's a stupid chip for desktops.

Furthermore, there are trade-offs between high performance and low power consumption.  While still a Bulldozer derivative, Carrizo goes heavily for low power consumption.  That's absolutely the right choice for the laptop market that it targets, but terrible for desktops.  If a desktop part could only have the CPU clock up to 3.5 GHz, even with the IPC increase, that's still slower than Kaveri or even Richland.  The GPU is better than Richland, to be sure, but even if the part manages to be as fast as Kaveri in a desktop while using 30% less power, so what?

I didn't find any mention of Carrizo's PCI Express bandwidth.  It's not clear to me whether there will be enough to reasonably attach a discrete video card.  Broadwell, for example, doesn't have enough bandwidth for that.  Nor do Bay Trail or Beema.  But that's probably all right for the markets AMD is targeting with Carrizo; if you wanted a high-powered gaming laptop, you'd want an Intel CPU to go with that discrete video card.

Also important is that AMD claims that they're getting these huge gains on the same process node as before.  In one sense, this leaves the status quo unchanged in the AMD versus Intel wars:  AMD has better GPUs and Intel better CPUs.  But AMD got their big gains without a new process node, while Intel had the biggest process node jump they had in many years.  Moving to 14/16 nm should offer huge gains when AMD can do that, hopefully next year.

Comments

  • Ridelynn (Member, Epic, Posts: 7,383)

    Can't say it's really surprising.

    PC CPUs have been "fast enough" for a while now, for various reasons. Not least of which is that PCs are no longer a rapidly growing market, so the R&D simply isn't there like it used to be. Software development has shifted gears in a major way and hasn't pushed the need for anything faster in the past several years: growth has been "in the cloud" or "mobile".

    I'm not surprised that Intel still pushes it, but the speed difference between Intel and "everyone else" becomes less and less relevant each year.

  • Fearum (Member, Uncommon, Posts: 1,175)
    Has hardware jumped way ahead of programmers and their code? Is it waiting for them to catch up? I haven't really seen any huge improvements on the hardware front worth upgrading to for a few years. I still have a 3770K and a GTX 680, which do pretty well; I upgraded to those from a 2500K when the motherboard took a crap in 2012.
  • Ridelynn (Member, Epic, Posts: 7,383)

    Part of it is that hardware stopped making huge IPC advancements a little over 10 years ago and started branching outwards via parallelism to gain performance. More cores vs. faster cores.

    The first dual-core Pentium was released in 2005. That happened because they couldn't really make the Pentium faster anymore, so they just put more Pentiums in your Pentium. Now, that was far from the first multi-core CPU, but it was probably the first to make it into mainstream PCs in large numbers.

    That's a different paradigm of programming altogether.

    And once you start thinking in "parallel" modes of operation, you are no longer necessarily restricted to one machine - that's why we see a lot of the deep computing projects go out via distributed computing.

    So your Desktop CPU doesn't matter nearly as much as it used to.

    That, and the PC market is stagnant, so why invest a lot of money into a sector that is stale, isn't seeing growth, and isn't demanding much more in terms of performance?

    And if you don't believe that, just peek over at the mobile sector, which has exploded in the last 5 years. CPU speeds there have been doubling/tripling/quadrupling every year (like the PC market did back in the '90s when it exploded).

    Take a company like Qualcomm, which makes popular ARM CPUs for smartphones. Their earnings have more than doubled in the last 5 years, and they are now roughly half the size of Intel in terms of revenue, going from 3B to in excess of 7B per quarter from 2010 to 2015. In the same time frame, Intel has grown from around 11B to 14B per quarter, rather than the 100%+ growth experienced by Qualcomm, and most of that growth from Intel was on the back of Data Center and "Internet of Things" - not PC-centric hardware.

  • Gdemami (Member, Epic, Posts: 12,342)


    Originally posted by Ridelynn

    Part of it is that hardware has stopped making huge IPC advancements, that happened a little over 10 years ago

    Good luck finding any data backing that up...

  • Gdemami (Member, Epic, Posts: 12,342)


    Originally posted by Fearum

    Has hardware jumped way ahead of programmers and their code?

    When it comes to CPU and gaming, yeah...

    Pretty much anything today is quite playable on an ancient Core 2 Duo.

  • Ridelynn (Member, Epic, Posts: 7,383)

    Intel 486 - released late 1989, starting at a stock clock of 20 MHz, and eventually going up to 100 MHz

    Pentium (P5) - released early 1993, starting at a stock clock of 60 MHz, with several other IPC improvements on top of the clock speed increase, eventually going up to 300 MHz

    Pentium Pro/PII/PIII (P6) - released in late 1995, starting at a stock clock of 150 MHz, eventually going up to 600 MHz

    Pentium 4/D (NetBurst) - released in late 2000, starting at a stock clock of 1.4 GHz, eventually going up to 3.73 GHz. This generation introduced Hyper-Threading as well. NetBurst was known for its deep pipeline, which required ever-increasing clock speeds to improve performance.

    Now let's stop there. That covers a bit over 15 years of CPU history. Clock speed isn't the only thing that goes into performance, but let's make two pretty safe assumptions:
    - Clock speed for a given architecture directly correlates with performance
    - Each iteration introduced some new technology that generally increased IPC

    Just with those two assumptions, within a generation we see a 3-4x speed increase via clock increases alone, and a new generation arrives about every 4 years.

    We haven't seen the clock speed double since - in fact, the fastest stock clock today is 4.0 GHz (not counting Turbo clocking) - which is barely faster than the Pentium 4 era.

    Intel Core came out after Pentium 4/D. It had an emphasis on power efficiency, it significantly cut the clock speed back from NetBurst on its initial release (not uncommon, but this was a much larger delta than had been seen in the past), and it really started to push multi-core designs. It also cut the instruction pipeline in half relative to NetBurst.

    At this point, Intel also switched to its tick/tock product cycle - we see something new every 2 years now, but every tick is just a die shrink; we don't get the "new" generation until the tock. That's supposed to continue on roughly the same approximate 4-year schedule.

    Core is where we really see the slowdown, and there are plenty of people still running perfectly capable rigs with CPUs from around this era. Before Core, performance doubled about every 1.5 years (right along with Moore's law). We don't see that any more. Moore's law states that transistor count will double (and we do still see that, so Moore's law isn't dead yet), but it had become so common to expect performance to double along with it that most people forgot the distinction. Instead, we now see about a 10-15% performance improvement between iterations.

    In fact, let's look at a benchmark. Just glancing at Passmark single-core benchmark scores, the CPU with the highest single-core score right now is the i7-4790K, at 2534. It "doubled" from a score of 1267 - a score garnered by the Core 2 Quad Q9650, released in 2008 - requiring 6 years to double single-core CPU performance rather than the earlier 1.5-year timeline, a significant slowdown.
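
    To put numbers on that slowdown, here is a small sketch of how long doubling takes at various per-generation gains (illustrative percentages, nothing vendor-specific):

    ```python
    import math

    # Purely illustrative: how many generations does it take to double
    # performance at a given per-generation gain?
    for gain in (0.10, 0.125, 0.15):
        gens = math.log(2) / math.log(1 + gain)
        print(f"{gain:.1%} per generation -> {gens:.1f} generations to double")
    # At 10-15% per generation, doubling takes roughly 5-7 generations,
    # which lines up with the ~6 years in the Passmark comparison above.
    ```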

    Now, the only way that Intel (or any other x86 licensee) can claim to "double" performance is by doubling core count. And that gets back into the programming issue - it only doubles performance if you can perfectly leverage the multiple cores and ignore communication overhead - and most programs cannot do that.
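
    To make the core-count caveat concrete, here is a minimal sketch of the standard Amdahl's-law formula (generic, not tied to any particular CPU):

    ```python
    # Standard Amdahl's-law formula: doubling the core count only doubles
    # performance when the parallel fraction p is 1.0 and communication
    # overhead is ignored.
    def speedup(p, cores):
        """Ideal speedup for a workload whose parallel fraction is p."""
        return 1.0 / ((1.0 - p) + p / cores)

    for p in (0.5, 0.9, 0.99, 1.0):
        print(f"p={p:<5} 2 cores: {speedup(p, 2):.2f}x   8 cores: {speedup(p, 8):.2f}x")
    ```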

  • Quizzical (Member, Legendary, Posts: 25,347)

    IPC is Instructions Per Cycle, or loosely performance divided by clock speed.  While single-threaded CPU performance increased rapidly from the 1960s all the way until about 2008 or so, that was driven mostly by increases in clock speeds.  There were also IPC increases along the way, most notably with the Pentium Pro, Athlon 64, and Core 2, but most of the performance gain was clock speed increases.

    What's different now is that clock speeds have topped out, and are as likely to be slower as faster on successive process nodes.  So that leaves IPC as basically the only source of performance increases available, and that's slow going.
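
    As a tiny illustration of that relationship (made-up numbers, not any specific product): with clock speeds flat, the overall gain is exactly the IPC gain.

    ```python
    # Loose version of the definition above: performance ~ IPC x clock speed.
    # The numbers are made up purely to show the relationship.
    def relative_perf(ipc, clock_ghz):
        return ipc * clock_ghz

    baseline = relative_perf(ipc=1.00, clock_ghz=3.5)   # hypothetical current chip
    next_gen = relative_perf(ipc=1.05, clock_ghz=3.5)   # +5% IPC, clocks flat
    print(f"gain with clocks flat: {next_gen / baseline - 1:.1%}")   # 5.0%
    ```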

    As for a Core 2 Duo being good enough for everything, no, it's not.  It's good enough for a lot of things, certainly, but not everything.  My Core i7-860 struggles with Civilization IV, for example.  It would probably struggle with EverQuest II, though I haven't tried it.  And those games launched in 2005 and 2004, respectively.  Never underestimate the ability of badly coded software to perform poorly.

  • Quizzical (Member, Legendary, Posts: 25,347)

    What AMD said about Carrizo could also be a preview of the Radeon R9 390X (or whatever they decide to call their next flagship card).  There are a variety of conflicting rumors about it, but considering that AMD only released one discrete GPU chip in all of 2014, they're about due to have something new soon.

    The rumors that I'd regard as most credible have AMD moving the Radeon R9 390X to Global Foundries' 28 nm SHP process node--the same as is used for Carrizo, and is already used for Kaveri and Beema/Mullins.  AMD has already put in the work of heavily optimizing their GPU architecture for that process node as is necessary for Carrizo, and much of that work would also be useful if they wanted to build a discrete GPU on the same node.

    That said, while most of the GPU architecture will be the same for Carrizo as a discrete GPU chip, Carrizo lacks a GDDR5 memory controller.  Some rumors also claim that AMD will use HBM rather than GDDR5 for the Radeon R9 390X, though I regard that as less credible than claims of moving to Global Foundries.  Still, they did launch a Radeon HD 4870 with GDDR5 before GDDR5 even officially existed.

    Regardless, rumors about future AMD GPU chips are just that:  rumors.  It's plausible that AMD could wait for 14/16 nm for their next discrete video card, but that could be a long year+ with AMD trailing far behind Nvidia's Maxwell in all of the efficiency metrics in the meantime.

  • Hrimnir (Member, Rare, Posts: 2,415)
    Originally posted by Ridelynn

    Part of it is that hardware has stopped making huge IPC advancements, that happened a little over 10 years ago, and started branching outwards via parallelism to gain performance. More cores vs Faster cores.

    The first dual core pentium released in 2005. That happened because they couldn't really make the pentium faster anymore. So they just put more pentiums in your pentium. Now that was far from the first multi-core CPU, but it was probably the first to make it into main stream PCs in large numbers.

    That's a different paradigm of programming all together.

    And once you start thinking "parallel" modes of operation, you are no long necessarily restricted to one machine - that's why we see a lot of the deep computing projects go out via distributed computing

    So your Desktop CPU doesn't matter nearly as much as it used to.

    That, and the PC market is stagnant, so why invest a lot of money into a sector that is stale, isn't seeing growth, and isn't demanding much more in terms of performance.

    And if you don't believe that, just peek over at the mobile sector, which has exploded in the last 5 years, CPU speeds there are double/triple/quadruple every year (like the PC market did back in the 90's when it exploded).

    Take a company like Qualcomm, which makes popular ARM CPUs for smart phones. Their earnings have more than doubled in the last 5 years. They are now roughly half the size of Intel in terms of revenue, from 3B to in excess of 7B per quarter from 2010-2015. In the same time frame, Intel has grown, from around 11B to 14B, rather than the 100% experienced by Qualcomm, and most of that growth from Intel was on the back of Data Center and "Internet of Things" - not PC-centric hardware.

    Completely agree.  Most of Intel's processor enhancements have (lately) come from the server side of things, and those R&D developments transferred over to the desktop side.

    I think, like you said, the PC market is mostly stagnant outside of the gaming/enthusiast segment, which I don't really see going anywhere, especially if there aren't any more proper consoles and Steam Boxes or similar computers become a thing.

    I still don't believe mobile is the future; gaming on tablets and such is really the domain of "casual" gamers, and I honestly think we've seen that market hit, or come close to hitting, its peak.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Hrimnir (Member, Rare, Posts: 2,415)
    Originally posted by Ridelynn

    Now, the only way that Intel (or any other x86 licensee) can claim to "double" performance is by doubling core count. And that gets back into the programming issue - it only doubles performance if you can perfectly leverage the multiple cores and ignore communication overhead - and most programs cannot do that.

    This is really the biggest issue we face in the performance department: not the hardware, but the software.

    I don't know if it's laziness, half-assery, overwork, etc.  But way too much code gets released that is just bad, I mean really bad.  You see it all the time in game engines, games themselves, etc.  I don't know why things are done so much better on the hardware side, and why in software people are allowed/able to get away with using bad "tricks" and shortcuts, and stupid things that cause problems.  I don't code myself so I can't really speak to what it is, but I have a lot of friends who do, and they complain about having to fix other people's code, or about figuring out a problem in a program and finding something someone did in a really lazy way, etc.

    I remember one specific incident: a friend of mine had to look into a program written by his predecessor that was running very poorly and causing a lot of complaints within the company.  He pulls it up, finds out the guy basically did a really lazy, half-assed brute force method, rewrites the whole thing in a couple months, cuts it down to about 15% of the lines of code, and the program runs roughly 80% faster, cuts memory use by more than half, and returns queries in less than 10 seconds rather than several minutes... All because he actually put the effort in to just do it the right way.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Quizzical (Member, Legendary, Posts: 25,347)
    Originally posted by Hrimnir
    Originally posted by Ridelynn

    Now, the only way that Intel (or any other x86 licensee) can claim to "double" performance is by doubling core count. And that gets back into the programming issue - it only doubles performance if you can perfectly leverage the multiple cores and ignore communication overhead - and most programs cannot do that.

    This is really the biggest issue we face in the performance department, not the hardware, the software.

    I don't know its laziness, half assery, overworking, etc.  But way too much code gets released in programs that is just bad, i mean really bad.  You see it all the time in game engines, games themselves, etc.   I don't know why in the hardware side of things stuff is so much better done, and why in software people are allowed/able to get away with using bad "tricks" and shortcuts, and stupid things that cause problems.  I don't code myself so i can't really speak to what it is, but i have a lot of friends who do and they complain about having to fix other people's code or when they have to figure out a problem in a program and find something someone did in a really lazy way, etc.

    I remember in one specific incident, a friend of mine had to look into a program that was written by his predecessor that was running very poorly and causing a lot of complaints within the company.  He pulls it up, finds out the guy basically did a really lazy half assed brute force method, basically rewrites the whole thing in a couple months, was able to cut it down to about 15% of the lines of code, and the program ran roughly 80% faster, used more than half as much memory, etc, and was returning queries in less than 10 seconds rather than several minutes... All because he actually put the effort in to just do it the right way.

    1)  Not all programmers are competent.

    2)  Not all competent programmers are given the time necessary to make a good program before they have to move on.

    Still, if the new version of the program was only 15% as long as the old, that doesn't sound like case (2).

    -----

    It's much cheaper to make bad software than bad hardware.  It costs millions of dollars just to make the masks so that you can build a chip on a modern process node.  If you decide it doesn't do what you need and you need to redo it, pay millions of dollars again.  And that's entirely separate from the fab costs to actually build the chips you want.  If it cost millions of dollars to compile a program, you'd see a lot less bad software.  And a lot less good software, for that matter.

  • Hrimnir (Member, Rare, Posts: 2,415)
    Originally posted by Quizzical
    Originally posted by Hrimnir
    Originally posted by Ridelynn

    Now, the only way that Intel (or any other x86 licensee) can claim to "double" performance is by doubling core count. And that gets back into the programming issue - it only doubles performance if you can perfectly leverage the multiple cores and ignore communication overhead - and most programs cannot do that.

    This is really the biggest issue we face in the performance department, not the hardware, the software.

    I don't know its laziness, half assery, overworking, etc.  But way too much code gets released in programs that is just bad, i mean really bad.  You see it all the time in game engines, games themselves, etc.   I don't know why in the hardware side of things stuff is so much better done, and why in software people are allowed/able to get away with using bad "tricks" and shortcuts, and stupid things that cause problems.  I don't code myself so i can't really speak to what it is, but i have a lot of friends who do and they complain about having to fix other people's code or when they have to figure out a problem in a program and find something someone did in a really lazy way, etc.

    I remember in one specific incident, a friend of mine had to look into a program that was written by his predecessor that was running very poorly and causing a lot of complaints within the company.  He pulls it up, finds out the guy basically did a really lazy half assed brute force method, basically rewrites the whole thing in a couple months, was able to cut it down to about 15% of the lines of code, and the program ran roughly 80% faster, used more than half as much memory, etc, and was returning queries in less than 10 seconds rather than several minutes... All because he actually put the effort in to just do it the right way.

    1)  Not all programmer are competent.

    2)  Not all competent programmers are given the time necessary to make a good program before they have to move on.

    Still, if the new version of the program was only 15% as long as the old, that doesn't sound like case (2).

    -----

    It's much cheaper to make bad software than bad hardware.  It costs millions of dollars just to make the masks so that you can build a chip on a modern process node.  If you decide it doesn't do what you need and you need to redo it, pay millions of dollars again.  And that's entirely separate from the fab costs to actually build the chips you want.  If it cost millions of dollars to compile a program, you'd see a lot less bad software.  And a lot less good software, for that matter.

    I get that, and I don't want to sound flippant.  I'm sure there are a lot of guys who are put under ridiculous time crunches who would love to do it right but have to half-ass it.  That being said, there are a LOT of lazy programmers, or barely competent ones.  I have lots of friends who do various kinds of coding, from database to graphics to web development, and I get that complaint from all of them, constantly.

    As far as my friend's situation, I'd have to ask him the details, but he works for a federally funded government research company that does a lot of DOD and military stuff, so, generally speaking, he is given the funding and time he or the team needs to complete the task properly (as usually people's lives can depend on these things, and/or some colonel or general somewhere is mega pissed about something performing badly and wants it fixed, period).

    But, as usual, you make very valid and pertinent points.  I gotta stop being so emotionally driven in these discussions ;-)

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Ridelynn (Member, Epic, Posts: 7,383)

    Not to make excuses for the 15% fellow, but yeah, there are mitigating circumstances in every instance.

    The first rule of programming is it has to work.

    Then the second rule is that it should work well.

    If he spent 2 months rewriting a piece of code that originally took maybe an hour to do badly, that's a lot of work time.

    I don't know what it costs to hire out a programmer by the hour, but it's half-assed for one hour's worth of labor versus elegant and well-running for 320 hours.

    I'm obviously stretching this out for hyperbole's sake, but yeah - that's why programmers don't always get the time they would like to do it right - time costs money. Everything has a return on investment: if the extra time from that query is compounded across thousands/millions of users, it could very well have been worth the extra money and effort to do it right in the first place. If it's just something on a back-end backup server that runs once a week (just for instance), then maybe there is never a payback for something like that.
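
    A toy version of that return-on-investment calculation, with every number made up for illustration (hypothetical rates, query counts, and savings):

    ```python
    # Toy return-on-investment calculation; every number here is made up.
    rewrite_hours    = 320       # ~2 months of programmer time
    hourly_rate      = 75.0      # assumed loaded cost per hour
    time_saved_sec   = 170.0     # e.g. ~3 minutes down to ~10 seconds per query
    runs_per_week    = 500       # how often the slow query actually runs
    user_hourly_cost = 40.0      # assumed value of the waiting user's time

    rewrite_cost  = rewrite_hours * hourly_rate
    yearly_saving = (time_saved_sec / 3600.0) * runs_per_week * 52 * user_hourly_cost
    print(f"rewrite cost: ${rewrite_cost:,.0f}, saving per year: ${yearly_saving:,.0f}")
    # Heavily used code pays for the rewrite quickly; a once-a-week backup job
    # might never pay it back.
    ```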

  • Gdemami (Member, Epic, Posts: 12,342)

    ...and still no data. Twaddle about clock speeds :)
    Not even understanding what IPC is...



    Originally posted by Quizzical

    It would probably struggle with EverQuest II, though I haven't tried it.

    You would be surprised how much of what you say falls into that category - "haven't tried", having no backup in data.

    Core i7-860 struggling with EQ2...

  • Ridelynn (Member, Epic, Posts: 7,383)

    Oh, I'm sorry, you missed the part where I provided a benchmark that gets right to the IPC issue, apart from clock speed alone.

    But that's general discussion for the readers, since you haven't exactly offered any evidence yourself to support your particular claim.

  • Quizzical (Member, Legendary, Posts: 25,347)
    Originally posted by Gdemami

    ...and still no data. Twaddle about clock speeds :)
    Not even understanding what IPC is...

    Originally posted by Quizzical

    It would probably struggle with EverQuest II, though I haven't tried it.

    You would be surprised how much of what you say falls into that category - "haven't tried", having no backup in data.

    Core i7-860 struggling with EQ2...

    Well then.  I'll defer to the judgment of those who have run every game that ever has been made or ever will be made on every hardware combination possible.  Just as soon as you can find such a person.

  • Hrimnir (Member, Rare, Posts: 2,415)
    Originally posted by Quizzical
    Originally posted by Gdemami

    ...and still no data. Twaddle about clock speeds :)
    Not even understanding what IPC is...

    Originally posted by Quizzical

    It would probably struggle with EverQuest II, though I haven't tried it.

    You would be surprised how much of what you say falls into that category - "haven't tried", having no backup in data.

    Core i7-860 struggling with EQ2...

    Well then.  I'll defer to the judgment of those who have run every game that ever has been made or ever will be made on every hardware combination possible.  Just as soon as you can find such a person.

    I'm gonna have to agree with him. I built a brand new PC specifically for EQ2 when it came out; it had a GeForce 6800 GT and a single-core Athlon 64, can't remember how fast.  Either way, the engine was terribly optimized, and with max settings at 1280x1024 the game ran at roughly 20-35 fps.

    There should be absolutely 0 reason a relatively modern quad-core i7 would struggle in any way, shape, or form with EQ2, unless they've gone positively apeshit with the graphics in the later expansions (I played up to around 2007 or so).

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Ridelynn (Member, Epic, Posts: 7,383)

    I just tried EQ2, because I'm all about providing data in this thread.

    GTX 980 on an i7-4790K

    1920x1200, max everything (because that's what people care about, apparently).

    Yup, dips down as low as 40fps in starter town, with very few other people on screen. I don't know how to get to busy areas, but if it's already dipping in the starter area, I don't know that I need to go much farther to show it's not the greatest engine.

    FWIW - even EQ1 chokes, hard, on this (or any) machine in busy areas - not because they necessarily have done a lot with the graphics, just that it's a very old engine and isn't as efficient as it could be.

  • Quizzical (Member, Legendary, Posts: 25,347)
    Originally posted by Hrimnir
    Originally posted by Quizzical
    Originally posted by Gdemami

    ...and still no data. Twaddle about clock speeds :)
    Not even understanding what IPC is...

    Originally posted by Quizzical

    It would probably struggle with EverQuest II, though I haven't tried it.

    You would be surprised how much of what you say falls into that category - "haven't tried", having no backup in data.

    Core i7-860 struggling with EQ2...

    Well then.  I'll defer to the judgment of those who have run every game that ever has been made or ever will be made on every hardware combination possible.  Just as soon as you can find such a person.

    Im gonna have to agree with him, i built a brand new PC specifically for EQ2 when it came out, it had a GeForce 6800 GT and a single core athlon 64, can't remember how fast.  Either way, the engine was terribly optimized and with max settings at 1280x1024 the game ran roughly around 20-35 fps.

    There should be absolutely 0 reason a relatively modern quad core i7 would struggle in any way shape or form with EQ2, unless they've gone positively apeshit with the graphics in the later expansions (i played up to around 2007 or so).

    The reason I picked out EQ2 in particular is that it's notorious for having a single-threaded game engine that has that one thread do a lot of graphics work rather than offloading it to the GPU.  Going from one core to four (or, for that matter, sixteen) doesn't help if the program is single-threaded.
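
    A toy model of that situation, with made-up frame-time numbers rather than anything measured from EQ2:

    ```python
    # Toy model of a frame in a mostly single-threaded engine (made-up numbers,
    # not EQ2's actual engine): the serial work dominates, so more cores barely
    # move the frame rate.
    def frame_time_ms(serial_ms, parallel_ms, cores):
        return serial_ms + parallel_ms / cores

    for cores in (1, 4, 16):
        t = frame_time_ms(serial_ms=28.0, parallel_ms=2.0, cores=cores)
        print(f"{cores:2d} cores -> {t:5.1f} ms/frame ({1000.0 / t:.0f} fps)")
    ```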

    Around the time EQ2 launched, Intel was promising that their NetBurst architecture would scale all the way to 10 GHz.  Then physics got in the way.  Now, Intel could build a 10 GHz processor if they wanted to--and if the only requirement is that it has to run at 10 GHz, and not that it has to do anything useful at that clock speed.  But a 10 GHz CPU whose only operations are simple binary logic operations isn't going to be very useful, so they didn't.

    For what it's worth, my Core i7-860 is a quad core i7, but it's from 2009.  It has perhaps 1/2 to 2/3 of the CPU performance of Ridelynn's Core i7-4790K in most cases (e.g., excluding AVX-heavy workloads).

  • grndzro (Member, Uncommon, Posts: 1,162)

    I will have to back up the abysmal EQ2 performance. When I first tried it, I had an overclocked Athlon 64 X2 and an ATI Radeon X1950 XTX.

    I tried it years later on my 4 GHz Phenom II X6 with Radeon HD 5770 CrossFire. It was still terrible.
