
AMD Radeon HD 7990 launches. World doesn't care.

Comments

  • Ridelynn Member EpicPosts: 7,383


    Originally posted by Darkness690
    Originally posted by Mtibbs1989
    Originally posted by Quizzical
    AMD today launched the Radeon HD 7990, which basically constitutes two Radeon HD 7970s on a single card.  It's a little faster than a GeForce GTX 690, but also uses a lot more power, so if you're limited by power, the latter is the better buy.  It's also more expensive than two Radeon HD 7970 GHz Edition cards while being slower, so if you're not sharply limited by power, two (or three!) 7970 GHz Edition cards make more sense.  If you want the fastest performance you can get in a single slot, the older unofficial 7990s from PowerColor and Asus are clocked higher, so the new 7990 is slower than those.  If you want max performance at any price, then two or three GeForce GTX Titans is the way to go, as that has considerably faster GPUs.
    So basically, AMD just launched a new card and I don't see any reason why anyone would want to get one.  But hey, it's finally here after being delayed by a year or so from the initial rumors.

    The Titan isn't a card that's "better" than the GTX 690. They've stated this a few times already.
    "You're going to see some people who just say, 'I want maximum frame rate,' and if you want maximum frame rate, GTX 690 is for you." - Nvidia's Tom Petersen
    "If you want the best experience, if you want the best acoustics, then Titan is for you." - Nvidia's Tom Petersen
    Don't argue otherwise, this is directly from Nvidia themselves.
    The Titan isn't better as a single card, but it has a faster GPU. If you have an unlimited budget, 4 GTX Titans is the way to go. Remember, the GTX 690 is a dual-GPU card, so you're limited to running two of them.

    I also kinda chuckled at these nVidia quotes - straight from nVidia's "Director of Technical Marketing".

    No way we can argue the validity of those quotes - they are straight from nVidia themselves.

  • Quizzical Member LegendaryPosts: 25,355
    If you're only comparing those two cards to each other and not to anything else, then the quotes are correct.  A GeForce GTX 690 will get you better average frame rates, while a GeForce GTX Titan will get you a better gaming experience due to not having to mess with SLI.
  • Cleffy Member RarePosts: 6,412
    Originally posted by Quizzical
    Originally posted by Cleffy
    It's still true today, a little bit.  AMD's architecture is more efficient, just not well utilized.  There are actually some CPU applications where AMD is the better pick, like software 3D rendering.  This is with a CPU that is a full node behind the closest competition and costs less than any competitor close to its results.

    No, AMD's CPU architecture certainly is not more efficient than Intel's.  Never mind Ivy Bridge.  Compare Sandy Bridge-E to Vishera if you like, as those are both 32 nm.  A Core i7-3960X beats an FX-8350 at just about everything, in spite of having a much smaller die and using about the same power.

    3D rendering is a corner case of: if you support an instruction before your competitor does, then you win at applications that can spam that new instruction.  Piledriver cores support FMA and Ivy Bridge cores don't.  But that's a temporary advantage for AMD, as Haswell will support FMA.

    FMA is Fused Multiply Add.  What it does is to take three floating-point inputs, a, b, and c, and return a * b + c in a single step.  Obviously, it's trivial to do the same thing in two steps, by doing multiplication first and then addition.  But doing it in a single step means that you can do it twice as fast.
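    For anyone curious what that looks like in code, here's a minimal C sketch of my own (an illustration, not something from AMD or Intel documentation) using the standard C99 fma() function from <math.h>; a compiler targeting an FMA-capable CPU can lower the call to a single fused instruction.

        /* Hedged illustration: fma(a, b, c) returns a * b + c in one step,
         * rounded once, which is exactly what the hardware instruction does.
         * Link with -lm on most toolchains. */
        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double a = 1.5, b = 2.0, c = 0.25;
            printf("a * b + c    = %f\n", a * b + c);    /* two operations */
            printf("fma(a, b, c) = %f\n", fma(a, b, c)); /* one fused operation */
            return 0;
        }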

    FMA is hugely beneficial for dot products and computations that implicitly use dot products, such as matrix multiplication.  3D graphics uses a ton of matrix multiplication, as rotating a model basically means multiplying every single vertex by some particular orthogonal matrix.
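    To make the dot-product connection concrete, here's another small sketch (again my own illustration, not from the thread): each loop iteration is exactly one multiply-add, so FMA hardware can retire the whole loop body as single fused operations.

        /* Dot product written as a chain of fused multiply-adds:
         * sum = x[i] * y[i] + sum on every iteration. */
        #include <math.h>
        #include <stddef.h>

        double dot(const double *x, const double *y, size_t n) {
            double sum = 0.0;
            for (size_t i = 0; i < n; ++i)
                sum = fma(x[i], y[i], sum);
            return sum;
        }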

    But what else can make extensive use of FMA outside of 3D graphics?  Not much, really.  Some programs can use it here and there.  Doing the same thing for integer computations isn't supported at all.  That's why video cards have long supported FMA (any rated GFLOPS number you see is "if the video card can spam FMA and do nothing else"), but x86 CPUs didn't support it until recently.

    Corner cases from new instructions happen from time to time.  When Clarkdale launched, a Core i5 dual core absolutely destroyed everything else on the market in AES encryption, even when threaded to scale to many cores, because it supported AES-NI and nothing else did.  But that advantage only applied to AES, and now recent AMD CPUs support AES-NI, too.  So now it's a case of Core i3, Pentium, and Celeron processors being awful at AES because Intel disables AES-NI on them, while all other recent desktop processors are fast at it.
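    For what it's worth, here is roughly what "supporting AES-NI" means at the code level; a minimal sketch of my own, not tied to any particular benchmark, and the inputs are placeholders rather than a real key schedule. Compile with -maes (or -march=native on a CPU that has the instructions).

        /* _mm_aesenc_si128 performs one full AES encryption round
         * (SubBytes, ShiftRows, MixColumns, AddRoundKey) in a single
         * instruction; without AES-NI the same round costs dozens of
         * table lookups and XORs. */
        #include <immintrin.h>

        __m128i aes_one_round(__m128i block, __m128i round_key) {
            return _mm_aesenc_si128(block, round_key);
        }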

    lol, I should have worded it better, since it is not more efficient in current workloads.  My main argument is that current software poorly utilizes AMD's current design despite what its architecture is capable of. The current-generation Intel chips are 1.4 billion transistors on a 22nm process, including a GPU die, with a TDP of 84 watts.  The current-generation AMD chips are 1.2 billion transistors on a 32nm process, not including a GPU die, with a TDP of 125 watts.  From a fundamental standpoint, a chip with more transistors clocked higher will perform better.  So why isn't AMD?  It's clocked higher, and if you negate Intel's GPU, it has more transistors.
    I really don't want to go in depth into the size of L1 caches and the operations per cycle possible as a result.  This is not my strong point and I don't do much research into it.  What I can say, and you will most likely agree on, is that in single-threaded applications, Intel will perform significantly better.  In multi-threaded applications, and those that scale to more than 4 threads, AMD is better.

    I think when you boil it down, single-threaded applications typically don't require much computational power; otherwise they would be multi-threaded.  So in the case of single-threaded applications, AMD will do the job, unless 6 seconds on a video compression makes a difference to you.  Yet in applications that need the threads, AMD has an advantage in this sector, even without factoring in its 2 integer cores per module or the FMA instruction set.  To me, AMD has a much more forward-looking design that just is not being utilized much outside of niche businesses.

    Also, from rumors, Jaguar will perform 30% better than Piledriver in single-threaded apps, compared to the 10% increase going from Ivy Bridge to Haswell.

  • Zezda Member UncommonPosts: 686
    30% better performance than Piledriver would be nice, but it still wouldn't catch Intel.
  • Ridelynn Member EpicPosts: 7,383


    Originally posted by Cleffy
    What I can say, and you will most likely agree on, is that in single-threaded applications, Intel will perform significantly better.  In multi-threaded applications, and those that scale to more than 4 threads, AMD is better.

    You can only draw that conclusion from the fact that AMD tends to sell chips with more physical cores than Intel, which has very little to do with anything software-related.

    You could make the case that AMD's modular core design makes it cheaper to offer more cores, as each module presents itself as two cores, compared to Intel's method of actual discrete cores plus HyperThreading. And then argue, on the basis of cost versus performance, that AMD is superior - which may have some basis: a CPU load designed for 8 cores will likely perform better on AMD's 8350, with 8 full cores presented from 4 modules, than it will on Intel's i7-3770, with 4 full cores and 4 virtual HT cores.

    Now, whether this is actually cheaper to produce, or AMD is just selling them at a lower profit margin to maintain market viability, I can't say; but the retail price is what really matters to the market.

    If you want to continue down the software-optimization path and say that AMD would perform better if software were more efficiently threaded, that would largely be true. However, keep in mind that Intel would not stay stagnant in that case: Intel has optimized their existing CPUs largely for power envelope and single-threaded performance because that's what drives the market ~now~. It would take very little for Intel to start offering more physical cores if/when the market leans more in that direction: they already have dies with competitive numbers of physical cores in the Xeon and E series; they just charge a premium for them because it's hardly a mainstream necessity at this point.

  • Quizzical Member LegendaryPosts: 25,355
    Originally posted by Cleffy

    lol, I should have worded it better, since it is not more efficient in current workloads.  My main argument is that current software poorly utilizes AMD's current design despite what its architecture is capable of. The current-generation Intel chips are 1.4 billion transistors on a 22nm process, including a GPU die, with a TDP of 84 watts.  The current-generation AMD chips are 1.2 billion transistors on a 32nm process, not including a GPU die, with a TDP of 125 watts.  From a fundamental standpoint, a chip with more transistors clocked higher will perform better.  So why isn't AMD?  It's clocked higher, and if you negate Intel's GPU, it has more transistors.
    I really don't want to go in depth into the size of L1 caches and the operations per cycle possible as a result.  This is not my strong point and I don't do much research into it.  What I can say, and you will most likely agree on, is that in single-threaded applications, Intel will perform significantly better.  In multi-threaded applications, and those that scale to more than 4 threads, AMD is better.

    I think when you boil it down, single-threaded applications typically don't require much computational power; otherwise they would be multi-threaded.  So in the case of single-threaded applications, AMD will do the job, unless 6 seconds on a video compression makes a difference to you.  Yet in applications that need the threads, AMD has an advantage in this sector, even without factoring in its 2 integer cores per module or the FMA instruction set.  To me, AMD has a much more forward-looking design that just is not being utilized much outside of niche businesses.

    Also, from rumors, Jaguar will perform 30% better than Piledriver in single-threaded apps, compared to the 10% increase going from Ivy Bridge to Haswell.

    In cost of production terms, the proper measure of efficiency is performance per mm^2 of die space, not performance per transistor.  Some things have more transistors per mm^2 than others, even on exactly the same process node.  But your production cost from a foundry is per wafer, and the number of dies you can fit in a single wafer is limited by die size, not transistor count.
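    A back-of-the-envelope sketch of why die size is the number that matters (my own arithmetic and rough die sizes, not figures from either company), using the common dies-per-wafer approximation that subtracts a term for partial dies lost at the wafer edge:

        #include <math.h>
        #include <stdio.h>

        int main(void) {
            const double PI = 3.141592653589793;
            double wafer_d = 300.0;  /* mm, standard wafer diameter */
            /* Rough die areas: a mainstream quad core, a Vishera-class die,
             * and the 513 mm^2 Westmere-EX mentioned later in the thread. */
            double die_mm2[] = { 160.0, 315.0, 513.0 };
            for (int i = 0; i < 3; ++i) {
                double a = die_mm2[i];
                double dies = PI * wafer_d * wafer_d / (4.0 * a)
                            - PI * wafer_d / sqrt(2.0 * a);
                printf("%3.0f mm^2 die: ~%3.0f candidate dies per wafer\n", a, dies);
            }
            return 0;
        }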

    Now, you can argue that Intel has an advantage because they're on a 22 nm process node.  So as I said, compare Vishera on 32 nm to Sandy Bridge-E on 32 nm.  Intel wins pretty clearly there, even in programs that scale well to arbitrarily many cores.

    Higher clock speeds don't automatically mean that a processor should be faster if you're comparing different architectures.  A deeper pipeline typically allows you to have higher clock speeds at the expense of not doing as much per clock cycle.  That's how it plays out when comparing Piledriver cores to Ivy Bridge cores (though the deeper pipeline is not Piledriver's only problem, and might not even be a problem at all).

    It's not an AMD versus Intel thing; some years ago, AMD's Athlon 64 was faster than Intel's Pentium 4 in spite of being clocked lower.  The same thing happened when you put two of the cores on a chip for an Athlon 64 X2 versus a Pentium D.  Or to take a more modern example, AMD Bobcat cores are much faster than Intel Atom cores in spite of being clocked lower.

    Much of that is because Bobcat cores are out-of-order while Atom cores are in-order.  This means that Atom cores have to execute instructions in the order that they come in, and have to stop and wait for a while if something needs data from system memory.  Bobcat cores are able to reorder instructions and set aside something that doesn't have the needed data available yet in order to see if anything else is ready and execute that instead.  That's a big deal, but like many other things in computer chip design, it comes at a cost, in this case, higher power consumption.  Atom cores can go in cell phones and Bobcat cores can't.

    Ivy Bridge also benefits from much lower cache latencies than Piledriver; Ivy Bridge's L3 cache has about the same latency as Piledriver's L2, and only about 1/3 that of Piledriver's L3 cache.  Piledriver cores have a substantial bottleneck in the scheduler, and Ivy Bridge cores don't.  AMD says that Steamroller cores will get some major gains over Piledriver by improving the branch predictor; my guess is that Ivy Bridge's branch predictor is already a lot better than Piledriver's, though I don't know that for certain.  There are a lot of little things that go into CPU architectures and affect performance per clock cycle, many of which probably never make it into the tech media.

    -----

    Jaguar cores most certainly are not going to be faster than Piledriver cores, even in single-threaded applications.  Whoever started that rumor is simply clueless.  It's about as plausible as predicting that the next generation Atom cores will be faster than Ivy Bridge.  Not going to happen, as Jaguar and Atom are targeted at a much lower performance level right from the start.

    AMD has already said that they expect Jaguar to beat Bobcat in IPC by about 15%.  That's not enough for it to catch Piledriver.  Meanwhile, recently announced G-series SoCs put the top clock speed of Jaguar at 2 GHz:

    http://www.mmorpg.com/discussion2.cfm/thread/383276/AMD-announces-Gseries-embedded-SoCs-hints-at-Kabini-PS4-Xbox-720-specs.html

    And I was surprised to see it go that high, even.  I was expecting around 1.7 or 1.8 GHz.  So Jaguar will typically have lower IPC than Piledriver and less than half of the clock speed.  That's not a way to win at single-threaded performance.
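    The rough arithmetic behind that conclusion, as a sketch: the 15% IPC gain over Bobcat and the 2 GHz top clock are from the post above; where Bobcat's IPC sits relative to Piledriver is my own guess, inserted purely for illustration.

        #include <stdio.h>

        int main(void) {
            double bobcat_ipc_vs_piledriver = 0.65;              /* assumed for illustration */
            double jaguar_ipc = 1.15 * bobcat_ipc_vs_piledriver; /* ~15% over Bobcat */
            double jaguar_ghz = 2.0, piledriver_ghz = 4.0;       /* FX-8350 base clock */
            double relative = jaguar_ipc * (jaguar_ghz / piledriver_ghz);
            printf("Jaguar single-thread vs Piledriver: ~%.0f%%\n", 100.0 * relative);
            return 0;
        }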

  • Caldrin Member UncommonPosts: 4,505
    Originally posted by Horusra
    I do not think I will ever buy an AMD again.  I got one now after only using NVIDIA and I hate it.  Issues with games.  Display drivers failing.  Look at the site and you'll see it is a common problem that is supposed to be fixed... it never gets fixed.  I will pay the extra for Nvidia from now on.  I might have a dud for a card, but it has turned me off from the company.

    See, I never had any of those issues when I was using 2 x 6850s. The drivers were solid and the cards worked a treat.

    Though I must admit I currently run a GeForce 670, but mainly because I had an awesome deal on it.

  • Pixel_Jockey Member Posts: 165
    I think sometimes people spend way too much on GPUs for no reason. It all boils down to how many monitors you're running and whether you're doing extended-display gaming across them. I have a 6950 ($200) and play on 1 monitor @ 1920x1080. I can run virtually all my games (new games included) at max settings and keep 40+ fps. For me personally, a 7990 would do next to nothing and would be a complete waste of money. If I were running a 3-monitor extended display to game at a very high resolution, then yes, I would have a need for a high-end card.
  • Quizzical Member LegendaryPosts: 25,355
    Originally posted by Ridelynn

     


    Originally posted by Cleffy
    What I can say, and you will most likely agree on, is that in single-threaded applications, Intel will perform significantly better.  In multi-threaded applications, and those that scale to more than 4 threads, AMD is better.

     It would take very little for Intel to start offering more physical cores if/when the market leans more in that direction: they already have dies with competitive numbers of physical cores in the Xeon and E series; they just charge a premium for them because it's hardly a mainstream necessity at this point.

    Intel already offers some 10-core processors:

    http://ark.intel.com/products/53573/Intel-Xeon-Processor-E7-2850-24M-Cache-2_00-GHz-6_40-GTs-Intel-QPI

    As best as I can tell, that's the cheapest of them.  Yes, higher bins actually cost more than that.  Ivy Bridge-EX is rumored to go up to 15 cores.

    But yes, it's not just that it's not a mainstream necessity.  It's also that they're very expensive to produce.  10 cores and 30 MB of L3 cache on a 32 nm process node leads to an enormous die size of 513 mm^2.  Try to launch that on a then-cutting edge process node and you're asking for yield problems.  The last time AMD made a chip with a die that big was, well, probably never.
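    A hedged sketch of why a 513 mm^2 die is so painful on a young process: under the standard Poisson yield model, yield = exp(-defect_density * die_area), so a given defect rate per mm^2 hits big dies much harder than small ones. The defect densities below are made-up illustrations, not foundry data.

        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double d0[] = { 0.0005, 0.002 };   /* defects per mm^2: mature vs. early process (assumed) */
            double area[] = { 160.0, 513.0 };  /* small die vs. Westmere-EX-sized die */
            for (int i = 0; i < 2; ++i)
                for (int j = 0; j < 2; ++j)
                    printf("D0 = %.4f/mm^2, %3.0f mm^2 die: yield ~ %2.0f%%\n",
                           d0[i], area[j], 100.0 * exp(-d0[i] * area[j]));
            return 0;
        }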

    For what it's worth, AMD's take on this is that 8-processor servers have 0.1% of the market, so Intel can have it to themselves.  Westmere-EX does offer variants for 2- or 4-processor servers, but there's not much sense in shelling out for Xeon E7 chips if you've got a workload that Xeon E5 (basically the server version of Sandy Bridge-E) can handle just fine.

  • Drakynn Member Posts: 2,030

    The OP brings up good points about the new ATI card at this price point, but we'll see where the card actually ends up after a few months; it will either find its niche price or disappear.

    As to everything else being brought up here... IMO, when you buy anything you do your research and buy what's the best value and bang for your buck at the price point you are at. Brand loyalty is just stupidity, because those companies don't return your loyalty or slavish love.

    As such, I've used both Nvidia and ATI cards over the years and had my share of problems with - this will be shocking to fanbois on both sides - BOTH at different times. Each company's drivers have strengths and flaws, as do their driver release schedules. I for one don't want to see a single company dominating a market, ever.

  • Maniox Member Posts: 25

    Quite sure ATI has the lead on nVidia in the GPU department, outside of the entire Titan stuff that just seems terribly overkill and not worth all the money, but AMD is down the drain compared to Intel.

    Feels like it's always nVidia with driver errors and bugs while ATI works smoothly.


  • Quizzical Member LegendaryPosts: 25,355
    Originally posted by Drakynn

    The OP brings up good points about the new ATI card at this price point, but we'll see where the card actually ends up after a few months; it will either find its niche price or disappear.

    As soon as 20 nm chips are ready, the Radeon HD 7990 becomes completely pointless to buy new.  Retailers will generally want to have sold out entirely before then, so the card could well disappear sooner rather than later.  A card with two high-bin Tahiti chips is never going to be cheap to produce.

  • fivoroth Member UncommonPosts: 3,916

    I agree that not a lot of people care. I never saw the appeal of buying the best GPUs (like the GTX 690) or CPUs like that Intel i7 that was total overkill for any sort of gaming (I don't remember the name). Do people really spend so much money on a single GPU just to get the very best? You will most likely have to buy a new GPU in 2-3 years tops anyway, so "futureproofing" seems a bit pointless as technology evolves so quickly.

    You know what is exciting though? The new Xbox, which is being unveiled on 21 May :)

     

    Originally posted by Maniox

    Feels like it's always nVidia with driver errors and bugs while ATI works smoothly.

    From my personal experience, nVidia is much, much better than ATI. I have never had any driver errors with my Nvidia cards. I have only once bought an ATI GPU and I was seriously disappointed with all the bugs. Ever since, I only buy Nvidia.

    Mission in life: Vanquish all MMORPG.com trolls - especially TESO, WOW and GW2 trolls.

  • Maniox Member Posts: 25
    Originally posted by fivoroth

    I agree that not a lot of people care. I never saw the appeal of buying the best GPUs (like the GTX 690) or CPUs like that Intel i7 that was total overkill for any sort of gaming (I don't remember the name). Do people really spend so much money on a single GPU just to get the very best? You will most likely have to buy a new GPU in 2-3 years tops anyway, so "futureproofing" seems a bit pointless as technology evolves so quickly.

    You know what is exciting though? The new Xbox, which is being unveiled on 21 May :)

     

    Originally posted by Maniox

    Feels like it's always nVidia with driver errors and bugs while ATI works smoothly.

    From my personal experience, nVidia is much, much better than ATI. I have never had any driver errors with my Nvidia cards. I have only once bought an ATI GPU and I was seriously disappointed with all the bugs. Ever since, I only buy Nvidia.

    When I was playing BF2, BC2, WoW, etc., I would very often get nVidia-related problems; nVidia has been infamous for hating the Battlefield series with a passion. But a little after BC2 was released I got my hands on a 5770, and later a 7950, which has never given me any artifacts and such.

    And if you went from nVidia to ATI you probably had driver fragments remaining or something that might screw things up, but I guess it's decided by the rest of your system.


  • Quizzical Member LegendaryPosts: 25,355

    This thread perhaps demonstrates why the Radeon HD 7990 even exists: it gets people talking about AMD having high-end video cards, even though hardly anyone is going to buy it.  If someone else doesn't reply just before I post this, this will be post #67 in a thread about a card that hardly anyone will buy.  It's not a bad card in its own right like the GeForce GTX 590 was; just a card that makes sense for basically no one.

    Meanwhile, I started this thread about a chip that AMD will sell by the millions:

    http://www.mmorpg.com/discussion2.cfm/thread/383276/AMD-announces-Gseries-embedded-SoCs-hints-at-Kabini-PS4-Xbox-720-specs.html

    And then it only gets one reply from someone besides me.  Unlike the Radeon HD 7990, Kabini/Temash makes sense for a lot of people.  If you want a Windows-based tablet, this is the chip you want.  Period.  Nothing else is remotely competitive.  If you want an ultraportable laptop, this is probably the chip you want.  It's highly probable that it will be better than ULV Ivy Bridge and likely that it will be better than ULV Haswell for ultraportables.  It will certainly be massively cheaper than either of those, excluding crippled Celeron variants that are completely awful in addition to probably still being more expensive than Kabini.  If you want a cheap nettop, this is likely again the chip you want.

  • Ridelynn Member EpicPosts: 7,383


    Originally posted by Maniox
    Quite sure ATI has the lead on nVidia in the GPU department, outside of the entire Titan stuff that just seems terribly overkill and not worth all the money, but AMD is down the drain compared to Intel.

    At the risk of stirring up another Xfire-type debate, Steam does publish hardware survey information:

    http://store.steampowered.com/hwsurvey?platform=pc

    Steam is just a microcosm of the PC community, so it's not all-encompassing. People running Steam tend to be gamers, so you're missing out on the vast majority of PCs (PCs used in businesses, your grandparents who don't game, secondary systems such as netbooks that can't game well, etc.).

    But I figure that, in the context of hardware used for gaming, a survey done by Valve of hardware used while gaming is probably indicative, although not authoritative.

    March 2013 Data
    nVidia GPU: 52.26%
    AMD GPU: 33.72%
    Intel GPU: 13.57% (God help these poor souls)

    Intel CPU: 73.5%
    AMD CPU: 26.5%

    Single Core CPU: 4.4%
    Dual Core CPU: 48.37%
    Quad Core CPU: 42.58%

    Most popular:
    Windows 7 64-bit - 57.01% share
    8G System RAM - 22.78% share
    Single Monitor Resolution: 1920x1080 - 30.22%

  • Ridelynn Member EpicPosts: 7,383


    Originally posted by Quizzical
    Meanwhile, I started this thread about a chip that AMD will sell by the millions:

    http://www.mmorpg.com/discussion2.cfm/thread/383276/AMD-announces-Gseries-embedded-SoCs-hints-at-Kabini-PS4-Xbox-720-specs.html

    And then it only gets one reply from someone besides me.  


    My guess is because it's a CPU intended for ULV/Ultraportables, and this is a (mainly PC) gaming forum. It won't see a lot of use for MMOs, unless mainstream MMOs start to become ultraportable. And while the new PS4/Xbox will use a variant of this, they will almost certainly use heavily modified variants that will probably perform nothing like these particular SOCs.

    Whereas the 7990, while hardly practical, is aimed right at PC gamers (and maybe the bitcoin mining niche).

  • GroovyFlower Member Posts: 1,245

    One of the fastest single cards in the world; only the price tag is a bit high.

    pro - heat, noise, and power consumption are OK for such a card

    con - the price tag is high; after such a late release it should be lower.

     

    Most tests don't include the Asus Ares II card; otherwise that one would be the fastest.

  • Slampig Member UncommonPosts: 2,342
    Originally posted by Quizzical

    AMD today launched the Radeon HD 7990, which basically constitutes two Radeon HD 7970s on a single card.  It's a little faster than a GeForce GTX 690, but also uses a lot more power, so if you're limited by power, the latter is the better buy.  It's also more expensive than two Radeon HD 7970 GHz Edition cards while being slower, so if you're not sharply limited by power, two (or three!) 7970 GHz Edition cards make more sense.  If you want the fastest performance you can get in a single slot, the older unofficial 7990s from PowerColor and Asus are clocked higher, so the new 7990 is slower than those.  If you want max performance at any price, then two or three GeForce GTX Titans is the way to go, as that has considerably faster GPUs.

    So basically, AMD just launched a new card and I don't see any reason why anyone would want to get one.  But hey, it's finally here after being delayed by a year or so from the initial rumors.

    I wonder what card the OP is using...

    That Guild Wars 2 login screen knocked up my wife. Must be the second coming!

  • IAmMMO Member UncommonPosts: 1,462
    Originally posted by Mtibbs1989
    Originally posted by BlackLightz

    One step for mankind, one more step in the endless line of GFX cards with a minor upgrade.

     

    "Computer advancements double every 18 months." - Moore's Law

    Gaming advancement has fallen behind, though, on the PC gaming exclusive front.  PC exclusives are few and far between to actually push these cards. To play games today, any decent graphics card of the last 3 to 4 years will do, along with any CPU of the last 3 years. Games have been made to fit PS3 and Xbox 360 tech, and that's very old in tech terms. PC gamers will have to wait for the PC Kickstarter projects to make it to release to push the PC hardware of today.

  • Sk1ppeR Member Posts: 511
    Originally posted by IAmMMO
    Originally posted by Mtibbs1989
    Originally posted by BlackLightz

    One step for mankind, one more step in the endless line of GFX cards with a minor upgrade.

     

    "Computer advancements double every 18 months." - Moore's Law

    Gaming advancement has fallen behind, though, on the PC gaming exclusive front.  PC exclusives are few and far between to actually push these cards. To play games today, any decent graphics card of the last 3 to 4 years will do, along with any CPU of the last 3 years. Games have been made to fit PS3 and Xbox 360 tech, and that's very old in tech terms. PC gamers will have to wait for the PC Kickstarter projects to make it to release to push the PC hardware of today.

    Crysis 3, nuff said.

  • Quizzical Member LegendaryPosts: 25,355
    Originally posted by Slampig
    Originally posted by Quizzical

    AMD today launched the Radeon HD 7990, which basically constitutes two Radeon HD 7970s on a single card.  It's a little faster than a GeForce GTX 690, but also uses a lot more power, so if you're limited by power, the latter is the better buy.  It's also more expensive than two Radeon HD 7970 GHz Edition cards while being slower, so if you're not sharply limited by power, two (or three!) 7970 GHz Edition cards make more sense.  If you want the fastest performance you can get in a single slot, the older unofficial 7990s from PowerColor and Asus are clocked higher, so the new 7990 is slower than those.  If you want max performance at any price, then two or three GeForce GTX Titans is the way to go, as that has considerably faster GPUs.

    So basically, AMD just launched a new card and I don't see any reason why anyone would want to get one.  But hey, it's finally here after being delayed by a year or so from the initial rumors.

    I wonder what card the OP is using...

    Radeon HD 5850

    At the time I bought it, it was the only DirectX 11 card in stock on New Egg.

  • ShakyMo Member CommonPosts: 7,207
    Unless you're using a dual-monitor setup or a 3D monitor:

    80% of games you can play on an old card.
    17% of games will run at maximum levels on a GTX 660 or 7870.
    The remaining 3% of cutting-edge games will play on high or max settings on a GTX 670 or 7950.

    You don't need anything faster.
  • ShakyMo Member CommonPosts: 7,207
    Pity Crysis 3 is a boring, CoD-style, corridors-and-cutscenes, duck-behind-cover manshoot.

    1 was good
    Warhead was even better
    2 was a boring console style shooter
    3 is even more so.
  • Ridelynn Member EpicPosts: 7,383


    Originally posted by Sk1ppeR
    Originally posted by IAmMMO Originally posted by Mtibbs1989 Originally posted by BlackLightz One step for mankind, one more step in the endless line of GFX cards with a minor upgrade.  
    "Computer advancements double every 18 months." - Moore's Law
    Gaming advancement  has fallen behind  though on the Pc gaming exclusive front.  Pc exclusives are few are fare between to actually push these card. To play games today any  decent graphics card of the last 3  to 4 years will do along with any CPU of the last 3 years. Games have been made to fit PS3 and Xbox 360 tech and that's very old in tech terms. Pc gamers will have to wait for the Pc kick start projects to make it to release to push PC hardware of today.
    Crysis 3, nuff said.

    While Crysis 3 will push some hardware, I would have pointed to Tomb Raider's TressFX.

    Although - even though we've come up with a couple of modern examples, IAmMMO's point is still very valid and I agree with it: PC titles by and large have been stagnant. We've been stuck in the DX9 Unreal 3/CryEngine 2/3-type engines for years now, and I do think it's largely because these run well on consoles. There are newer generations coming out soon (tm), but we probably won't see a prolific number of titles using them until we have the engines running on the next-gen consoles. If there is any doubt about this, just look at the DX10/11 adoption rate, and realize that it's partially because of Windows XP's lack of support, and partially because DX9-level tools exist for the current consoles, which are technically incapable of running many DX10/11 features.

    And I think another part of it is that we don't have mature development tools for the newer technologies, such as tessellation, even on the PC. These require a good deal different skill set from what most graphics designers and programmers are used to dealing with, and we won't see widespread adoption until the development tools catch up and allow these types of technology to be applied well by the people working with them. If you need a graphics designer who a) is a good artist, and b) has a PhD in mathematics to effectively use tessellation (which is more or less the case now), then you have a very slim pool of potential applicants.
