
AMD Radeon HD 7990 launches. World doesn't care.

24 Comments

  • ShakyMoShakyMo Member CommonPosts: 7,207
    I'm the opposite: I've stuck with AMD cards because I've had two Nvidias burn out on me.

    Nvidia prices in the UK don't help either, e.g. a GTX 660 is more expensive than an HD 7950.
  • MukeMuke Member RarePosts: 2,614
    Originally posted by Mtibbs1989
    Originally posted by Muke
    Originally posted by Mtibbs1989

     That extra throttle gives you the edge you need in combat when flying a fighter plane in BF3?

    I am one of the few not playing BF3 because, frankly, I hate CoD and BF3... I'd rather play Doom/UT-type sci-fi games.

    Crawling up to a sniper and emptying 2 pistol magazines into the back of someone's head, only to watch him stand up, turn around and kill me with 1 knife stroke, wasn't my thing. :)

    But back to the graphics thing: I've had my best experiences with ATI cards; every Nvidia and other card died on me rather quickly.

     

     I can't say that I've ever experienced a card dying on me. I don't have a fanboy preference either. I just find the one that's the best for my buck. However, assuming that the newest cards on the market are going to be the best thing in the world will be a wake up call for many people. Moore's law is an actual fact and computer technology doubles every 18 months. So that $1,000.00 USD you spent on the Titan card was essentially wasted; because there's going to be a card twice as good 18 months from now.

    You have to step into the line at some point, because the car you want to buy will be cheaper next year, and for that money you could buy a better car next year.

     

    "going into arguments with idiots is a lost cause, it requires you to stoop down to their level and you can't win"

  • QuizzicalQuizzical Member LegendaryPosts: 25,355

    This isn't even an AMD versus Nvidia issue.  Let's ignore Nvidia entirely for a moment.  Let's suppose that you want two top of the line Tahiti GPUs.  You can get a single Radeon HD 7990 for $1000.  Or for $900, you can get two of these:

    http://www.newegg.com/Product/Product.aspx?Item=N82E16814125439

    The latter will run faster because it's clocked substantially higher.  Having a lot more space to work with means you can get much better cooling for the latter, too.  And that's on top of it being cheaper.  So why would you get the 7990 again?

  • QuizzicalQuizzical Member LegendaryPosts: 25,355
    Originally posted by Mtibbs1989
    Originally posted by BlackLightz

    One step for mankind, one more step in the endless line of GFX cards with a minor upgrade.

     

    "Computer advancements double every 18 months." - Moore's Law

    Moore's Law loosely says that the number of transistors will double every two years.  And over the course of the last 50 years, that's held fairly accurately to a two-year doubling, not 18 months.

    But just because you have twice as many transistors doesn't mean that you can double your performance.  Power is a big limiting factor, too, so if you want to stay within a fixed TDP, it takes about four years to double your performance.
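
    To put rough numbers on that, here is a back-of-the-envelope sketch (it simply assumes a clean 2-year transistor doubling and a 4-year performance doubling at a fixed TDP; real products are messier than this):

    ```python
    # Illustrative arithmetic only, using the assumed doubling periods above.
    def growth(years, doubling_period_years):
        return 2 ** (years / doubling_period_years)

    for years in (2, 4, 6, 10):
        transistors = growth(years, 2)   # Moore's Law pace: ~2-year doubling
        performance = growth(years, 4)   # power-limited pace: ~4-year doubling
        print(f"{years:>2} years: ~{transistors:.1f}x transistors, "
              f"~{performance:.1f}x performance at the same TDP")
    ```

    Under those assumptions, a decade buys you roughly 32x the transistors but only about 5.7x the performance within the same power budget.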

  • RidelynnRidelynn Member EpicPosts: 7,383


    Originally posted by Mtibbs1989
    I can't say that I've ever experienced a card dying on me. I don't have a fanboy preference either. I just find the one that's the best for my buck. However, assuming that the newest cards on the market are going to be the best thing in the world will be a wake up call for many people. Moore's law is an actual fact and computer technology doubles every 18 months. So that $1,000.00 USD you spent on the Titan card was essentially wasted; because there's going to be a card twice as good 18 months from now.

    By this logic, you'd never buy anything, because 18 months later, something better will inevitably come out.

    People have bought, and will continue to buy, "top of the line", and will continue to pay for the privilege of having top of the line because:
    a) They have the money
    b) Either they need or desire that level of performance and there is no other alternative

    Sure, you can wait 18 months, but you'll always be waiting 18 months for something better. If you want a product, you get the product that best fits your needs/desires and fits your budget - sometimes that means making a compromise in one area or another. But for people with no effective budget constraints, the high profit margins on those cards are what help to fuel the R&D for the next generation of cards, and that technology eventually winds its way down from the "bleeding edge enthusiast with money" niche into consumer and mainstream-level products.

    There are people who can use as much power as a 7990, CrossFire 7970s, or SLI Titans can put out. There aren't many of them, and there are probably more people who wish they could use that much power but can't actually afford it, but there are people out there with the need and the money; that niche does exist. Just because you don't happen to be inside that niche doesn't eliminate it.

  • uofa13luofa13l Member Posts: 29

    In the AMD vs Intel debate I get frustrated when technical details come up, because apparently not everyone understands that a CPU's clock is not the dominating factor they think it is when it comes to overall CPU performance (don't get me started on RAM clocks). Ask any independent electrical engineer who works in electronics to compare and contrast Intel and AMD BGA architecture and they will point out that AMD far surpasses Intel in this regard (while Intel continuously churns out better clock speeds).

     

    This directly correlates to the "gamers'" obsession with Intel CPUs and overclocking. You cannot run Intel chips near their "voltage threshold" or "ceiling" without the chip quickly degrading over time, so Intel reduces the preprogrammed voltage it pulls to lengthen the life of the part to meet requirements. That leaves the average computer builder/overclocker seeing a huge band of unused voltage available for overclocking. AMD, by contrast, uses well-thought-out architecture (less heat generation) and can set its default voltage to a value much closer to the voltage threshold.

     

    Another thing to keep in mind when building a system is that if you go cutting edge on everything, you will get the performance of a slightly-above-midrange PC (unless you have a lot of money to spend on some intense cooling solutions). Heat robs more people of their performance than they realize. I had two roommates a while back who built two similar systems. One was built to go SLI and the other was a single Nvidia card (same cards, though). Amazingly enough, the first game they played saw better performance on the single-card system. A big part of it was that the game was not optimized for SLI or CrossFire, but the other major factor was that the motherboard manufacturer put the two PCI Express slots for dual graphics cards dangerously close to each other (I swear there was only a 50-100 mil gap between the bottom of one card and the fan on the other), which led to massive heat saturation in the dual-card setup.

     

    Sorry, I got a little sidetracked and had more to say, but I will cut it short and summarize it as this: I personally like AMD/ATI much more than Intel/Nvidia because, as an engineer, I have much more appreciation for the hardware they produce. On the flip side, I always analyze every situation independently. While I have had mostly AMD/ATI systems for the past 13 years, my current system is Intel/Nvidia because at the time I was shopping it was the better performance per dollar. Whenever you blindly follow an electronics company, a CEO gets its wings, because you just decided to make that 3 lb mass of grey matter (largely considered the greatest known creation in the universe) useless.

  • QuizzicalQuizzical Member LegendaryPosts: 25,355
    Originally posted by uofa13l

    In the AMD vs Intel debate I get frustrated when technical details come up, because apparently not everyone understands that a CPU's clock is not the dominating factor they think it is when it comes to overall CPU performance (don't get me started on RAM clocks). Ask any independent electrical engineer who works in electronics to compare and contrast Intel and AMD BGA architecture and they will point out that AMD far surpasses Intel in this regard (while Intel continuously churns out better clock speeds).

    That hasn't been true since Conroe launched way back in 2006.  Intel has had the top performing CPU ever since then, even though AMD has sometimes had higher clock speeds.

  • RidelynnRidelynn Member EpicPosts: 7,383

    Not to mince words here: it's not a matter of price at all, as you've clearly stated that you have money and that you'd rather save that money than spend it...

    So if by waiting 18 months, the price gets cut in half for Y level of performance
    And in 18 months, cut in half again, for Y level of performance
    And in 18 months, cut in half again, for Y level of performance

    Here we are, looking 4.5 years down the road, at the same level of performance.

    What cost $X to begin with now only costs $X/8, and in another 4.5 years will cost $X/64 (a quick sketch of this arithmetic is at the end of this post). But at the same time, newer technology can now provide a performance level of Y*(some factor).

    So if you can afford $X to begin with... the only other parts of the equation are time and performance.

    The real question is the ratio $X/Y (bang for the buck, if you will) - and the minimum value of Y required to deliver the performance that you need or desire. Those are going to be subjective and different for everyone, based on budget and personal taste. There are people out there for whom the maximum value of Y will never be enough, and they are willing to pay for as much as they can get. You are clearly not one of those people, and that's OK - but you're going about implying that all those people are somehow inferior because they aren't looking at $X/Y, while I'm pointing out that because those people are willing to spend $X on day 1, they are making sure that Y keeps increasing over time, rather than everything ending up at some stagnant "good enough" level which it could very easily have fallen into.
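
    For illustration only, here is a minimal sketch of the halving argument above; it assumes, purely for the sake of the example, a strict 18-month halving of the price of a fixed performance level Y and a hypothetical $1,000 starting price:

    ```python
    # Toy model: the price of a fixed performance level Y halves every 18 months.
    initial_price = 1000.0   # hypothetical $X on day one
    halving_months = 18

    for months in (0, 18, 36, 54, 108):
        price = initial_price / 2 ** (months // halving_months)
        print(f"after {months / 12:4.1f} years: performance level Y costs ~${price:,.0f}")
    ```

    Under those assumptions, the day-one $1,000 level of performance costs about $125 after 4.5 years and about $16 after 9 years.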

  • miguksarammiguksaram Member UncommonPosts: 835
    Originally posted by Mtibbs1989

    Sure, you can wait 18 months, but you'll always be waiting 18 months for something better. If you want a product, you get the product that best fits your needs/desires and fits your budget - sometimes that means making a compromise in one area or another. But for people with no effective budget constraints, the high profit margins on those cards are what help to fuel the R&D for the next generation of cards, and that technology eventually winds its way down from the "bleeding edge enthusiast with money" niche into consumer and mainstream-level products.

     No, by this logic I'd wait for the product that you spent $1,000.00 USD on and simply pay a fraction of the price.

     I'm not waiting 18 months for the next best thing; that's not what I've said. I'm waiting 18 months for the next best thing to replace the original best thing, so that I can buy the original best thing at a substantially cheaper price.

    I also never stated that I couldn't afford the Titan, GTX 690 or the 7990. I'm simply stating that I don't see the point in spending far more money than the product is actually worth. It's called buying smart, and I'm sorry you don't think this way.

     Please go back and reread what I've written; you obviously misunderstood what I wrote.

    I'm a bit confused as to what you mean.  Are you saying you wait to buy the current bleeding edge at the inception of the next generation in order to save money?  If so, how exactly does anyone do that if the portion of Ridelynn's post I quoted isn't true?  Specifically, the part about somebody having to fund R&D via the inflated prices of current bleeding-edge tech.

    If you mean buying a used product 18 months down the road, then your "savings" comes in the form of a product whose lifespan has most likely been cut down by that same amount of time.  If you mean buying a current bleeding-edge card "new" 18 months down the road, that is unlikely to be cheaper; in fact, most "new" cards from older generations tend to go for MORE than they did when they were released.

    If you mean you will buy the current bleeding-edge tech in a more easily affordable next-gen card, then that only happens because new, expensive bleeding-edge cards have been released thanks to R&D that was funded by the last generation's expensive top-of-the-line cards.  I trust you see the cycle that is being laid out.

     

    EDIT: Ultimately, most who fund bleeding-edge tech rarely actually need it; rather, they just want it and can afford it (or find ways to).

  • CleffyCleffy Member RarePosts: 6,412
    It's still true today, a little bit.  AMD's architecture is more efficient, just not well utilized.  There are actually some CPU applications where AMD is the better pick, like software 3D rendering.  And this is with a CPU that is a full node behind its closest competition and costs less than any competitor close to its results.
  • ShakyMoShakyMo Member CommonPosts: 7,207
    Originally posted by Quizzical

    This isn't even an AMD versus Nvidia issue.  Let's ignore Nvidia entirely for a moment.  Let's suppose that you want two top of the line Tahiti GPUs.  You can get a single Radeon HD 7990 for $1000.  Or for $900, you can get two of these:

    http://www.newegg.com/Product/Product.aspx?Item=N82E16814125439

    The latter will run faster because it's clocked substantially higher.  Having a lot more space to work with means you can get much better cooling for the latter, too.  And that's on top of it being cheaper.  So why would you get the 7990 again?

    Yeah, I get your point: super-high-end dual cards are a massive waste of money - as is the Nvidia 690.

    But... there are 3 reasons they make these cards:

    1. The marketing kudos for having the fastest card (even if it's 2 cards cobbled together, like the 7990 and 690).

    2. There are whales who will buy these cards because they're actually going to buy 2 of them and run what is effectively a quad setup, but then these are the sort of people who can buy dual-processor server boards and advanced water cooling.

    3. The world is full of idiots, and idiots will buy them to have the fastest, even though they could dual-card 680s or 7970s, spend less and be faster, or dual-card 670s or 7950s and not be far off for a quarter of the price.

  • QuizzicalQuizzical Member LegendaryPosts: 25,355
    Originally posted by Cleffy
    It's still true today, a little bit.  AMD's architecture is more efficient, just not well utilized.  There are actually some CPU applications where AMD is the better pick, like software 3D rendering.  And this is with a CPU that is a full node behind its closest competition and costs less than any competitor close to its results.

    No, AMD's CPU architecture certainly is not more efficient than Intel's.  Never mind Ivy Bridge.  Compare Sandy Bridge-E to Vishera if you like, as those are both 32 nm.  A Core i7-3960X beats an FX-8350 at just about everything, in spite of having a much smaller die and using about the same power.

    3D rendering is a corner case of, if you support an instruction before your competitor, then you win at applications that can spam that new instruction.  Piledriver cores support FMA and Ivy Bridge cores don't.  But that's a temporary advantage for AMD, as Haswell will support FMA.

    FMA is Fused Multiply Add.  What it does is to take three floating-point inputs, a, b, and c, and return a * b + c in a single step.  Obviously, it's trivial to do the same thing in two steps, by doing multiplication first and then addition.  But doing it in a single step means that you can do it twice as fast.

    FMA is hugely beneficial for dot products and computations that implicitly use dot products, such as matrix multiplication.  3D graphics uses a ton of matrix multiplication, as rotating a model basically means multiplying every single vertex by some particular orthogonal matrix.
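
    To make the arithmetic pattern concrete, here is a small sketch in plain Python (which of course won't emit actual FMA instructions; it only shows the a * b + c shape that FMA hardware fuses into a single step):

    ```python
    import math

    # A dot product written so that each step is one multiply-add: acc = a * b + acc.
    def dot(u, v):
        acc = 0.0
        for a, b in zip(u, v):
            acc = a * b + acc   # one FMA per element on hardware that supports it
        return acc

    # Rotating a vertex is three such dot products (rotation-matrix rows times the
    # vertex), which is why 3D graphics leans so heavily on FMA.
    angle = math.radians(90)
    rotation_z = [
        [math.cos(angle), -math.sin(angle), 0.0],
        [math.sin(angle),  math.cos(angle), 0.0],
        [0.0,              0.0,             1.0],
    ]
    vertex = (1.0, 0.0, 0.0)
    print([dot(row, vertex) for row in rotation_z])   # ~[0.0, 1.0, 0.0]
    ```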

    But what else can make extensive use of FMA outside of 3D graphics?  Not much, really.  Some programs can use it here and there.  Doing the same thing for integer computations isn't supported at all.  That's why video cards have long supported FMA (any rated GFLOPS number you see is "if the video card can spam FMA and do nothing else"), but x86 CPUs didn't support it until recently.
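
    That is also where the headline GFLOPS figures come from: an FMA counts as two floating-point operations, so the rated peak is shader count x clock x 2. A minimal sketch with placeholder numbers (not any particular card's spec sheet):

    ```python
    # Rated peak = shader count * clock (GHz) * 2 FLOPs per shader per clock (one FMA).
    def peak_gflops(shaders, clock_ghz):
        return shaders * clock_ghz * 2

    print(peak_gflops(2048, 1.0))   # hypothetical: 2048 shaders at 1.0 GHz -> 4096 GFLOPS
    ```

    Plugging in a real card's shader count and clock speed reproduces its advertised peak number.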

    Corner cases from new instructions happen from time to time.  When Clarkdale launched, a Core i5 dual core absolutely destroyed everything else on the market in AES encryption, even if threaded to scale to many cores, because it supported AES-NI and nothing else did.  But that advantage only applied to AES, and now recent AMD CPUs support AES-NI, too.  So now it's a case of Core i3, Pentium, and Celeron processors are awful at AES because Intel disables AES-NI on them, while all other recent desktop processors are fast at it.

  • MikehaMikeha Member EpicPosts: 9,196
    Originally posted by Cleffy
    It's still true today, a little bit.  AMD's architecture is more efficient, just not well utilized.  There are actually some CPU applications where AMD is the better pick, like software 3D rendering.  And this is with a CPU that is a full node behind its closest competition and costs less than any competitor close to its results.

     

    Totally agree but most people live in the world of paper stats.

  • TheLizardbonesTheLizardbones Member CommonPosts: 10,910


    Originally posted by Quizzical
    Originally posted by Cleffy It's still true today, a little bit.  AMD's architecture is more efficient, just not well utilized.  There are actually some CPU applications where AMD is the better pick, like software 3D rendering.  And this is with a CPU that is a full node behind its closest competition and costs less than any competitor close to its results.
    No, AMD's CPU architecture certainly is not more efficient than Intel's.  Never mind Ivy Bridge.  Compare Sandy Bridge-E to Vishera if you like, as those are both 32 nm.  A Core i7-3960X beats an FX-8350 at just about everything, in spite of having a much smaller die and using about the same power.

    3D rendering is a corner case of, if you support an instruction before your competitor, then you win at applications that can spam that new instruction.  Piledriver cores support FMA and Ivy Bridge cores don't.  But that's a temporary advantage for AMD, as Haswell will support FMA.

    FMA is Fused Multiply Add.  What it does is to take three floating-point inputs, a, b, and c, and return a * b + c in a single step.  Obviously, it's trivial to do the same thing in two steps, by doing multiplication first and then addition.  But doing it in a single step means that you can do it twice as fast.

    FMA is hugely beneficial for dot products and computations that implicitly use dot products, such as matrix multiplication.  3D graphics uses a ton of matrix multiplication, as rotating a model basically means multiplying every single vertex by some particular orthogonal matrix.

    But what else can make extensive use of FMA outside of 3D graphics?  Not much, really.  Some programs can use it here and there.  Doing the same thing for integer computations isn't supported at all.  That's why video cards have long supported FMA (any rated GFLOPS number you see is "if the video card can spam FMA and do nothing else"), but x86 CPUs didn't support it until recently.

    Corner cases from new instructions happen from time to time.  When Clarkdale launched, a Core i5 dual core absolutely destroyed everything else on the market in AES encryption, even if threaded to scale to many cores, because it supported AES-NI and nothing else did.  But that advantage only applied to AES, and now recent AMD CPUs support AES-NI, too.  So now it's a case of Core i3, Pentium, and Celeron processors are awful at AES because Intel disables AES-NI on them, while all other recent desktop processors are fast at it.




    I would just like to point out that on these forums the 3D applications of a CPU are of primary importance. If FMA makes 3D applications faster, and applications such as MMORPGs can take advantage of it, especially if they don't have to be recompiled, then it's a primary reason to look at a CPU that has FMA or uses FMA.

    To that end though, if you just look at the final result, Intel seems to come out on top, both performance-wise and price-for-performance-wise. I think they even perform better on performance per watt. I could be wrong about recent history, but that's been the case for a while. The only thing AMD gets the consumer is less expensive systems. That's not a bad thing, but we don't seem to be talking about overall cost here, just performance.

    So whether AMD's architecture just isn't utilized, or whether AMD's architecture is less efficient, it doesn't really matter, because in the real world Intel gives a better end result. Even if that better result comes from talking developers into using their stuff, it's still a better end result for the user.

    Right now. :-)

    I can not remember winning or losing a single debate on the internet.

  • QuizzicalQuizzical Member LegendaryPosts: 25,355
    Originally posted by lizardbones

     


    Originally posted by Quizzical

    Originally posted by Cleffy It's still true today, a little bit.  AMD's architecture is more efficient, just not well utilized.  There are actually some CPU applications where AMD is the better pick, like software 3D rendering.  And this is with a CPU that is a full node behind its closest competition and costs less than any competitor close to its results.
    No, AMD's CPU architecture certainly is not more efficient than Intel's.  Never mind Ivy Bridge.  Compare Sandy Bridge-E to Vishera if you like, as those are both 32 nm.  A Core i7-3960X beats an FX-8350 at just about everything, in spite of having a much smaller die and using about the same power.

     

    3D rendering is a corner case of, if you support an instruction before your competitor, then you win at applications that can spam that new instruction.  Piledriver cores support FMA and Ivy Bridge cores don't.  But that's a temporary advantage for AMD, as Haswell will support FMA.

    FMA is Fused Multiply Add.  What it does is to take three floating-point inputs, a, b, and c, and return a * b + c in a single step.  Obviously, it's trivial to do the same thing in two steps, by doing multiplication first and then addition.  But doing it in a single step means that you can do it twice as fast.

    FMA is hugely beneficial for dot products and computations that implicitly use dot products, such as matrix multiplication.  3D graphics uses a ton of matrix multiplication, as rotating a model basically means multiplying every single vertex by some particular orthogonal matrix.

    But what else can make extensive use of FMA outside of 3D graphics?  Not much, really.  Some programs can use it here and there.  Doing the same thing for integer computations isn't supported at all.  That's why video cards have long supported FMA (any rated GFLOPS number you see is "if the video card can spam FMA and do nothing else"), but x86 CPUs didn't support it until recently.

    Corner cases from new instructions happen from time to time.  When Clarkdale launched, a Core i5 dual core absolutely destroyed everything else on the market in AES encryption, even if threaded to scale to many cores, because it supported AES-NI and nothing else did.  But that advantage only applied to AES, and now recent AMD CPUs support AES-NI, too.  So now it's a case of Core i3, Pentium, and Celeron processors are awful at AES because Intel disables AES-NI on them, while all other recent desktop processors are fast at it.



    I would just like to point out that on these forums the 3D applications of a CPU are of primary importance. If FMA makes 3D applications faster, and applications such as MMORPGs can take advantage of it, especially if they don't have to be recompiled, then it's a primary reason to look at a CPU that has FMA or uses FMA.

    To that end though, if you just look at the final result, Intel seems to come out on top, both performance-wise and price-for-performance-wise. I think they even perform better on performance per watt. I could be wrong about recent history, but that's been the case for a while. The only thing AMD gets the consumer is less expensive systems. That's not a bad thing, but we don't seem to be talking about overall cost here, just performance.

    So whether AMD's architecture just isn't utilized, or whether AMD's architecture is less efficient, it doesn't really matter, because in the real world Intel gives a better end result. Even if that better result comes from talking developers into using their stuff, it's still a better end result for the user.

    Right now. :-)

     

    Software 3D renderers aren't relevant to gaming outside of some really weird game engines that don't try to offload any work to the video card.  If an FX-8350 could get you 3 frames per second and a Core i7-3770K could only get you 2 in a software renderer, but a Radeon HD 7750 or GeForce GTX 650 paired with a more modest CPU could get you 50 frames per second in the same game at the same settings, is that really just an important win for AMD over Intel?

    The CPU side of a game engine wouldn't heavily use FMA the way that the GPU side would.  You don't rotate or otherwise transform vertices CPU-side.  That's what video cards are for.

  • KenFisherKenFisher Member UncommonPosts: 5,035
    All that power consumption goes somewhere.  I bet it cranks out the heat.

    Ken Fisher - Semi retired old fart Network Administrator, now working in Network Security.  I don't Forum PVP.  If you feel I've attacked you, it was probably by accident.  When I don't understand, I ask.  Such is not intended as criticism.
  • BookahBookah Member UncommonPosts: 260

    As with all new top-of-the-line cards, now is NOT the time to buy this card.

    As for AMD, I have had an awesome experience with their GPUs (I would never buy this card; I go for their cool, efficient versions).

    My wife and I (and my 2 good gaming friends) have all been using 5770s for a couple of years now, and they're still kickass, efficient cards.

    I think I bought mine in 2009, and I can still play pretty much anything with it @ 1900 x 1200. Not bad for $100 4 years ago.

    After dealing with Nvidia drivers & cards for a few years, that was quite enough.

     

    GO AMD!

  • IcewhiteIcewhite Member Posts: 6,403
    Originally posted by Mtibbs1989
    Originally posted by BlackLightz

    One step for mankind, one more step in the endless line of GFX cards with a minor upgrade.

     

    "Computer advancements double every 18 months." - Moore's Law

    There are salesmen who rely on your regular purchases of new GPUs. "It's got a biggah numbah, must buy."

    Self-pity imprisons us in the walls of our own self-absorption. The whole world shrinks down to the size of our problem, and the more we dwell on it, the smaller we are and the larger the problem seems to grow.

  • Darkness690Darkness690 Member Posts: 174
    Originally posted by Mtibbs1989
    Originally posted by Quizzical

    AMD today launched the Radeon HD 7990, which basically constitutes two Radeon HD 7970s on a single card.  It's a little faster than a GeForce GTX 690, but also uses a lot more power, so if you're limited by power, the latter is the better buy.  It's also more expensive than two Radeon HD 7970 GHz Edition cards while being slower, so if you're not sharply limited by power, two (or three!) 7970 GHz Edition cards make more sense.  If you want the fastest performance you can get in a single slot, the older unofficial 7990s from PowerColor and Asus are clocked higher, so the new 7990 is slower than those.  If you want max performance at any price, then two or three GeForce GTX Titans is the way to go, as that has considerably faster GPUs.

    So basically, AMD just launched a new card and I don't see any reason why anyone would want to get one.  But hey, it's finally here after being delayed by a year or so from the initial rumors.

     The Titan isn't a card that's "better" than the GTX 690. They've stated this a few times already. 

    “You’re going to see some people who just say, "I want maximum frame rate" and if you want maximum frame rate, GTX 690 is you.” - Nvidia’s Tom Petersen

    "If you want the best experience, if you want the best acoustics, then Titan is for you.”  - Nvidia’s Tom Petersen

    Don't argue otherwise; this is directly from Nvidia themselves.

    The Titan isn't better as a single card, but it has a faster GPU. If you have an unlimited budget, 4 GTX Titans are the way to go. Remember, the GTX 690 is a dual-GPU card, so you're limited to only two of those cards.

  • TheLizardbonesTheLizardbones Member CommonPosts: 10,910


    Originally posted by Quizzical
    Software 3D renderers aren't relevant to gaming outside of some really weird game engines that don't try to offload any work to the video card.  If an FX-8350 could get you 3 frames per second and a Core i7-3770K could only get you 2 in a software renderer, but a Radeon HD 7750 or GeForce GTX 650 paired with a more modest CPU could get you 50 frames per second in the same game at the same settings, is that really just an important win for AMD over Intel?  The CPU side of a game engine wouldn't heavily use FMA the way that the GPU side would.  You don't rotate or otherwise transform vertices CPU-side.  That's what video cards are for.

    I think you've highlighted what I think I'm saying. All of these in-depth details* are interesting to discuss, but if you're talking about what's better, then how many frames per second you get in the games you like to play, and how much those frames cost you in dollars and watts, are all that's really important. Those details are important from a PR point of view, I suppose, but not so much if nobody outside of the people who research hardware understands them.

    I am totally ignorant of those details. Yes, I've learned how matrix math works and even how it applies to 3D graphics, but the inner details of CPUs and GPUs are not only a mystery, they are a mystery I'm not going to look into. However, when I'm going to upgrade my system, none of that is really going to matter. I will find out which parts and which combination of parts are going to get me the performance I'm willing to pay for. That's what really matters. In that context, up to this point Intel has been the winner, since I'm willing to spend a little more money, and frankly it's been a toss-up between AMD and Nvidia. Sometimes it's one and sometimes it's the other. It depends a lot on what NewEgg has on sale.

    * I had to infer what FMA was from the conversation. Before this thread, I had never heard of it. After reading the posts, I had assumed the point of having FMA would be that it's something you can't do on a GPU, or that it would be faster to do it on the CPU. Your comments about software 3D rendering make me think that FMA is like a vestigial tail: interesting, and maybe you can do neat stuff with it, but if you didn't have it, it wouldn't matter.

    I can not remember winning or losing a single debate on the internet.

  • QuizzicalQuizzical Member LegendaryPosts: 25,355
    It's not that FMA is useless on a CPU.  It's just that it isn't that important on a CPU, unlike a GPU, where it's critical.  But as Moore's Law provides more and more transistors for CPUs to spend on a single core, AMD and Intel try to come up with new instructions that will be useful here and there.