Video card explanation?

Wrender Member Posts: 1,386

Was just wondering if someone could explain to me what the difference really is between an Nvidia card and a Radeon card. I know Nvidia used to be the leader in performance, but from what I understand Radeon has now reigned supreme. Why is this true? I have a friend who is a raging GeForce fanboy, and I have been trying to convince him he should go for an ATI card in his new computer, but even though I know it's true, I cannot find the words to convince him. Can someone enlighten me? Quizzical?

Comments

  • terrant Member Posts: 1,683

    Originally posted by Wrender

    Was just wondering if someone could explain to me what the difference really is between an Nvidia card and a Radeon card. I know Nvidia used to be the leader in performance, but from what I understand Radeon has now reigned supreme. Why is this true? I have a friend who is a raging GeForce fanboy, and I have been trying to convince him he should go for an ATI card in his new computer, but even though I know it's true, I cannot find the words to convince him. Can someone enlighten me? Quizzical?

    The real difference?

    They are two different manufacturers.

    Now, are there others? Sure. Nvidia's $200 card (arbitrary number) might perform better on games with certain types of graphical content than ATI's (the makers of the Radeon series) comparable model. Cards of similar power might be differently priced. One card might handle certain types of lighting better, another might be faster for certain specific applications. Many game developers optimize their games to work better with whichever manufacturer partners with them. Mobos sometimes prefer one card over another.

    BUT.

    Unless you're really, really hyper about getting the top performance for x game on x system, which one you get really is up to personal preference. I think ATI has worked better for me over the years; but that's just me.

  • Quizzical Member Legendary Posts: 25,355

    Radeon is the brand name for consumer video cards based on an AMD GPU chip.  GeForce is the brand name for consumer video cards based on an Nvidia GPU chip.

    In each generation, AMD and Nvidia have a bunch of graphics chips ranging from low end stuff that really isn't what you want for gaming up to high end stuff that you'd better have a capable case and power supply for if you don't want anything to fry.

    In desktops, consumers mainly care about performance per dollar, or more to the point, who will sell you the fastest card that fits your budget.  The last three generations, AMD has had a large advantage in performance per mm^2 of die size, which means that it cost AMD substantially less to build a card with a given level of performance than it cost Nvidia to build an equivalent card.  Once Nvidia has their entire Kepler lineup out, they should roughly catch up to AMD in that metric.

    In laptops, performance per watt is also a very big deal.  Better performance per watt means that you can get a given level of performance with less heat output.  Since gaming laptops are fundamentally about putting too much heat into too little space and trying to make it work, less heat output for a given level of performance is a big deal.  AMD has had a large advantage over Nvidia in this the last two generations.  Once Nvidia has their entire Kepler lineup out, they should roughly catch up to AMD in this, too.

    Performance per watt matters a little in desktops, but not very much, unless you're trying to cram more graphical power into a case than it was really meant to handle.  This could be due to either trying to upgrade an older computer with a cheap junk case, or going for unreasonably large amounts of power consumption in SLI/CrossFire setups, especially 3- and 4-way.
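
    As a rough illustration of how those three metrics relate, here is a minimal sketch with completely made-up numbers (they are not benchmarks of any real card):

        # Toy comparison of the efficiency metrics above; every number here is
        # invented purely for illustration, not measured from real hardware.
        cards = {
            # name: (relative performance, price in $, die size in mm^2, board power in W)
            "Hypothetical card A": (100, 250, 210, 175),
            "Hypothetical card B": (100, 300, 290, 230),
        }

        for name, (perf, price, die_mm2, watts) in cards.items():
            print(f"{name}: perf/$ = {perf / price:.2f}, "
                  f"perf/mm^2 = {perf / die_mm2:.2f}, "
                  f"perf/W = {perf / watts:.2f}")

    A card that hits the same performance with a smaller die, a lower price, and less power wins on all three metrics at once, which is what the last few generations have mostly come down to.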

    There are also feature differences.  AMD and Nvidia both offer stereoscopic 3D, but Nvidia has been working harder at it for longer, so if you're going to use stereoscopic 3D, that's a good reason to favor Nvidia.  AMD and Nvidia both offer the ability to spread a game window across multiple monitors, but AMD has been working harder at it for longer, so if you want to use several monitors, that's a good reason to favor AMD.  If you really love the fancy graphical effects in the couple of games per year or so that use GPU PhysX, that's a reason to favor Nvidia.  If you do GPU compute stuff, many such programs very strongly favor Nvidia or AMD (e.g., bitcoin mining very strongly favors AMD's architectures), so you go with whichever vendor's architecture works better with the one particular program that you really need.  Most gamers don't need any of the vendor-specific features, however.

    Right now, AMD has their entire Southern Islands lineup out.  Nvidia's competitor to it is Kepler, and they've paper launched a single card (retailers are allowed to sell it if they have it, which they don't) and that's it.  So if you get an Nvidia card, you're getting an older, previous generation card.  That will change in coming months as Kepler shows up.

    If you shop purely on price/performance, then Nvidia basically doesn't have anything competitive to offer in desktops under $200.  The typical situation is that you can get an Nvidia card, or you can get an equivalent AMD card for $20 cheaper.  Above that, they're more competitive, so long as you don't care much about the greatly increased power consumption resulting from the older generation cards.

    In laptops, performance per watt is a huge deal, so you shouldn't get anything other than a Southern Islands or Kepler card.  Here, Nvidia's problem is that AMD has released all of their Southern Islands cards already, while Nvidia has only been able to trickle out a handful of cards based on their lowest end GPU in the Kepler lineup.  Of course, for a gaming laptop, you also want either Ivy Bridge (the NDA has ended, and retailers are just waiting for Intel's permission to sell parts) or Trinity (rumored to launch May 15).  So you don't want to buy a gaming laptop right now, but that will change within days.

    At the high end in gaming laptops (loosely, $1500+), no Kepler means Nvidia has nothing to offer.  A Radeon HD 7970M is the only game in town, so if you buy Nvidia, you're guaranteed to get a vastly worse product than you could get from AMD.  That will change in coming months once Nvidia gets GK106 out, or maybe tries to cram GK104 into a laptop.

    For mid-range gaming laptops (~$1000), you can reasonably go with either AMD's Cape Verde or Nvidia's GK107.  I expect laptops with an Intel Ivy Bridge processor will use GK107 more commonly than Cape Verde, and laptops with an AMD Trinity processor will rarely, if ever, include Nvidia graphics.

    For budget gaming laptops, you want AMD integrated graphics, period.  Intel integrated graphics are terrible.  No x86 license means Nvidia is locked out of this market entirely.  You could perhaps get a relatively cheap discrete AMD card as well, such as the Radeon HD 7690M (a rebranded 6770M, which is a previous generation product) that HP sells in their Llano laptops.  But AMD integrated graphics won't play nicely with a discrete Nvidia card (because neither AMD nor Nvidia has any incentive to put in the work to make them work together), so if you get an AMD processor in a laptop, you want AMD graphics, too.

    -----

    If you're an Nvidia fanboy, one approach is to buy an Nvidia product anyway, and just accept that you're getting a markedly inferior product in the market segments where they're behind.  Another is to decide that you don't need a new computer today, and wait a few months for Nvidia to be competitive again.

  • Loke666 Member Epic Posts: 21,441

    Your buddy will probably get a better deal on the GFX card if he waits a little while; now is not the best time to buy a card.

    Once Nvidia's 600 series is out, it will probably drop AMD's prices a little too, so no matter what brand he prefers, waiting is the best choice unless he can get a great deal on a sale.

  • Quizzical Member Legendary Posts: 25,355

    Originally posted by Loke666

    Your buddy will probably get a better deal on the GFX card if he waits a little while; now is not the best time to buy a card.

    Once Nvidia's 600 series is out, it will probably drop AMD's prices a little too, so no matter what brand he prefers, waiting is the best choice unless he can get a great deal on a sale.

    AMD's price cuts have already happened:

    http://www.mmorpg.com/discussion2.cfm/thread/347270/AMD-cuts-prices-on-the-overpriced-half-of-their-Southern-Islands-lineup.html

    If you're looking for Nvidia to undercut AMD prices and force AMD to respond, when's the last time that ever happened with sub-$500 cards?

    I suppose that you could argue that the GTX 460 filled a hole in AMD's lineup, which made it a good value at launch if that was your budget, but it was in line with the prices on AMD's other cards, and merely hit a performance level where AMD didn't have a competitor other than low-volume severe salvage parts.  But that didn't force price cuts on a Radeon HD 5770 or 5850, the cards that the GTX 460 was between.

    And before that?  It's been long enough that I don't even remember.  You'd surely have to go back to when Nvidia was using four digit numbers on their cards rather than three digits.

    Now, if you're arguing that Nvidia will launch cards at prices in line with AMD's, and AMD will then cut prices to undercut Nvidia, I don't see that happening for at least a few months.  There's no sense in undercutting Nvidia prices if they don't have very many cards to sell.  Eventually Nvidia will be competitive, and cards may then start their slow decline in prices, but that's months away.

  • Trionicus Member Uncommon Posts: 498

    Originally posted by Quizzical

    Originally posted by Loke666

    Your buddy will probably get a better deal on the GFX card if he waits a little while; now is not the best time to buy a card.

    Once Nvidia's 600 series is out, it will probably drop AMD's prices a little too, so no matter what brand he prefers, waiting is the best choice unless he can get a great deal on a sale.

    AMD's price cuts have already happened:

    http://www.mmorpg.com/discussion2.cfm/thread/347270/AMD-cuts-prices-on-the-overpriced-half-of-their-Southern-Islands-lineup.html

    If you're looking for Nvidia to undercut AMD prices and force AMD to respond, when's the last time that ever happened with sub-$500 cards?

    I suppose that you could argue that the GTX 460 filled a hole in AMD's lineup, which made it a good value at launch if that was your budget, but it was in line with the prices on AMD's other cards, and merely hit a performance level where AMD didn't have a competitor other than low-volume severe salvage parts.  But that didn't force price cuts on a Radeon HD 5770 or 5850, the cards that the GTX 460 was between.

    And before that?  It's been long enough that I don't even remember.  You'd surely have to go back to when Nvidia was using four digit numbers on their cards rather than three digits.

    Now, if you're arguing that Nvidia will launch cards at prices in line with AMD's, and AMD will then cut prices to undercut Nvidia, I don't see that happening for at least a few months.  There's no sense in undercutting Nvidia prices if they don't have very many cards to sell.  Eventually Nvidia will be competitive, and cards may then start their slow decline in prices, but that's months away.

    Let me see if I understand this. There is no reason to buy Nvidia over AMD unless I want S3D or some rare eye candy PhysX?

  • Quizzical Member Legendary Posts: 25,355

    Originally posted by Trionicus

    Let me see if I understand this. There is no reason to buy Nvidia over AMD unless I want S3D or some rare eye candy PhysX?

    Not quite.  On a sub-$200 budget, there's no good reason to buy Nvidia for gaming purposes today in the US unless you find an unusually good deal or consider fanboydom to be a good reason.  That's a lot of caveats that I should address, however.

    1)  Unless you find an unusually good deal.  If you find a GeForce GTX 550 Ti for $80 including shipping and without rebates, then sure, buy it if that's your desired performance level.  But you won't find a GTX 550 Ti for $80 very often if ever; they're usually $120 and up.

    2)  In the US.  In different parts of the world, the options can be very different.  Someone was here recently wanting advice on choosing between the five particular video card SKUs available in his country.  When that happens, an Nvidia card could easily be the best value simply because there isn't a good price on an AMD card offered.

    3)  On a sub-$200 budget.  On a $500 budget, I'd recommend a GeForce GTX 680 for a single-monitor setup if you can find one in stock.  Even on somewhat smaller budgets, a GeForce GTX 560 Ti, GTX 570, or GTX 580 can sometimes be a good value for the money.  Nvidia has slashed prices on the latter two in order to clear inventory.  I'd only dismiss those cards out of hand if either you need newer or AMD-specific features or you're sensitive to power consumption.  AMD is still competitive in those price segments with the Radeon HD 7850, 7870, 7950, and 7970, but lately, it's the sort of situation where you can often get about as good of a value for the money from either vendor.

    4)  Today.  The situation can and will change in the near future.  In particular, the release of the rest of Nvidia's Kepler lineup will probably make Nvidia eventually competitive in most price segments.  Unlike Fermi, Kepler cards shouldn't be outlandishly expensive to build, which is what made Nvidia's current lineup uncompetitive under $200.  It may take a few months after release for yields and volume to get to where they need to be, but it should happen eventually.

    5)  For gaming purposes.  If all you need is a card that can display the desktop in Windows XP because your last one died, then you can pick up a very cheap card from a few generations ago from either AMD or Nvidia, or perhaps buy one used on eBay.  If you need a professional graphics card (e.g., for CAD purposes), then you go with whichever vendor's cards work best with the particular program you use--which is often but not always Nvidia.

    Note that I don't list stereoscopic 3D or GPU PhysX as good reasons to go Nvidia.  On a larger budget, sure, if that's what you're into.  But on a sub-$200 budget, you're probably not going to find a card that can do stereoscopic 3D right from either vendor, as you need a lot more graphical power to do that.  Recent games with GPU PhysX also tend to be demanding enough even without GPU PhysX that a sub-$200 card isn't going to get the job done if you're looking to max all of the fancy graphical effects.

  • Trionicus Member Uncommon Posts: 498
    Thanks for the clarification Quiz.
  • boikymar Member Posts: 60

    Originally posted by Loke666

    Your buddy will probably get a better deal on the GFX card if he waits a little while; now is not the best time to buy a card.

    Once Nvidia's 600 series is out, it will probably drop AMD's prices a little too, so no matter what brand he prefers, waiting is the best choice unless he can get a great deal on a sale.

    Agreed that prices will only drop, but a 6870 for $150 or a 480 for $220 certainly aren't bad deals.

  • Quizzical Member Legendary Posts: 25,355

    Originally posted by boikymar

    Originally posted by Loke666

    Your buddy will probably get a better deal on the GFX card if he waits a little while; now is not the best time to buy a card.

    Once Nvidia's 600 series is out, it will probably drop AMD's prices a little too, so no matter what brand he prefers, waiting is the best choice unless he can get a great deal on a sale.

    Agreed that prices will only drop, but a 6870 for $150 or a 480 for $220 certainly aren't bad deals.

    Prices will drop only in the sense that prices almost always tend to drop as time passes.  When's the last time that a video card that wasn't a vendor's top of the line collapsed prices in its market segment?  Maybe a Radeon HD 5850 back in 2009, but even that was just a second bin of AMD's top of the line Cypress GPU.  Before that, you'd have to go back to the Radeon HD 4650 and 4670 in 2008, arguably the first real budget gaming cards.  In both of those cases, AMD had a huge efficiency advantage over Nvidia to exploit.

    I see no reason why Nvidia would use GK106 and GK107 to start a price war.  Kepler is nice, yes, but it's not massively better than Southern Islands to the degree that Nvidia can undercut prices, AMD can't respond, and Nvidia can still make a fat profit.  Those are basically the circumstances that led to AMD starting a price war with their 4000 and 5000 series cards.  Once the rest of Kepler launches, AMD's efficiency advantage evaporates, and with it, their incentives to start a price war.

  • Amjoco Member Uncommon Posts: 4,860
    I still think Quizzical is the Rainman of computer hardware.  :D

    Death is nothing to us, since when we are, Death has not come, and when death has come, we are not.

  • Brenelael Member Uncommon Posts: 3,821

    One thing Quizzical didn't mention is that you can still get PhysX with an AMD (ATI) card. What happens, however, is that because ATI cards have to run PhysX in software instead of in hardware the way Nvidia cards do, the bulk of the PhysX processing gets pushed onto the CPU. Depending on the game, this can increase your CPU's operating temps and can even cause it to "throttle," hurting performance. Just an example... when EVE Online first introduced the "Captain's Quarters" expansion, some people who had ATI cards actually had their CPUs fry because of the increased load due to the poor optimization of the PhysX code used.

    Edit: Just to clarify... I'm not for or against either type of card. I've used both throughout the years. In fact, the computer I'm currently using has an ATI card in it, and I have no complaints about its performance. I just thought I'd throw in my 2 cents since I got the impression from some of the posters above that they took what Quizzical said as "ATI cards don't do PhysX."

    Bren

    while(horse==dead)
    {
    beat();
    }

  • Quizzical Member Legendary Posts: 25,355

    PhysX is a physics API that was developed by Ageia, which was later bought by Nvidia.  The reason PhysX can run on Nvidia cards but not AMD cards is because that's how Nvidia wants it, and Nvidia owns the API.  When you're writing the source code, it's easy to make it run poorly or not at all on particular hardware if that's what you want.  The x86 code path is poorly optimized and the OpenCL code path doesn't exist--both because that's how Nvidia wants it.  The entire point of buying Ageia in the first place was so that Nvidia could have a feature bullet point of something that GeForce cards can do and Radeon cards can't.

    PhysX can run either on a processor or on a video card.  A lot of games use it with the intention of running it on a processor.  The API is capable of doing a lot, so if you want some relatively simple physics computations in a game, it's often a lot easier just to use PhysX or some other API to do most of the work for you, rather than coding everything from scratch.
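
    To put some shape on "relatively simple physics," the sort of per-object work most games need each frame looks roughly like the plain Python sketch below; it's only a stand-in for the idea, not the actual PhysX API:

        # Semi-implicit Euler update for a batch of free-falling particles.
        # A single CPU core chews through this kind of per-object work easily;
        # it's a stand-in for "simple game physics," not real PhysX code.
        GRAVITY = (0.0, -9.81, 0.0)

        def step(particles, dt):
            for p in particles:
                p["vel"] = tuple(v + g * dt for v, g in zip(p["vel"], GRAVITY))
                p["pos"] = tuple(x + v * dt for x, v in zip(p["pos"], p["vel"]))

        particles = [{"pos": (0.0, 10.0, 0.0), "vel": (1.0, 0.0, 0.0)} for _ in range(1000)]
        for _ in range(60):              # one second of simulation at 60 updates per second
            step(particles, 1.0 / 60.0)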

    There are a lot of other graphical effects that are higher priority than way over the top physics computations, so if a game offers some very demanding physics effects, the only people who might enable them already have pretty capable gaming machines.  That usually includes several processor cores, so you can give PhysX a processor core to itself if needed.  There's no sense in trying to push it onto a video card unless what you have in mind is a lot more computationally intensive than a single processor core can handle.
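
    "Give PhysX a processor core to itself" just means letting the physics work run alongside the rendering work instead of blocking it. A minimal sketch of that idea (the function names here are hypothetical stand-ins, not anything from a real engine):

        # Run the frame's physics in a worker process (one dedicated core) while
        # the main loop keeps "rendering"; both functions are hypothetical stand-ins.
        from concurrent.futures import ProcessPoolExecutor

        def physics_step(state, dt):
            return [(x + vx * dt, vx) for x, vx in state]   # stand-in physics work

        def render(state):
            pass                                            # stand-in for drawing the frame

        if __name__ == "__main__":
            state = [(0.0, 1.0)] * 10000
            with ProcessPoolExecutor(max_workers=1) as pool:
                for _ in range(60):
                    future = pool.submit(physics_step, state, 1.0 / 60.0)
                    render(state)               # rendering overlaps the physics work
                    state = future.result()     # collect updated state for the next frame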

    The trouble with running PhysX on a video card is that the video card is usually needed to render a game.  If, excluding PhysX effects, the video card would be maxed out and the processor would be half idle, then sticking physics computations on the video card is incredibly stupid.  You only do that if what you want is more than the processor could handle.  Physics computations can be very parallel, so they are the sort of thing that video cards can handle, if not otherwise occupied in running the game.

    You can have a card switch back and forth between rendering a game and doing PhysX computations.  Fermi cards weren't very good at this, as they could only do one thing at a time.  This meant that they could do some game computations for a while, then stop and set that aside to do some PhysX computations for a while, then stop and set that aside to return to doing more computations for other stuff in the game.  Context switching like that brings a huge performance hit.  Older Nvidia architectures were much worse at it, even.  I don't know if Kepler is capable of running two things simultaneously; even if the architecture can, GK104 itself may or may not be able to, let alone the lower end cards.  
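
    A toy frame-time model shows why that context switching hurts so much; all of the millisecond figures below are invented just to illustrate the shape of the problem:

        # Made-up per-frame timings: one GPU serializing render + physics with a
        # context switch each way, versus hardware that can overlap the two.
        render_ms = 12.0     # hypothetical time to render one frame
        physics_ms = 4.0     # hypothetical time for that frame's physics
        switch_ms = 1.5      # hypothetical cost of each context switch

        serialized = render_ms + physics_ms + 2 * switch_ms   # one thing at a time
        overlapped = max(render_ms, physics_ms)                # both at once

        for label, ms in (("serialized", serialized), ("overlapped", overlapped)):
            print(f"{label}: {ms:.1f} ms/frame -> {1000.0 / ms:.0f} fps")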

    Doing GPU PhysX properly really means you need a dedicated video card for it.  What some people would do is, if they upgraded their video card and the old one was an Nvidia card, they'd keep the old card as a PhysX card.  Except that under Vista and XP, you can only have one video driver installed, so both cards would need to be Nvidia.  Which was really the point of Nvidia buying Ageia in the first place:  when you go to upgrade, if your new card is Nvidia, you can use GPU PhysX on the old card.

    Then Microsoft came along and said, hey, some people would like to have multiple video drivers installed at the same time?  Well then, we can offer that in Windows 7.  Now you can keep your old Nvidia card as a PhysX card, regardless of which vendor you buy the new card from.  (More importantly, this also enables discrete switchable graphics in laptops, as you can have one driver installed for the integrated graphics, and another for the discrete card.)

    So Nvidia said, hey, you're breaking our vendor lock-in!  The whole point of buying Ageia in the first place was to say that your new card has to be a GeForce or else you lose GPU PhysX!  So Nvidia changed their video drivers to disable PhysX on GeForce cards if it detects a Radeon card in the system.  People who have the ability to make a Radeon card identify itself as a GeForce have done so and seen that PhysX works just fine on the GeForce card even if the card rendering the game itself is a Radeon.  Disabling GPU PhysX when the system detects a Radeon card is nothing more than an artificial vendor lock-in, much like how Nvidia would disable SLI even if your hardware supports it if the motherboard manufacturer didn't pay Nvidia the "SLI tax".

    So the target market for GPU PhysX is people who have a high end gaming system that contains multiple GeForce video cards capable of running games well (the low end cards are too weak to be useful for GPU PhysX), but aren't running the video cards in SLI.  If that sounds like a rather small potential audience, you're right.

    So game developers said, fine then, we won't implement GPU PhysX.  If we use PhysX at all, it will run on the processor, not the video card.  That kills Nvidia's marketing point if there aren't any games that actually use GPU PhysX.  So what Nvidia does is that for a game or two per year, they'll pay some company to implement GPU PhysX in a game.  Game companies are generally fine with this.  If it costs $1 million to implement, and Nvidia pays that $1 million, sure, they'll do it.

    That's why there are occasionally significant games that have PhysX run on a GPU, but also why there aren't very many of them.  Nvidia can't pay every major game developer to use it in all of their games.  It probably gets officially classified as developer relations expenses, but it's really a marketing expense.

    For what it's worth, AMD has been pushing Bullet physics coded in OpenCL as the GPU physics alternative to PhysX.  OpenCL has the advantage that it will run on either GeForce or Radeon cards, while the developer only has to code it once.  The problem is that this only expands the target audience from those with two GeForce gaming cards not in SLI to two powerful gaming cards not in SLI or CrossFire, which is still a pretty small audience.  So game developers won't implement it unless someone else pays for it.  Nvidia won't; it would undercut their PhysX marketing.  AMD isn't willing to, either.  So no one does it.
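
    The appeal of the OpenCL route is exactly that "code it once" part: the same kernel source runs on either vendor's cards. A minimal sketch of the idea, assuming the pyopencl and numpy packages are installed (the trivial gravity kernel below is just an illustration, not anything from the actual Bullet library):

        # One tiny OpenCL kernel, compiled and run on whatever OpenCL device the
        # driver exposes -- AMD or Nvidia, same source either way.
        import numpy as np
        import pyopencl as cl

        kernel_src = """
        __kernel void apply_gravity(__global float *vel_y, const float dt)
        {
            int i = get_global_id(0);
            vel_y[i] -= 9.81f * dt;
        }
        """

        ctx = cl.create_some_context()            # picks whatever OpenCL device is present
        queue = cl.CommandQueue(ctx)
        program = cl.Program(ctx, kernel_src).build()

        vel_y = np.zeros(1024, dtype=np.float32)
        flags = cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR
        buf = cl.Buffer(ctx, flags, hostbuf=vel_y)

        program.apply_gravity(queue, vel_y.shape, None, buf, np.float32(1.0 / 60.0))
        cl.enqueue_copy(queue, vel_y, buf)        # read the updated velocities back
        print(vel_y[:4])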

    For what it's worth, AMD's Cayman GPU is able to run two things at once.  So you could, for example, have part of the card doing physics computations and part of the card rendering the game at exactly the same time.  This avoids the need for context switching, and the huge performance hit that brings.  I'd be surprised if Tahiti can't do the same, but I don't know about Pitcairn and Cape Verde--this could easily be one of the non-gaming bloat things that the top GPU chip gets and the rest of the line doesn't.  If this becomes common in future architectures, it could conceivably expand the target audience for GPU physics to people who have a high end video card, which is enough that some developer might bite.  So far as I know, that hasn't happened yet, though.
