AMD HD 6970, etc.

swing848 Member Uncommon Posts: 292

It looks like the HD 6970 will be released the last week of November, if all goes well.

I do not like AMD's naming scheme; it will confuse many people who are not techno-geeks, or nearly so.  I am still waiting to see if the HD 6970 will perform well enough to pry my factory overclocked HD 4890 out of my hands.

The HD 6870 is looking good.  However, if AMD had named it the HD 6770, the numbers would look even more awesome, because it beats the GTX 470 in some games.

In any event, the new video cards arrived last Friday, 22 Oct.  They run cool, and AMD stated that a lower price will be available for the next three weeks; after that the price will go up, though by how much I do not know.

Just be sure you understand: the HD 6870 is not the replacement for the HD 5870.  The replacement for the HD 5870 is the HD 6970.

Intel Core i7 7700K, MB is Gigabyte Z270X-UD5
SSD x2, 4TB WD Black HDD, 32GB RAM, MSI GTX 980 Ti Lightning LE video card

Comments

  • Quizzical Member Legendary Posts: 25,355

    AMD's naming scheme is a whole lot better than Nvidia's.  At least AMD doesn't give exactly the same name to totally different cards.  If AMD occasionally gives two different names to the same card, they at least have the rebrand end in a 5 or a V to flag it as such and only sell it to OEMs.

    Barts isn't really a replacement for either Juniper or Cypress.  Barts is slotted between them in performance, die size, power consumption, price tag, and so forth.  Cayman is expected to be significantly above Cypress.  Turks will probably be somewhere between Redwood and Juniper.  It's not a clean "this card replaces that one" lineup.  If AMD could have given Barts a second digit of 7 1/2 and Cayman a second digit of 8 1/2, maybe they would have, but that obviously isn't practical.

    Here's my explanation of why they did it, copied from elsewhere:

    -----

    There is a better justification that isn't being offered, though one has to go back to the start of AMD's modern naming system. The first generation of GPUs launched after AMD bought ATI had the Radeon HD 2400, 2600, and 2900 series parts. The 2900 had a huge GPU die of around 400 mm^2. It was also massively delayed, as AMD was unable to properly build GPUs that large.



    For the next generation, AMD had 3400, 3600, and 3800 series parts. The Radeon HD 3870 was basically a die shrink of the Radeon HD 2900 XT, and didn't really offer any better performance. If AMD had called it the 3970, people would have screamed about the naming system back then, too. It was a full node die shrink, with a die size of a mere 192 mm^2. That left room for much larger dies, and AMD wanted to leave the 9 digit available for that.



    In the next generation, AMD had the 4300/4500, 4600, and 4800 series. Then later, they added a 4700 series, with performance and pricing to match. The Radeon HD 4770 was basically a test part to see if AMD could make GPUs properly on TSMC's 40 nm bulk silicon process. The answer at the time was basically "no", but what AMD learned did help bring the next generation to market much sooner and more successfully.



    Anyway, the Radeon HD 4870, 4850, and 4830 were based on the RV770 GPU chip with a die size of 256 mm^2. That's larger than RV670, but AMD chose not to bump the last digit up to 9. AMD later launched a respin, with an RV790 chip that was 282 mm^2 and basically a higher clocked RV770. AMD could have called it a Radeon HD 4900 series, but then the top salvage part would have been slower than RV770. People would have complained if the Radeon HD 4950 were slower than the 4870, so AMD called the new cards the Radeon HD 4890 and 4860, respectively.



    In the next generation, AMD's top chip was Cypress, with a die size of 334 mm^2. This is still creeping upward, and it brought a big enough performance boost that AMD could have called it a Radeon HD 5900 series and no one would have flinched. The next GPU down, though, was Juniper, with a die size of a mere 180 mm^2, and it got branded the Radeon HD 5700 series. Had AMD called it the 5800 series, it would have had a Radeon HD 5870 slower than the Radeon HD 4870, and a lot slower than the Radeon HD 4890. That wouldn't have gone over well.



    However, AMD wanted to reserve the Radeon HD 5900 series for the dual GPU cards. The Radeon HD 3870 X2 had been two Radeon HD 3870s on a single card. The 4870 X2 had been two 4870s on a single card. Two 5870s on a single card would go over the 300 W cap of the PCI Express specification, however, so the Radeon HD 5970 is essentially two underclocked and undervolted 5870s on a single card. Calling it the Radeon HD 5870 X2 wouldn't have gone over well, either. Asus did release a custom card and marketed it as a Radeon HD 5870 X2, and it was essentially two 5870s on a single card, but went way over the 300 W cap of the PCI Express specification. Sapphire and XFX released similar cards, but branded as overclocked 5970s.



    That brings us to the current generation, where Barts is 255 mm^2 and being branded as the Radeon HD 6800 series.  That's essentially the same die size as RV770 of the 4800 series, and markedly larger than RV670 of the 3800 series.  Were it not for the 5800 series, no one would flinch at calling this the 6800 series.  Cayman is going to be much larger yet, and calling a huge GPU die only a 6800 series basically says you can never again use the 9 for a single GPU card, the way AMD/ATI had done in the past with the Radeon HD 2900 XT and the Radeon X1900 and X1950 series cards, or else in some future generation you'll have people complaining about an 8 series slower than the previous one.



    So really, there isn't a good naming scheme for AMD to apply here. It's not great for the 6800 series to be slower than the 5800 series, but I think it's understandable why they did it. And with the length of this post, one can understand why AMD didn't offer this as their official justification.

    -----

    Another explanation is that the second digit basically denotes the die size.  That's pretty strongly correlated with performance in a given generation and on a given process node, since if you have twice the die size, you can have twice as much of everything and get twice the performance.  I've seen it claimed that, with only two exceptions in the last 7 years (going well back to the ATI era, before they were bought by AMD), 100-150 mm^2 was a second digit of 6, 151-190 mm^2 was a second digit of 7, 191-340 mm^2 was a second digit of 8, and 341+ mm^2 was a second digit of 9.  I haven't checked whether all of the die sizes fit that, but it seems about right.  With a die size of 255 mm^2, Barts falls right in the middle of the 8 range.  With a die size of 170 mm^2, Juniper was a 7.
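    If you want to see the claimed rule at work, here is a minimal sketch in Python.  The mm^2 cutoffs are the ones quoted above; they are a community observation, not an official AMD policy.

        # Minimal sketch of the claimed die-size-to-second-digit rule.
        # Cutoffs are the ones quoted above, not official AMD policy.
        def second_digit(die_mm2):
            if die_mm2 >= 341:
                return 9
            if die_mm2 >= 191:
                return 8
            if die_mm2 >= 151:
                return 7
            if die_mm2 >= 100:
                return 6
            return None  # below the range the rule covers

        print(second_digit(255))  # Barts   -> 8 (Radeon HD 6800 series)
        print(second_digit(170))  # Juniper -> 7 (Radeon HD 5700 series)
        print(second_digit(334))  # Cypress -> 8 (Radeon HD 5800 series)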

    -----

    AMD didn't say that prices on AMD's cards would go up.  AMD said that prices on Nvidia's cards would go up.  AMD is basically claiming that Nvidia's price cuts on the GTX 460 and GTX 470 are only a temporary measure.  Surely Nvidia knows that if they raise prices to their previous levels, they're basically done selling those cards.  It's possible that Nvidia wants to get rid of existing inventory and discontinue the cards.  It's possible that Nvidia is preparing to release a respin or higher bin that would make the existing GTX 460 and GTX 470 obsolete.  And I guess it's also possible that AMD simply has bad information and is trying to make Nvidia look bad.

    Nvidia is in a very bad situation right now.  The GeForce GTX 470 gives about the same performance as a Radeon HD 6870, but takes more than double the die size and about 2/3 more power consumption to do it.  It likely costs more than twice as much to build a GTX 470 as it does to build a 6870, and if they sell for the same price at retail, that's a big, big problem for Nvidia.  The GTX 460, meanwhile, has its 1 GB version only roughly comparable in performance to a Radeon HD 6850.  The GTX 460 uses more power and has about 50% more die size than the 6850.  This is also comparing the top bin of GF104 to the bottom bin of Barts, so the 6850 dies should probably be counted at a discount rate.  Again, the GTX 460 is vastly more expensive to build than the 6850.  The GeForce GTS 450 compares to the Radeon HD 5750 in about the same way.
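    To put rough numbers on the cost claim, here is a back-of-the-envelope sketch using the standard dies-per-wafer estimate and a simple Poisson yield model.  The 255 mm^2 Barts figure is from this thread; the ~530 mm^2 GF100 figure and the defect density are my own assumptions for illustration.

        import math

        def dies_per_wafer(die_mm2, wafer_mm=300):
            # First-order estimate: wafer area over die area, minus edge loss.
            r = wafer_mm / 2
            return math.pi * r * r / die_mm2 - math.pi * wafer_mm / math.sqrt(2 * die_mm2)

        def good_dies(die_mm2, d0=0.002):
            # Poisson yield model; d0 (defects per mm^2) is a guess for an
            # immature 40 nm process, not a published figure.
            return dies_per_wafer(die_mm2) * math.exp(-d0 * die_mm2)

        barts, gf100 = 255, 530   # mm^2; the GF100 number is approximate
        print(dies_per_wafer(barts) / dies_per_wafer(gf100))  # ~2.3x at perfect yield
        print(good_dies(barts) / good_dies(gf100))            # ~3.9x with defects counted

    Even at perfect yield, AMD gets more than twice as many Barts dies per wafer, and defects only widen the gap, which is why "costs more than twice as much to build" is plausible.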

    That would be fine if Nvidia were merely a generation behind and about to relaunch an important new generation that would catch up.  But they aren't.  There are two traditional ways to get big improvements.  One is a new architecture, but that only happens about once every 2+ years, as they're expensive to develop.  Given that Nvidia already launched a new architecture earlier this year, only got the second GPU of the architecture out about three months ago, and the bottom GPU of the first generation out earlier this month, they don't have another new architecture coming soon.  Maybe Kepler can launch in about a year, but that doesn't help Nvidia until then.

    The other way to improve is to move to a new process node that lets you do the same thing as before with less die size and less power consumption.  The problem for Nvidia is that there isn't a new process node to move to yet.  TSMC's 32 nm process was canceled.  So was Global Foundries' 32 nm HKMG process.  (The SOI process is still going to come out, but that's for high clocked x86 CPUs, and not appropriate for video cards.)  TSMC and Global Foundries are coming out with 28 nm HKMG processes, but cards on those processes releasing next summer would be optimistic.  That would likely enable Nvidia to catch up to AMD's current cards, but AMD can move to the new process nodes, too, and get the same improvements--and will likely do so before Nvidia does.

    There are credible rumors that a GeForce GTX 580 is coming.  I say they're credible not least because it briefly showed up on Nvidia's web site.  It's unclear what exactly that will be.  It might merely be a respin of GF100.  It might apply the tweaks of GF104 in a GF100 die size, which would be a significant improvement.  It could be as simple as a rebrand of the GTX 480, though hopefully with a better reference cooler.

    -----

    AMD was trying to counter Nvidia's PR stunts.  This had two parts.  One was to slash prices on the GTX 460 and GTX 470 the day before AMD's new cards launched, so that reviewers would compare them to Nvidia's cards at the new prices, not the old prices.  AMD's new cards are still a better deal at the new prices, but it's not a complete slaughter like it would have been at the old prices.  Getting a given level of performance from the 6870 for $240 is a better deal than getting about the same level of performance from the GTX 470 for $260, but not by nearly so big of a margin as if the GTX 470 were still $300.

    Now, if Nvidia cuts prices on the GTX 460 and GTX 470 and keeps them down, that's certainly fair game.  I'm not sure how Nvidia can hope to make money at the new prices, but maybe yields have improved and they just haven't yet released new, higher bins.  AMD is saying that Nvidia's price cuts are temporary.  The easy way to find out who is right is to wait and see.

    The real problem was that Nvidia asked reviewers to include the EVGA GTX 460 FTW in the Radeon HD 6850 and 6870 reviews.  The EVGA GTX 460 FTW is basically a press edition card available only in very limited quantities.  In order to keep it in stock for very long, they'll have to hike prices on it greatly.

    Nvidia's basic argument is that the stock clocks on the GTX 460 aren't representative of what's actually out there to buy.  What Nvidia should have done about this is to release a new, higher bin of the GTX 460.  Call it a GTX 461 or some such.  Or maybe a GTX 560, given Nvidia's propensity to increment the first digit whenever they feel like it rather than tying them strictly to new generations of cards.  Make it a GTX 460 with a stock clock of 750 MHz or some such.  If they did that, then including the new card clocked at 750 MHz as a real competitor would be fair game.

    Nvidia's answer to this was to send reviewers a factory overclocked GTX 460 and say, here, put this in your review, as it's the real competition to the Radeon HD 6870.  But they didn't just pick any old factory overclocked GTX 460.  They picked the press edition version of the EVGA GTX 460 FTW.

    As of when I checked a couple of days ago, there were 43 GTX 460s on New Egg.  Many (perhaps most) of them had factory overclocks.  In fact, enough did that including a GTX 460 with an overclock of 750 MHz or so in reviews, clearly tagged as an overclock, would have been reasonable.  Nvidia's argument about factory overclocks has some merit, though Nvidia really should address it by releasing a new bin of the card (unless, per AMD's argument, they're about to discontinue it).

    But the EVGA GTX 460 FTW isn't a typical factory overclock.  It's the only GTX 460 with a factory overclock to above 815 MHz.  And it's not just barely the biggest; it's clocked at a whopping 850 MHz.  There are eight GTX 460s on New Egg clocked at 800-815 MHz.  That includes cards from MSI, Gigabyte, Galaxy, Palit, and Zotac, so they all settled on that as about the highest factory overclock that could be done on a "real" card, as opposed to press edition quantities.

    What Nvidia probably did was to bin out the GF104 chips that could clock the highest, and sell them all to EVGA for a special card to be around briefly for the Barts launch before selling out and disappearing.  Nvidia would essentially write off the cost of doing this as a marketing expense.

    That way, Nvidia could try to con reviewers into including it in the 6850 and 6870 reviews.  They could hope that someone who comes along months later looking for a good card and reading reviews would either not notice that they're comparing a stock Radeon HD 6870 to a factory overclocked GeForce GTX 460, or at least think it's a typical factory overclock, not something clocked massively above any other factory overclocked cards.  If Nvidia can convince a lot of casual readers months from now that the bars on the graphs showing a GTX 460 clocked at 850 MHz are a typical factory overclock, and that the factory overclock that they've actually found at 750 MHz should be comparable, then they've accomplished their goal.  If they can convince such readers that the 750 MHz factory overclock should be faster than the EVGA GTX 460 FTW, because they don't realize that that card is a factory overclock, so much the better.

    This gives an easy way to compare video card review sites, by seeing what they did with the EVGA GTX 460 FTW.  The reputable sites either ignored it, only talked about it when comparing to other overclocked cards, or underclocked it before using it in the review.  (A lot of the cards that get used in reviews are factory overclocked cards, because that's what the site happens to have available, but they set the clock speeds to the stock speeds before running any benchmarks.)

  • Quizzical Member Legendary Posts: 25,355

    Also, about the EVGA GTX 460 FTW being a press edition card, availability is very limited.  We're only three days past Nvidia needing them for their PR stunt, and there aren't many of the cards available.  New Egg has two SKUs of them, one of which is sold out, and the other of which has only 27 in stock.  Tiger Direct is sold out.  Amazon is sold out.  Google finds a couple other sites that have a few, but for well above MSRP, making them not such a good deal.

  • noquarter Member Posts: 1,170

    The new naming scheme is a little confusing at the moment but in a few months it will settle itself. It's confusing because you think of the 6870 as the replacement to the 5870 but it's not, it's the replacement for the 5770. Once the 6970 is out it should make more sense since there will be a whole line of cards for 6xxx at that point.


    Where is your wall of text from, Quizzical? Also, how are you able to check Newegg's stock?

  • Loke666 Member Epic Posts: 21,441

    Originally posted by Quizzical

    AMD's naming scheme is a whole lot better than Nvidia's. 

    Not by much... Both really suck; I miss 3DFX. Just a number is not good enough, people mix them up.

    I remember my old "Evil Queen of the Banshees" and Voodoo cards, that is the right stuff. :D

    It is all Matrox's fault; they started out fine as well with the Millennium, but the Matrox 200 card started the trend.

    Anyways, I think I will sit this generation of cards out; it seems to cost more than it is worth, even if it is an upgrade. I wanted DX 11, so that is why I got my current card; it will have to do for at least a year.

  • Loke666 Member Epic Posts: 21,441

    Originally posted by Quizzical

    Also, about the EVGA GTX 460 FTW being a press edition card, availability is very limited.  We're only three days past Nvidia needing them for their PR stunt, and there aren't many of the cards available.  New Egg has two SKUs of them, one of which is sold out, and the other of which has only 27 in stock.  Tiger Direct is sold out.  Amazon is sold out.  Google finds a couple other sites that have a few, but for well above MSRP, making them not such a good deal.

    Heh, ATI has done the first thing before too, releasing a card in small numbers to get it out fast. Tom's had a great article about that a while ago.

  • Barbarbar Member Uncommon Posts: 271

    As far as I understand, the short version of the naming scheme is this: ATI isn't done selling the 5700 series yet (mainly the 5770), and if they launched a 6700 series, the 5700 series would be dead as a Dodo.

  • swing848 Member Uncommon Posts: 292

    Quizzical's comments left me wondering a little where his words left off and someone else's started and ended.  The post generated a lot of words, though.

    One of my favorite sites is www.hardwarecanucks.com because they do a good, though sometimes limited, review.  But they often throw in some of the funniest comments.  They also have a YouTube review where they quote AMD promising that the 6000 series cards will be twice as fast as the 5000 series.  That caught my attention.  However, upon listening again I believe he was only talking about tessellation performance: http://www.youtube.com/watch?v=IKZFalqtmzs&feature=fvwk

    Seeing as how this is the first day [excluding the weekend] after the launch, there are probably 2 billion people reading about the new AMD card, and maybe a couple gazillion [Forrest Gump] aliens, so YouTube may suffer.

    Although I read a lot of sites, http://www.guru3d.com/ is another of my favorites, again partly due to their sometimes funny comments, but they also do a good job of reviewing.  Oh, and did I mention they already have them running in CrossFire?

    Intel Core i7 7700K, MB is Gigabyte Z270X-UD5
    SSD x2, 4TB WD Black HDD, 32GB RAM, MSI GTX 980 Ti Lightning LE video card

  • Quizzical Member Legendary Posts: 25,355

    Originally posted by noquarter

    Where is your wall of text from, Quizzical? Also, how are you able to check Newegg's stock?

    The part between the first two ----- is copied from something I wrote on another forum:

    http://forums.champions-online.com/showthread.php?t=102211&page=8

    I'm Quaternion there, as the name Quizzical was already taken.

    You can check New Egg's stock by trying to put all of it in your cart.  It will cap your order at what they actually have in stock.  That won't work with parts where they limit how many you can order, but it works for ones where they don't.  It works with some other sites, too.  Obviously, you then take the items back out of your cart rather than buying them.  In this case, go here:

    http://www.newegg.com/Product/Product.aspx?Item=N82E16814130581

    Try to put 99 of them in your cart.  At the moment, only 26 will actually appear in your cart.
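    For the curious, the trick boils down to the little loop of logic below, sketched in Python.  The endpoint, form fields, and response shape are hypothetical placeholders; New Egg's actual cart works differently and changes over time.

        import requests

        def stock_via_cart(item_id, requested=99):
            # Ask the cart for far more units than any store would allow...
            resp = requests.post(
                "https://shop.example.com/cart/add",   # hypothetical URL
                data={"item": item_id, "qty": requested},
            )
            # ...then read back the quantity the cart actually accepted,
            # which equals stock on hand whenever it is below the request.
            return resp.json()["qty"]  # hypothetical response field

        # Usage (a real store would need its actual cart endpoint):
        # print(stock_via_cart("N82E16814130581"))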

  • Quizzical Member Legendary Posts: 25,355

    Originally posted by Loke666

    Heh, ATI has done the first thing before too, releasing a card in small numbers to get it out fast. Tom's had a great article about that a while ago.

    This isn't a paper launch with few available at first and more to come later.  This is probably few available at first and few later, too.  One can argue that AMD did something kind of similar with the Radeon HD 4770, but by the time yields were high enough to make them in volume, no one would have cared, as people would have wanted a Radeon HD 5000 series card instead.  That wasn't a PR stunt; that was an effort at testing TSMC's new (well, it was very new at the time) 40 nm bulk silicon process node.

    Something more similar to what you're saying is ATI's X800 XT Platinum Edition.  At the time, both ATI and Nvidia put a large emphasis on who had the fastest card.  ATI and Nvidia had their top GPUs essentially tied in performance, but went back and forth releasing faster bins of them.  It ended with the X800 XT PE, which was the fastest card on the market at the time, but with only a couple hundred or so available worldwide, a large fraction of which went to media sites, so it was essentially a fake card just to claim that they had the fastest card on the market.

    But both of those are different.  The former is justified, and the latter is a PR stunt, but it's a different PR stunt with a different purpose.  ATI and AMD gave the low volume cards different names.  Someone looking for a Radeon HD 4770 and unable to find one wouldn't pick up a 4670 or a 4870 thinking it was the same thing.  (And if he did get a 4870, that's a faster card.)  Nvidia didn't give the GTX 460 a new card name for EVGA's special version.  They just called it a GTX 460, like dozens of others on the market.

    In fact, Nvidia is rather fond of giving the same name to different cards.  This generation alone, there are three different cards called the GeForce GTX 460, not counting factory overclocks:

    http://www.nvidia.com/object/product-geforce-gtx-460-us.html

    http://www.nvidia.com/object/product-geforce-gtx-460-oem-us.html

    There are two different GTS 450s, as well, with one offering probably less than 80% of the performance of the other:

    http://www.nvidia.com/object/product-geforce-gts-450-us.html

    http://www.nvidia.com/object/product-geforce-gts-450-oem-us.html

    Someone who buys a prebuilt machine thinking he's getting the GTS 450 that he saw in reviews likely isn't getting it.

    "Quizzical's comments left me wondering a little where his left off and someone else's started, and ended. The post generated a lot of words though."

    The comments were entirely my own.  Part was copied and pasted from what I wrote elsewhere, but all of it was my own.

    "However, upon listening again I believe he was only talking about tessellation performance"

    Basically, Radeon HD 5000 series cards are fast enough at tessellation that if you want to game at 2560x1600 and get 60 frames per second and want to tessellate to the point that every pixel has its own triangle, AMD's hardware tessellator will not be a bottleneck.
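    As a sanity check on that claim, the arithmetic is easy to run.  The resolution and frame rate targets are from the paragraph above; the roughly one-triangle-per-clock setup rate for the Cypress tessellator is my assumption for illustration.

        # Triangles per second needed for one triangle per pixel
        # at 2560x1600 and 60 frames per second.
        pixels_per_frame = 2560 * 1600
        tris_needed = pixels_per_frame * 60      # ~245.8 million triangles/s

        tessellator_clock = 850e6                # Cypress engine clock, Hz
        tris_per_clock = 1.0                     # assumed setup rate (illustrative)
        tris_available = tessellator_clock * tris_per_clock

        print(tris_needed / 1e6)                 # ~245.8
        print(tris_available / tris_needed)      # ~3.5x headroom
        print(tris_needed / 16 / 1e6)            # ~15.4 at 16 pixels per triangle

    At AMD's suggested 16 pixels per triangle, the load drops by another factor of 16, which is why the hardware tessellator isn't the bottleneck in sanely coded games.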

    You can, of course, make a synthetic benchmark that tessellates a whole lot more than that.  Unigine Heaven does exactly that in a couple of parts.  That makes tessellation into a bottleneck, and GF100-based cards handle that better than anything AMD has, so it's a synthetic benchmark that Nvidia has been pushing.  That has nothing to do with performance in real games, though, unless they're really badly coded.  AMD says that the optimal amount of tessellation is about 16 pixels per triangle, which is enough that you can't see the difference from adding more, other than the lower frame rates.

    AMD and Nvidia have very different hardware approaches to tessellation.  Nvidia does it in their "polymorph engine", one of which is attached to each "streaming multiprocessor".  GF100 has twice as many of these as GF104, and hence twice the theoretical tessellation performance.  GF104 still has enough that the difference doesn't matter except for synthetic benchmarks.  It does mean that lower end Nvidia cards have less tessellation power, which might be a problem on GF108-based cards.  Then again, GF108 isn't much of a gaming chip.

    AMD, on the other hand, has one fixed function tessellator on each GPU chip.  The high end Cypress and the low end Cedar have exactly the same tessellation power.  Well, I guess Cypress would tend to be clocked higher, so the tessellator would be clocked higher, giving it a little more performance.  But it's not a big difference.  AMD revamped the hardware tessellator in the 6000 series for better performance, but the only place that will ever help is in synthetic benchmarks and badly coded games that tessellate far beyond what is reasonable.

    What happened is that some Nvidia fanboys picked this up and said, get a Fermi, not an Evergreen, because it has superior tessellation performance.  It's kind of like saying to buy a card that can do 32x MSAA rather than one that can "only" do 8x MSAA.  The former doesn't make the image quality better in practice; it only bogs down the card's performance more.  Extremely high degrees of anti-aliasing or tessellation are justified if it's a render-once, not-in-real-time job, like the movies Pixar makes.  But not for gaming that has to be rendered in real time.

  • Quizzical Member Legendary Posts: 25,355

    There was one other trick that Nvidia tried that I'll mention.  Apparently HAWX 2 isn't out yet, but it has a benchmark.  Nvidia has worked with (and paid) Ubisoft to make sure that the benchmark would run well on Nvidia cards and would not run well on AMD cards, or at least not until AMD changes some stuff in their drivers to accommodate it.  So Nvidia told sites that they should use the HAWX 2 benchmark as one of their "games" to benchmark cards on, even though the game won't be out for weeks, and performance on the synthetic benchmark today likely isn't indicative of performance when the game actually launches, let alone of other real games.

    Thankfully, most review sites ignored this flagrant trickery, or at least didn't use the HAWX 2 benchmark in their review.  (Some did use HAWX, which is a real game that has been out for a while, so that's fine.)

  • AmazingAvery Age of Conan Advocate Member Uncommon Posts: 7,188

    Waiting to see what the new AMD dual high end card will be; probably another paper launch in Canada like the 5970, and it probably won't be available for at least 5 months at a normal retail store or newegg.ca again....

    AMD put the pressure on but didn't have any stock when I wanted to buy.  Shame for them I ended up with a GTX 480 on day one.  I could pass this card along to the wife's box and pick up the next high end no probs, but only if it is in stock...

    I think the fact that the 6870 is slower than the 5870 was done on purpose. (naming scheme by design)



  • Quizzical Member Legendary Posts: 25,355

    Originally posted by AmazingAvery

    Waiting to see what the new AMD dual high end card will be; probably another paper launch in Canada like the 5970, and it probably won't be available for at least 5 months at a normal retail store or newegg.ca again....

    There were a few reasons for that.  First, AMD couldn't get nearly as many wafers from TSMC as they wanted, so they couldn't make as many cards as they wanted and had to prioritize.  They decided to skimp on Cypress wafers, as the smaller die sizes mean you get more cards from a single wafer, and hence more market share.  Among the Cypress wafers they got, they decided to give short shrift to 5970s, rather than the cards that had more of a point to them.

    Second, the main point of the 5970 was so that AMD could claim that they had the fastest card on the market.  After the 5870 launched, Nvidia fans insisted that a GTX 295 was still faster, even though it was comparing two GPUs to one.  The 5970 let AMD claim that the 5970 was faster than a GTX 295, so they could claim the "halo" part, even if both cards were nearly pointless.  That still let AMD claim to have the fastest card after the GTX 480 launched, even if, again, it was comparing two GPUs against one.  Of course, I wouldn't have much interest in reference versions of either of those cards, as neither is properly cooled.  The reference 5970 manages to cool the GPUs adequately, but not the VRMs.  The GTX 480 cooling system can't handle the GPU well enough.

    Third, the 5970 had a very, very narrow market.  Two 5870s in CrossFire beat one 5970, and it's a lot easier to cool them with two cards than one.  The only real advantage of the dual GPU cards is being able to go for quad CrossFireX or quad SLI.  The market for that sort of super high end rigs is extremely small, though.

    As for being unable to find a Radeon HD 5970, they were available well before the GTX 480, and widely available when the GTX 480 paper launched.  Around a year ago, though, you made it clear that you were going to wait for Fermi to launch before buying anything, regardless of whether the AMD cards were available earlier or not.

    -----

    We should see a hard launch from Cayman, as AMD can simply divert production from older chips, and TSMC isn't as short on capacity as before.  As for Antilles, the dual GPU card, that probably depends some on whether AMD can get enough wafers.  The top Cayman card should beat a GTX 480 handily, and AMD already has the top dual GPU card, so there's no need to paper launch a new halo part.  If they have enough GPU chips, maybe they launch Antilles by the end of the year.  If not, then it makes more sense to allocate chips to the single GPU cards and claim more market share.

    It probably doesn't help that AMD is diverting a bunch of 40 nm wafers to Bobcat, as they're planning on selling millions of the APUs, and doing it on TSMC's 40 nm process rather than a traditional CPU process.  Using that process node makes sense, as that makes it mostly an easy cut and paste of Cedar to get the GPU part of the chip.  Bobcat is a new architecture, so the CPU part wasn't previously designed for any particular process, and they could design it for whatever they want.

    AMD's plan was that their high end video cards would be off of the 40 nm node in favor of 32 nm by the time Bobcat launched, so there would be plenty of capacity on 40 nm.  The basic idea is to make cheap chips on old process nodes, making them cheaper to manufacture, like AMD and Intel commonly do with chipsets and integrated graphics.  Then, of course, TSMC canceled their 32 nm process node, leaving 40 nm as still the cutting edge for high end video cards.

    And then there's also the question of whether Antilles will have much of a point.  I'd expect Cayman to have higher power consumption than Cypress.  If a single GPU card takes 200 W, putting two of them on a card and fitting it in a 300 W cap means you have to really underclock and undervolt things, to the degree that the card is nearly pointless.  And if it takes 250 W for a single GPU, then the dual GPU card really is pointless, unless you're willing to break the 300 W cap.
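    To make the 300 W arithmetic concrete, here is a rough sketch using the usual dynamic-power approximation: power scales roughly with frequency times voltage squared, so scaling clock and voltage together by a factor s scales power by about s^3.  The 200 W and 250 W figures are the hypotheticals from the paragraph above.

        # How far must each GPU be scaled back so that two of them fit under
        # the 300 W PCI Express card cap?  Assumes voltage scales with
        # frequency, so power ~ s**3 for a combined scale factor s.
        def clock_scale_for_budget(single_gpu_watts, card_cap_watts=300.0):
            per_gpu_budget = card_cap_watts / 2
            power_ratio = per_gpu_budget / single_gpu_watts
            return power_ratio ** (1 / 3)

        for watts in (200, 250):
            print(f"{watts} W GPU -> run at {clock_scale_for_budget(watts):.0%} clock/voltage")
        # 200 W -> ~91%: a modest underclock, like the reference 5970.
        # 250 W -> ~84%: enough lost performance to make the card questionable.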

  • AmazingAvery Age of Conan Advocate Member Uncommon Posts: 7,188

    Originally posted by Quizzical

    Originally posted by AmazingAvery

    Waiting to see what the new AMD dual high end card will be; probably another paper launch in Canada like the 5970, and it probably won't be available for at least 5 months at a normal retail store or newegg.ca again....

    There were a few reasons for that.  First, AMD couldn't get nearly as many wafers from TSMC as they wanted, so they couldn't make as many cards as they wanted and had to prioritize.  They decided to skimp on Cypress wafers, as the smaller die sizes mean you get more cards from a single wafer, and hence more market share.  Among the Cypress wafers they got, they decided to give short shrift to 5970s, rather than the cards that had more of a point to them.

    Second, the main point of the 5970 was so that AMD could claim that they had the fastest card on the market.  After the 5870 launched, Nvidia fans insisted that a GTX 295 was still faster, even though it was comparing two GPUs to one.  The 5970 let AMD claim that the 5970 was faster than a GTX 295, so they could claim the "halo" part, even if both cards were nearly pointless.  That still let AMD claim to have the fastest card after the GTX 480 launched, even if, again, it was comparing two GPUs against one.  Of course, I wouldn't have much interest in reference versions of either of those cards, as neither is properly cooled.  The reference 5970 manages to cool the GPUs adequately, but not the VRMs.  The GTX 480 cooling system can't handle the GPU well enough.

    Third, the 5970 had a very, very narrow market.  Two 5870s in CrossFire beat one 5970, and it's a lot easier to cool them with two cards than one.  The only real advantage of the dual GPU cards is being able to go for quad CrossFireX or quad SLI.  The market for that sort of super high end rigs is extremely small, though.

    As for being unable to find a Radeon HD 5970, they were available well before the GTX 480, and widely available when the GTX 480 paper launched.  Around a year ago, though, you made it clear that you were going to wait for Fermi to launch before buying anything, regardless of whether the AMD cards were available earlier or not.

    -----

    We should see a hard launch from Cayman, as AMD can simply divert production from older chips, and TSMC isn't as short on capacity as before.  As for Antilles, the dual GPU card, that probably depends some on whether AMD can get enough wafers.  The top Cayman card should beat a GTX 480 handily, and AMD already has the top dual GPU card, so there's no need to paper launch a new halo part.  If they have enough GPU chips, maybe they launch Antilles by the end of the year.  If not, then it makes more sense to allocate chips to the single GPU cards and claim more market share.

    It probably doesn't help that AMD is diverting a bunch of 40 nm wafers to Bobcat, as they're planning on selling millions of the APUs, and doing it on TSMC's 40 nm process rather than a traditional CPU process.  Using that process node makes sense, as that makes it mostly an easy cut and paste of Cedar to get the GPU part of the chip.  Bobcat is a new architecture, so the CPU part wasn't previously designed for any particular process, and they could design it for whatever they want.

    AMD's plan was that their high end video cards would be off of the 40 nm node in favor of 32 nm by the time Bobcat launched, so there would be plenty of capacity on 40 nm.  The basic idea is to make cheap chips on old process nodes, making them cheaper to manufacture, like AMD and Intel commonly do with chipsets and integrated graphics.  Then, of course, TSMC canceled their 32 nm process node, leaving 40 nm as still the cutting edge for high end video cards.

    And then there's also the question of whether Antilles will have much of a point.  I'd expect Cayman to have higher power consumption than Cypress.  If a single GPU card takes 200 W, putting two of them on a card and fitting it in a 300 W cap means you have to really underclock and undervolt things, to the degree that the card is nearly pointless.  And if it takes 250 W for a single GPU, then the dual GPU card really is pointless, unless you're willing to break the 300 W cap.

    It was well documented that in Canada a 5970 was not available when the time came to purchase, from the fall of the year before all the way up to the GTX 480 launch: http://www.mmorpg.com/discussion2.cfm/thread/273862/page/1

    No offence, but there was another thread somewhere where I talked about my desire for a 5970, as well as pricing.  In addition, I have no cooling issues with my 480, same with noise, and it is heavily OC'd.

    newegg.ca was OOS on 5970s for nearly 5 months at the time! I expect nothing less this time around.



  • Catamount Member Posts: 773

    Originally posted by AmazingAvery

    Originally posted by Quizzical


    Originally posted by AmazingAvery

    Waiting to see what the new AMD dual high end card will be; probably another paper launch in Canada like the 5970, and it probably won't be available for at least 5 months at a normal retail store or newegg.ca again....

    There were a few reasons for that.  First, AMD couldn't get nearly as many wafers from TSMC as they wanted, so they couldn't make as many cards as they wanted and had to prioritize.  They decided to skimp on Cypress wafers, as the smaller die sizes mean you get more cards from a single wafer, and hence more market share.  Among the Cypress wafers they got, they decided to give short shrift to 5970s, rather than the cards that had more of a point to them.

    Second, the main point of the 5970 was so that AMD could claim that they had the fastest card on the market.  After the 5870 launched, Nvidia fans insisted that a GTX 295 was still faster, even though it was comparing two GPUs to one.  The 5970 let AMD claim that the 5970 was faster than a GTX 295, so they could claim the "halo" part, even if both cards were nearly pointless.  That still let AMD claim to have the fastest card after the GTX 480 launched, even if, again, it was comparing two GPUs against one.  Of course, I wouldn't have much interest in reference versions of either of those cards, as neither is properly cooled.  The reference 5970 manages to cool the GPUs adequately, but not the VRMs.  The GTX 480 cooling system can't handle the GPU well enough.

    Third, the 5970 had a very, very narrow market.  Two 5870s in CrossFire beat one 5970, and it's a lot easier to cool them with two cards than one.  The only real advantage of the dual GPU cards is being able to go for quad CrossFireX or quad SLI.  The market for that sort of super high end rigs is extremely small, though.

    As for being unable to find a Radeon HD 5970, they were available well before the GTX 480, and widely available when the GTX 480 paper launched.  Around a year ago, though, you made it clear that you were going to wait for Fermi to launch before buying anything, regardless of whether the AMD cards were available earlier or not.

    -----

    We should see a hard launch from Cayman, as AMD can simply divert production from older chips, and TSMC isn't as short on capacity as before.  As for Antilles, the dual GPU card, that probably depends some on whether AMD can get enough wafers.  The top Cayman card should beat a GTX 480 handily, and AMD already has the top dual GPU card, so there's no need to paper launch a new halo part.  If they have enough GPU chips, maybe they launch Antilles by the end of the year.  If not, then it makes more sense to allocate chips to the single GPU cards and claim more market share.

    It probably doesn't help that AMD is diverting a bunch of 40 nm wafers to Bobcat, as they're planning on selling millions of the APUs, and doing it on TSMC's 40 nm process rather than a traditional CPU process.  Using that process node makes sense, as that makes it mostly an easy cut and paste of Cedar to get the GPU part of the chip.  Bobcat is a new architecture, so the CPU part wasn't previously designed for any particular process, and they could design it for whatever they want.

    AMD's plan was that their high end video cards would be off of the 40 nm node in favor of 32 nm by the time Bobcat launched, so there would be plenty of capacity on 40 nm.  The basic idea is to make cheap chips on old process nodes, making them cheaper to manufacture, like AMD and Intel commonly do with chipsets and integrated graphics.  Then, of course, TSMC canceled their 32 nm process node, leaving 40 nm as still the cutting edge for high end video cards.

    And then there's also the question of whether Antilles will have much of a point.  I'd expect Cayman to have higher power consumption than Cypress.  If a single GPU card takes 200 W, putting two of them on a card and fitting it in a 300 W cap means you have to really underclock and undervolt things, to the degree that the card is nearly pointless.  And if it takes 250 W for a single GPU, then the dual GPU card really is pointless, unless you're willing to break the 300 W cap.

    It was well documented that in Canada a 5970 was not available when the time came to purchase, from the fall of the year before all the way up to the GTX 480 launch: http://www.mmorpg.com/discussion2.cfm/thread/273862/page/1

    No offence, but there was another thread somewhere where I talked about my desire for a 5970, as well as pricing.  In addition, I have no cooling issues with my 480, same with noise, and it is heavily OC'd.

    newegg.ca was OOS on 5970s for nearly 5 months at the time! I expect nothing less this time around.

    Yes, but which 480 do you have?  Quizzical is only talking about the ones cooled with the reference cooler.  If the reference cooler is what you have, the odds are you DO have cooling problems, problems that will greatly affect ultimate longevity if not immediate stability (unless it really is so cold in Canada that your parts have 5C ambient temperatures to work with all the time anyway).
