Nvidia is about to pwn j00


Comments

  • Loke666 Member Epic Posts: 21,441

    Sounds good, but until someone actually compares ATI's and Nvidia's next-gen cards, we can't really tell which one is best.

    It is kinda like the old "Will TOR, Rift or GW2 be the best game" discussions.

    It's great that they're both making new cards. I'll decide whether to upgrade from my 480, and to which card, once I actually see some benchmarks from Tom's or a similar independent place.

    Otherwise I'll play it cool and skip this generation. Since I already have DX11, the question is whether it is even worth the money to upgrade.

    But if you have an old crap card, it is good that new cards will hit the market, or if you plan to build a new computer for all the great games coming.

  • Benthon Member Posts: 2,069

    The pages are currently down, probably due to massive traffic, but if you can get them up...

     

    http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/1.html

     

    http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580_SLI/1.html

     

    Quick summary (not mine; it was a comment from where I pulled the links): "It runs at half the volume and draws a little less power than the GTX 480 whilst performing on par with the 5970."


    He who keeps his cool best wins.

  • AmazingAvery Age of Conan Advocate Member Uncommon Posts: 7,188
    The NDA is over in 6 hrs 45 mins.



  • Quizzical Member Legendary Posts: 25,353

    "It is kinda like the old "Will TOR, Rift or GW2 be the best game" discussions."

    That's easy.  GW2 will be the best of the three games.  :D

    Oh, wait...

    "Otherwise will I play cool and jump over this generation."

    Yeah, no real sense in upgrading from a GTX 480.  The real jump in performance comes with the next generation, and the transition to 28 nm HKMG process nodes.  Nvidia will have a new architecture then, too, though Kepler's performance is unknown.  Nvidia hasn't offered any real performance improvements (as opposed to new features) from architecture changes (as opposed to the additional transistors available from a die shrink) since 2006, though, so they're rather overdue for some big architecture improvements.  In contrast, AMD got a big jump from RV770 in 2008; we don't yet know what Cayman offers, but rumors say that Barts may well be closer in architecture to Evergreen cards than to Cayman.

  • Benthon Member Posts: 2,069

    Originally posted by Quizzical

    "It is kinda like the old "Will TOR, Rift or GW2 be the best game" discussions."

    That's easy.  GW2 will be the best of the three games.  :D

     What have you done!?

    He who keeps his cool best wins.

  • Quizzical Member Legendary Posts: 25,353

    Upon further review, if one goes by transistor counts, Nvidia is actually regressing.  A Radeon HD 5670 greatly outperforms a Radeon HD 2900 XT, in spite of having significantly fewer transistors.  The big architecture jump there was from the Radeon HD 3000 generation to the 4000 generation, and the 5000 generation actually took a step back there, though Barts seems to have made up the lost ground.

    On the Nvidia side, a GeForce GT 240 gives markedly worse performance than a GeForce 8800 GTX, in spite of having more transistors.  That's going in the wrong direction.  That does mean more room to make up ground for Kepler to potentially be a huge jump in performance, though.

  • Shinami Member Uncommon Posts: 825

     

    The Civilization V test was run with Fraps recording the game for five minutes. They claimed they tested every aspect of gameplay in five minutes, which, after playing the game for over 90 hours, I can attest is impossible to do. That whole test is unrealistic, impractical, and even unprofessional, unless of course you truly believe that in the real world the majority play under such conditions.

     

    The argument about warranties is not about siding with Nvidia or ATI. It's about siding with a company that can give you something more for the product you pay for. An assurance. Back in the 60s, through regulation and deregulation, the government pushed companies into applying warranties to their products. Unfortunately, many companies in the tech industry do not uphold their claims, forcing consumers to buy warranties through the store they purchased a product from. I appreciate eVGA and any other company out there capable of honoring a 90-day step-up plan to protect a purchase and a limited lifetime warranty on any top-of-the-line card. If you want to view the world in Green or Red, or Blue or Red, rather than accepting that at the end of the day companies only care about profits and consumers have to protect their investments, then be my guest. You are free to your opinion!

     

    MLAA is a good attempt. You want to talk about blurry images? Should I take you back to the nightmare that AA and AF were when they were FIRST RELEASED, and how OpenGL had better performance with AF at first? MLAA is a new technology, its intention is quite clear, and I support it because it has a lot of potential.

     

    Now that we are talking about technology, need I remind you that the whole is greater than the sum of its parts? You are absolutely right about mm^2 when it comes to gaming, especially since ATI cards are built on a minimalist level for gaming and flop at practically everything else. This is why ATI built itself on the gaming market and lost in the server and development market since, gee, I don't know... ATI can't make a decent Linux/Unix driver and ended its career by being bought by AMD.

     

    Or did it slip your mind that Nvidia cards have physics processing incorporated via CUDA integration? Claiming that a smaller die in mm^2 is better in this case, despite lacking certain features, is like going out on a date and finding your partner only has it where it counts in one area and fails everywhere else.

     

    Are you a xenophobe? A minority of the world's population plays 3D computer games. More people play games on consoles, and handhelds along with smartphones are the latest tech trends. Even with those trends, the majority of the world plays simple games. Have you any idea how many people play Sudoku, solve chess puzzles, or do crosswords? The only time you find other 3D gamers in the real world is when you start a gaming club on a college campus and get a mix of console and PC gamers. A lot of people love to play Yahoo Games, AOL Games, Neopets, etc.

     

    In the real world I've encountered more artists and engineers who use Nvidia cards. Macs used to be all ATI cards. Now 2/3rds of their lineup is Nvidia. Linux systems have great Nvidia drivers, while ATI can't even make a decent Linux driver that won't force Linux users to cross their fingers each time they open a program through Wine or Cedega. The server and development world knows Nvidia outperforms ATI.

     

    Of course, we live in a society where people always want a lower price for everything. This means companies cut corners to give people the products they want. We live in a quantity-over-quality nation. Europeans, Asians, and South Americans are about quality over quantity. They preserve their things and want them to last. You could win an "ad populum" argument, as the majority will say "I want the lower price," but the majority isn't very smart either, and the uneducated masses are everywhere...

     

    What I do for gaming is this:

     

    I use a 64-bit OS with 8 GB of RAM. I run GTX 480s in SLI on a Linux + Wine system. I take the game configuration files and change the settings, and then I have a shell script for each game that loads the files I want into a ramdisk. I have a lot of customization on the OS itself that helps. It's true that Linux gives me a lower framerate than Windows, but I also get lower ping: around 10-25% lower ping in practically every single game I've played comparing both. With my hardware I break 60 FPS in every game I care about, and it plays smoothly. ^_^ (I will say in FAIRNESS that Linux does not support or run any nProtect/GameGuard game.)
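
    (For illustration only: a minimal sketch of the kind of per-game ramdisk launcher described above, written in Python rather than shell. Every path, game name, mount point, and environment variable in it is hypothetical and not taken from the post.)

```python
#!/usr/bin/env python3
# Hypothetical per-game launcher: stage pre-edited config files onto a ramdisk
# (an already-mounted tmpfs), then start the game through Wine.
# All paths and names below are made up for illustration.
import os
import shutil
import subprocess
from pathlib import Path

RAMDISK = Path("/mnt/ramdisk")                               # assumed tmpfs mount point
CONFIG_SRC = Path.home() / "game-configs" / "mygame"         # pre-edited .ini files
GAME_EXE = Path.home() / ".wine/drive_c/Games/MyGame/mygame.exe"

def stage_configs() -> Path:
    """Copy the pre-edited config files onto the ramdisk and return that directory."""
    dest = RAMDISK / "mygame"
    dest.mkdir(parents=True, exist_ok=True)
    for cfg in CONFIG_SRC.glob("*.ini"):
        shutil.copy2(cfg, dest / cfg.name)
    return dest

def launch(config_dir: Path) -> None:
    """Run the game under Wine, telling it (via a hypothetical env var) where the configs live."""
    env = dict(os.environ, MYGAME_CONFIG_DIR=str(config_dir))
    subprocess.run(["wine", str(GAME_EXE)], env=env, check=False)

if __name__ == "__main__":
    launch(stage_configs())
```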

     

    Now, I could go run a 6XXX-series CrossFire setup and watch my games not even start on Linux + Wine.

     

    As far as I see, ATI is great if you want to remain dependent on Windows... which means you will also run antivirus, spyware removal, and adware tools. The dumber of the lot will run software firewalls, and thanks to the way the OS is set up, your performance also goes downhill, especially when you agree to play nProtect games and deal with kernel-level rootkits. Not to mention having software (like Adobe Flash Player) with a higher access level than even you have. What about all the SECURITY in programs that slows things down to a crawl? So yeah, you can go buy mighty hex cores and have most of those cores being used to retain all the crap you have to keep on to feel safe... even after configuration, because once you take a hit to your registry or IE in any way, your OS is toast.

     

    Let's not forget all the monitoring for trojans and all the problems that exist where each new update is to plug some hole in IE or some other part of the OS itself... I mean, you have enemies on all sides: the BLOATWARE of the OS, the BLOATWARE of the software, plus SECURITY BLOATWARE, and then everything incoming to your system.

     

    Mind you, I know how to put a stop to that, but there are things that you have NO REAL CONTROL OVER, even under the highest permission level you can set in Windows (even Windows 7).

     

    Nvidia gave me the freedom to play all the games I love at 60+ FPS and to send Microsoft to hell. 99% of what I do now is on Linux... and I play games from 2009 and 2010, and I make it all work.

     

    The best part of all this? No more paying money for software that gets bloated up. The software performs better than the Windows or Mac equivalent, running efficiently on one core versus using 2-3 cores in Windows because some anal company adds "security" to "protect its profits by making sure I am not a software pirate each time I run something." And thanks to how I have Linux set up, I can go right through all the DRM BS and all the encryption BS on DVDs and Blu-ray too.

     

    [mod edit]

  • Benthon Member Posts: 2,069

    The longer you typed, the angrier you got. It seems like you're on a hate-crusade against Microsoft and Apple. I enjoy all my games at 60+ FPS with all the settings on max too, with Windows, with all my bloatware, with my useless anti-spyware, and everything else that encompasses my operating system. And you're on Linux with the same result. Congratulations.

    He who keeps his cool best wins.

  • eyeswideopen Member Posts: 2,414

    My main reasons for always going with Nvidia over ATI are, for one, that no matter who you buy your Nvidia card from, Nvidia will support it, while ATI will tell you "we do not manufacture or support our cards, you must deal with whoever manufactured them." Another is that I have never had an issue with an Nvidia card's drivers, other than the beta ones on rare occasion, while ATI drivers are usually like trying to run Linux instead of Windows: you have to fuck around with them or find modded drivers to get them to work.

    After years of Nvidia, I took a chance on an ATI, and thoroughly regretted it. Went back to Nvidia and that's where I'll stay.

    That's not to say ATI doesn't make some good cards, or that no one has ever had trouble with Nvidia cards. But my personal experience, coupled with the ATI horror stories I've seen in various game and technical forums, just convinces me to consider ATI a no-go.

    -Letting Derek Smart work on your game is like letting Osama bin Laden work in the White House. Something will burn.-
    -And on the 8th day, man created God.-

  • Quizzical Member Legendary Posts: 25,353

    Originally posted by Shinami

    The Civilization V test was run with Fraps recording the game for five minutes. They claimed they tested every aspect of gameplay in five minutes, which, after playing the game for over 90 hours, I can attest is impossible to do. That whole test is unrealistic, impractical, and even unprofessional, unless of course you truly believe that in the real world the majority play under such conditions.

    So basically, your argument is that what actually happens when a card is used in real games under real-world circumstances doesn't matter.  All people should care about is canned benchmarks.  And anyone who disagrees is "ignorant"?

    What do you buy gaming video cards for, if not to play games on?

    "The argument about warranties is not about siding with Nvidia or ATI. Its about siding with a company that can give you something more for the product you pay for. An assurance."

    Because warranties aren't serviced directly by Nvidia or AMD.  Is EVGA's warranty service really so much better than XFX's?  Did switching to AMD magically make XFX's warranty service vanish?  You're the one that brought this up.

    Now sure, I could understand citing warranty service as a reason to buy a GTX 580 from EVGA rather than from Sparkle.  But that wasn't the argument you made.

    "This is why ATI built itself on a gaming market, lost in the server and development market"

    Because there's a fortune to be made selling high end gaming video cards to servers that won't run games?  Right.

    "ATI can't make a decent Linux/Unix driver and ended their career by being bought by AMD."

    And how many people use Linux or Unix?  Not many.  Sure, I could see going with Nvidia for Linux drivers.  But most games will run a lot better on an AMD card running Windows than an Nvidia card running Linux.  The same is true if you swap "AMD" with "Nvidia", too.  These are gaming products, remember.

    Apparently you don't have enough other points to make, and had to bring up Linux in no fewer than six different paragraphs scattered throughout your post.

    "Or did it slip your mind that Nvidia Cards have Physics Processing incorporated into its cards with CUDA integration?"

    So we're at what, three significant games that use GPU PhysX so far, plus a handful of awful games that no one cares about?  And those few games that do use it only use it for some fancy effects that one could just as soon do without.

    How many people who are looking for a video card care about CUDA?  Even for video transcoding, someone who only needs to do a little bit of transcoding can just as well run it on a CPU and let the computer run overnight.  I guess a handful of people who use Adobe Creative Suite 5 might also care about CUDA temporarily, until they port it to OpenCL.

    "Are you a Xenophobe?"

    If all else fails, try insults?

    "A minority of the world's population play 3D computer games."

    Which is why a minority of the world's population cares in the slightest about the difference in performance between the various discrete cards on the market.  But if we are going to compare video cards, how about comparing them in the things that people will actually use them for?

    "In the real world I've encountered more artists and engineers who use nvidia cards."

    I thought we were talking about GeForce and Radeon cards.  Now you want to change the subject to Quadro and FirePro?

    "Macs used to be all ATI cards. Now 2/3rds their lineup are Nvidia."

    Someone really ought to tell Apple about that.

    http://store.apple.com/us/browse/home/shop_mac/family/imac?aid=AIC-WWW-NAUS-K2-STARTING-IMAC-INDEX&cp=STARTING-IMAC-INDEX

    Let's see here, you have your choice of a Radeon HD 4670, a Radeon HD 5670, or a Radeon HD 5750.  Where are the 2/3 of Nvidia cards?

    http://store.apple.com/us/browse/home/shop_mac/family/mac_pro?mco=MTM3NDc2NTk

    There's a Radeon HD 5770, upgradeable to a Radeon HD 5870.  But still no Nvidia cards.

    Apple laptops do include Nvidia integrated graphics, and fairly low end discrete Nvidia video cards.  AMD integrated graphics weren't an option here, as AMD doesn't make graphics for Intel processors.  Intel's mobile processors are a lot better than AMD's right now, and that is what drove Apple's decision on the integrated graphics.  And then for the higher end MacBook Pros, Apple used the same GPU, but this time on a discrete card instead of integrated.

    "I take the game configuration files and change the settings and then I have a shell script for each game that loads the files I want into ramdisk. I have a lot of customization on the OS itself that helps. Its true that Linux gives me a lower framerate than Windows"

    And I just install the game and it runs by default without having to tinker with things on my end.  And how exactly is that an advantage for you?

    "Around 10 - 25% lower ping in practically every single game Ive played comparing both."

    Whoa, Linux on your machine magically makes routers hundreds of miles away process your packets faster!  Impressive!  </sarcasm>

    "So yeah, you can go buy mighty hex cores and have most of those cores being used to retain all the crap you have to keep on to feel safe."

    All of the background security stuff added together adds up to a rounding error in CPU usage on one core.  Maybe that will be a slight problem on games that can actually push all four cores.  If there were any such games.

    "each new update is to plug some hole in IE"

    Who uses Internet Exploder, anyway?

    Yeah, Microsoft has to release a lot of patches to fix security holes in Windows.  Other OSes have security holes, too, though.  The reason they're "safer" is that they aren't targeted so much--because not many people use them.

    "this will be the last I will view or pay any mind to this thread."

    A long rant followed by something to the effect of "and I'm not going to read your reply".  If you do reply again, I'll quote you on that.

  • choujiofkono Member Posts: 852

    My main reason for buying Nvidia cards is experience.  I bought an ATI 9700 Pro for around $350 and it went bad within a year.  I returned the card for a new 9700 Pro and it also went bad within 4 months of the return.  I returned that card, and they said they no longer had the 9700 Pro and upgraded me to the 9800 Pro.  This was fantastic (sort of) until the 9800 Pro also went out a year later.  That was enough BS for me; I have never bought another ATI card since and have never looked back.  I have never had an Nvidia card go bad on me since either, so the two coincidences combine for me.  I don't know what the deal was at the time, but they lost a good customer with the constant problems with those cards.  I have never had any driver issues with Nvidia either, and that includes the Ubuntu installation that I run my render farm off of.  For me the Nvidia cards have a lot more use also because of the CUDA functionality, which is inherently more efficient than general GPU computing.  Not to mention PhysX as a little bonus.

    "I'm not cheap I'm incredibly subconsciously financially optimized"
    "The worst part of censorship is ------------------"

  • Benthon Member Posts: 2,069

    It looks like the GTX 580 underclocks itself in order to maintain low noise levels and temperatures at high loads, rather than having the "revolutionary" design Nvidia touted.

    http://gadgetsteria.com/2010/11/09/nvidia-gtx-580-downclocks-under-load-to-lower-temps-spoils-the-fun/

    He who keeps his cool best wins.

  • Mehve Member Posts: 487

    Originally posted by Benthon

    It looks like the GTX 580 underclocks itself in order to maintain low noise levels and temperatures at high loads, rather than having the "revolutionary" design Nvidia touted.

    http://gadgetsteria.com/2010/11/09/nvidia-gtx-580-downclocks-under-load-to-lower-temps-spoils-the-fun/

    It's known that both Furmark and OCCT get downclocked. I have yet to see any reports of this in games, though. I'm also a little curious whether this is hardware or software based.

    Might be worth checking THIS page out: it covers Folding@Home, which is a pretty brutal stress test in itself, with performance and power consumption figures. Better performance, but lower temps. So it can be stated with a fair degree of confidence that the downclocking isn't happening under anything less than extreme circumstances.

    A Modest Proposal for MMORPGs:
    That the means of progression would not be mutually exclusive from the means of enjoyment.

  • Quizzical Member Legendary Posts: 25,353

    What matters is not the clock speeds for their own sake, but the performance.  My processor clocks down while playing Guild Wars because it thinks that counts as idle, but it doesn't hurt performance.

    Anandtech measures the temperatures while playing Crysis, and finds that the GTX 580 fixed the heat and noise problems of the GTX 480:

    http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580/17

  • AmazingAvery Age of Conan Advocate Member Uncommon Posts: 7,188

    AMD is making purchasing decisions for the high end market with the delay of the 6990 to Q1 2011 (maybe even March 2011!) announced at their Financial Analyst Day.

    Maybe Antilles is dual Barts after all and therefore Cayman (naming scheme now fits) hoping to be level with GTX580? Cayman @ 40% better than 5870 ain't gonna happen... Interesting times.

    I can easily pass the 580's bandwidth with some mild overclocks to my 480 (and have been since release).
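
    (For context, a quick back-of-envelope check of that bandwidth claim. The stock figures assumed below, a 384-bit bus at 3696 MT/s effective for the GTX 480 and 4008 MT/s for the GTX 580, are commonly listed specs and should be treated as assumptions rather than numbers from this thread.)

```python
# Rough GDDR5 bandwidth math; input figures are assumed, see note above.
def bandwidth_gbs(bus_width_bits: int, effective_mtps: float) -> float:
    """Bandwidth in GB/s = bus width in bytes * effective transfer rate."""
    return (bus_width_bits / 8) * effective_mtps * 1e6 / 1e9

gtx_480 = bandwidth_gbs(384, 3696)   # ~177.4 GB/s at stock
gtx_580 = bandwidth_gbs(384, 4008)   # ~192.4 GB/s at stock

# Memory overclock a GTX 480 would need to match a stock GTX 580: ~8.4%
needed_oc = gtx_580 / gtx_480 - 1
print(f"GTX 480: {gtx_480:.1f} GB/s, GTX 580: {gtx_580:.1f} GB/s, "
      f"required 480 memory overclock: {needed_oc:.1%}")
```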

    Wonder when a GTX 590 will come... and wondering about the 6990 now too. Probably both at Easter, maybe.


    NVIDIA GeForce GTX 580 Review

     
    When the GTX 580 is directly compared to the GTX 480, it really is amazing to see what a few well thought out architectural tweaks can accomplish when teamed up with a flexible architecture. Make no mistake about it, an AVERAGE improvement of 18% over a high end card that was released about 7 months ago is no small feat and yet NVIDIA has done that and more. There were even several games where the GTX 580 displayed a 30% or higher increase in framerates (and yes, about 10% in others). Meanwhile, its highest increases seemed to be reserved for areas where it matters the most for enthusiast-branded products: high resolution, high image quality situations. This also highlights in sharp contrast one of the HD 5000 series’ failings: anti aliasing performance.


    Nearly a whole day of Newegg availability and only 2 brands sold out, hardly a paper launch...



  • Quizzical Member Legendary Posts: 25,353

    Originally posted by AmazingAvery

    Maybe Antilles is dual Barts after all and therefore Cayman (naming scheme now fits) hoping to be level with GTX580? Cayman @ 40% better than 5870 ain't gonna happen... Interesting times.

    No.  Antilles is dual Cayman.  If Antilles were dual Barts, AMD could have launched it by now, as Barts is pin compatible with Cypress.  That is, you can take a board built for Cypress, stick a Barts GPU in it instead of Cypress, and it will work.

    I don't see why Cayman being 40% faster than Cypress "ain't gonna happen".  If Cayman can't improve on performance per mm^2 over Barts at all, a die size of 400 mm^2 would still make it more than 40% faster than Cypress.  Similarly, if Cayman can't improve on performance per watt over Barts at all, a TDP of 240 W or so would make it more than 40% faster than Cypress.  That's in the ballpark of the rumored die size, and less than the rumored TDP.  I rather doubt that Cayman is going to be markedly worse than Barts on both performance per mm^2 and performance per watt.

    Neither 400 mm^2 nor 240 W is terribly implausible, when you consider that Nvidia just launched a GPU with a die size of 520 mm^2 and the GTX 480 had an honest TDP of around 300 W; Nvidia's claimed 250 W TDP was a lie.  And if Cayman offers considerable architecture improvements, neither may even be necessary for Cayman to crush GF110.  Also, consider that Cayman was initially designed for a 32 nm process node, and backing up to 40 nm can mean as much as 56% more die size, so AMD could end up with a large die even if they didn't mean to.
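
    (A rough sanity check of that scaling argument, as a minimal sketch. The Barts and Cypress figures below, roughly 255 mm^2 and 151 W for Barts at about 90% of Cypress's performance versus roughly 334 mm^2 and 188 W for Cypress, are assumptions based on commonly cited specs, not numbers from this thread.)

```python
# Back-of-envelope scaling behind the "Cayman > 1.4x Cypress" argument.
# All inputs are assumed approximations (see note above).
cypress = {"area_mm2": 334.0, "tdp_w": 188.0, "perf": 1.00}   # Radeon HD 5870
barts   = {"area_mm2": 255.0, "tdp_w": 151.0, "perf": 0.90}   # Radeon HD 6870, ~90% of a 5870

# Scale Barts-level efficiency up to the rumored Cayman budgets.
perf_at_400mm2 = barts["perf"] / barts["area_mm2"] * 400.0    # ~1.41x Cypress
perf_at_240w   = barts["perf"] / barts["tdp_w"] * 240.0       # ~1.43x Cypress

# Die-size penalty of retargeting a 32 nm design to 40 nm: area scales with the
# square of the linear feature size, (40/32)^2 ~= 1.56, i.e. ~56% more area.
node_area_penalty = (40.0 / 32.0) ** 2

print(f"{perf_at_400mm2:.2f}x, {perf_at_240w:.2f}x, node penalty {node_area_penalty:.2f}x")
```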

    "AMD is making purchasing decisions for the high end market with the delay of the 6990 to Q1 2011 (maybe even March 2011!) announced in their Financial Analyst Day."

    Can you link to a source on that?  I wouldn't be the least bit surprised if it were true, but I can't find a source.  All along, Antilles was set to launch either when AMD had spare Cayman chips that they wanted to get rid of, or when AMD felt that they had to, whichever came first.  The first was never likely to happen this year, and it sounds like now that they've seen the GTX 580, they're not feeling pressure on the second, either.

    The point of Antilles isn't to be some awesome card that people will want to buy.  It's a marketing gimmick, to claim that they have the fastest single card on the market.  It's kind of like the Radeon HD 5970 in that respect.  When the 5870 launched, AMD unquestionably had the fastest GPU on the market, but some people said the GTX 295 was still faster.  The GTX 295 was itself a marketing gimmick, of course, but AMD responded with their own in the 5970.  (Of course, the GTX 295 was in response to the Radeon HD 4870 X2, which was in response to the GeForce 9800 GX2, which was in response to the Radeon HD 3870 X2...)

    So presumably AMD believes that they still have the title of fastest single video card wrapped up without needing to paper launch the Radeon HD 6990 to reclaim it.  That could be either a belief that the Radeon HD 5970 is still faster than the GeForce GTX 580, or the knowledge that the Radeon HD 6970 is going to be faster than the GeForce GTX 580.  AMD surely knows whether the latter is true, even if we don't.

    "I can easily pass the 580's bandwidth with some mild overlocks to my 480 (and have been since release)."

    Well sure, a respin usually doesn't offer that big of a performance improvement, as all it can do is fix things that were broken in the original.  It's like noting that you can overclock a Core i7 965 to run faster than a Core i7 975 at stock speeds.  The latter is a base layer respin of the former (D0 stepping versus C0 stepping), which is the same as the relation between GF100 and GF110.  Base layer respins are pretty rare in video cards, as video card life cycles are usually short enough as to not have much of a point.  Indeed, there wouldn't have been one even for the disaster that was GF100, except that TSMC canceled their 32 nm process node, so the intended successor to GF100 couldn't be built.  But base layer respins happen all the time in processors, which have longer life cycles and no half nodes.

    "Wonder when an GTX 590 will come... and wondering about the 6990 now too. Probably both at Easter maybe."

    And what exactly is a GTX 590 supposed to be, apart from wishful thinking?  There isn't going to be a new process node until TSMC and Global Foundries get 28 nm HKMG up to speed; optimistically, we could be looking at next summer for real cards.  Their next architecture, Kepler, is going to be on 28 nm, as Nvidia has previously announced.  They just did a base layer respin, so even if they wanted to do another (a second base layer respin of a card already sold commercially might well be unprecedented in the whole history of video cards), that wouldn't be ready by Easter.  They've already had four metal spins, so there surely isn't much to gain there.  They can barely fit one GF110 on a card, so I don't see them putting two on there.  Even if they did, they've got no real hope of beating AMD on the maximum performance from two GPUs in a 300 W envelope, as GF110 trails well behind Cypress in performance per watt, let alone Barts.

    I guess they could introduce a new bin of factory overclocked GTX 580s and call it a GTX 590.  But that would offer nothing new over factory overclocked GTX 580s.  Maybe they will do that if Cayman barely beats a GTX 580, and try to restart the Radeon X800/GeForce 6800 binning wars and see whose press edition part is faster, but that doesn't end in real cards that are commercially available.  (e.g., GTX 580 < Radeon HD 6970 < GTX 590 < Radeon HD 6980 < GTX 595 < ... until one side can't find a hundred extra specially binned GPUs to beat the other side's press edition card)

    "Whole day nearly of newegg availability and only 2 brands sold out, hardly a paper launch..."

    It's not a paper launch, but we don't yet know if it's a soft launch or a hard launch.  Recall that the Radeon HD 5870 and 5850 were widely available for several days after launch, before flickering in and out of stock for the next two months.  Of course, that was because AMD couldn't get enough wafers from TSMC, as TSMC didn't have the needed production capacity.  Nvidia doesn't have that problem today--and, of course, there isn't the same demand for GF110-based cards as there was for Cypress-based cards.

    It's really a question of how soon Nvidia was confident enough that the GF110-A1 silicon was good enough for production launch, and ordered zillions of wafers of it.  There's good reason for that confidence once you've got silicon back and see that it works, but you can cut time to market in volume by a couple of months if you order a full production run when you send the design in and are only hoping it works.  The risk of that is, of course, that if the design is wrong and needs another spin, you just spent millions of dollars to buy wafers that get thrown in the garbage.  If Nvidia got GF110-A1 (which they could have called GF100-B1, but wanted to shed the stigma, like what Microsoft did in branding Windows NT 6.1 as "Windows 7" rather than a new version of Vista) back in August and promptly ordered a production run, then this is a hard launch.  If they taped it out in August and all they have are a relative handful of risk wafers for now, then they'll soon sell out until the real production run parts come back.  We'll find out soon enough, I guess.

  • AmazingAvery Age of Conan Advocate Member Uncommon Posts: 7,188

    Originally posted by Quizzical

    Originally posted by AmazingAvery

    Maybe Antilles is dual Barts after all and therefore Cayman (naming scheme now fits) hoping to be level with GTX580? Cayman @ 40% better than 5870 ain't gonna happen... Interesting times.

    No.  Antilles is dual Cayman.  If Antilles were dual Barts, AMD could have launched it by now, as Barts is pin compatible with Cypress.  That is, you can take a board built for Cypress, stick a Barts GPU in it instead of Cypress, and it will work.

    I don't see why Cayman being 40% faster than Cypress "ain't gonna happen".  If Cayman can't improve on performance per mm^2 over Barts at all, a die size of 400 mm^2 would still make it more than 40% faster than Cypress.  Similarly, if Cayman can't improve on performance per watt over Barts at all, a TDP of 240 W or so would make it more than 40% faster than Cypress.  That's in the ballpark of the rumored die size, and less than the rumored TDP.  I rather doubt that Cayman is going to be markedly worse than Barts on both performance per mm^2 and performance per watt.

    "AMD is making purchasing decisions for the high end market with the delay of the 6990 to Q1 2011 (maybe even March 2011!) announced in their Financial Analyst Day."

     

    ----------------------------------------------------

     

      

    Page 21: http://phx.corporate-ir.net/External.File?item=UGFyZW50SUQ9Njk3NTh8Q2hpbGRJRD0tMXxUeXBlPTM=&t=1

     

    official naming scheme.

    The question then is whether Antilles is dual XT or dual Pro. Probably Pro. The reason I said Barts was the large naming gap between the 5870 and 5970, compared with the 6990 sitting just above the 6970.

     

    Antilles is a dual-GPU single card; there is a distinction. A GTX 580 is simply pretty much on par with AMD's current dual-GPU single card.

     

     




  • Quizzical Member Legendary Posts: 25,353

    Thanks for the link.  The other interesting part of the same slide is that it confirms Cayman for this quarter, which presumably means it's on time.  I didn't find the rumors about a delay due to bad yields terribly credible, given that AMD demonstrably understands TSMC's 40 nm process well by now.  Cayman this quarter probably doesn't mean December, either, as OEMs hate it when you launch new cards right before Christmas, so that the card that was the reasonable buy when a Christmas present was purchased is no longer the reasonable choice by the time it is opened.

    Reviews mostly found the Radeon HD 5970 handily beating the GeForce GTX 580 in games where CrossFire scaled properly, and losing in games where it didn't.  It's not that the 580 can match 5970 performance; it's that the 580 works right and the 5970 sometimes doesn't.  Taking an average of that isn't terribly meaningful.  Regardless, it's a reason to prefer the GTX 580 to a 5970.  Not that there ever was much of a reason to prefer the 5970 to anything, as two 5870s in CrossFire would be much nicer.  As I said, the 5970 was a marketing gimmick.

    I expect Radeon HD 6970 performance to be fairly close to that of the GeForce GTX 580.  If I had to guess, I'd say the 6970 is faster, not slower, but wouldn't be surprised if I'm wrong about that.  I also expect AMD to price it quite a bit cheaper, as it will probably be a lot cheaper to build.  I'm not upgrading this generation, but even if I were interested in a GTX 580, I'd wait for Cayman to launch.  If AMD can offer the same performance for $400, Nvidia might well slash prices to match.

  • Quizzical Member Legendary Posts: 25,353

    Apparently Nvidia wasn't able to fix idle power consumption with two monitors attached in the GeForce GTX 580:

    http://www.legitreviews.com/article/1461/19/

    That's still around 100 W for just the video card, which is obscene.  The Radeon HD 5850 and 5870 have increased power consumption with two monitors attached, but that's still only 40-50 W at idle.  With a video card idle most of the time, that could easily add a few dozen dollars per year to your electricity bill as compared to other high end cards.
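
    (To put a rough number on that, a minimal sketch of the idle-power cost math. The electricity rate and idle hours are illustrative assumptions, not figures from the review.)

```python
# Rough yearly cost of ~50-60 W of extra idle draw (two-monitor GTX 580 vs. a 5850/5870).
# The rate and idle hours are illustrative assumptions.
extra_idle_watts = 55        # ~100 W vs. ~45 W
idle_hours_per_day = 12      # machine on and mostly idle half the day
rate_per_kwh = 0.11          # rough US electricity price, in dollars

extra_kwh_per_year = extra_idle_watts / 1000 * idle_hours_per_day * 365
extra_cost = extra_kwh_per_year * rate_per_kwh
print(f"{extra_kwh_per_year:.0f} kWh/year, about ${extra_cost:.0f}/year")   # ~241 kWh, ~$27
```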

  • Githern Member Posts: 79

    I'm good with what my Radeon 5770 pushes out. There is such a thing as overkill.

  • AmazingAvery Age of Conan Advocate Member Uncommon Posts: 7,188
  • eyeswideopen Member Posts: 2,414

    I'm just happy it came out so prices get dropped on the other cards. The GTX 460 is going for $129 now, so I may finally be able to get a good upgrade.

    -Letting Derek Smart work on your game is like letting Osama bin Laden work in the White House. Something will burn.-
    -And on the 8th day, man created God.-

  • wickedpt Member Posts: 45

    LOTRO is already using DX11... I don't see why it wouldn't use some kind of tessellation.

  • AmazingAvery Age of Conan Advocate Member Uncommon Posts: 7,188

    Anandtech has an SLI review up today too: http://www.anandtech.com/show/4012/nvidias-geforce-gtx-580-the-sli-update

    Their results also show that the GTX 580 is quieter than a 5870, idle temps are lower, and total system temp when running Crysis is just 2 degrees higher against the same card. Pretty sweet!



  • Quizzical Member Legendary Posts: 25,353

    The lower idle temperature and power consumption apply only if you have a single monitor attached.  With two monitors attached, it completely fails to underclock at all.  But yeah, other than that and the chip being really late, Nvidia fixed the problems with GF100.

    It's interesting that two GTX 580s in SLI meant more than four times the noise of one.  Two Radeon HD 5870s in CrossFire meant less than double the noise of one.  Two Radeon HD 6870s in CrossFire meant a little more than double the noise of one.  Two Radeon HD 6850s in CrossFire mysteriously meant 18 times the noise of one.  Two GeForce GTX 460s in SLI meant over 10 times the noise of one.  For those last two, it's largely because just one card was really quiet.  This makes me wonder about case airflow, internal/external exhaust, and card spacing.
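
    (Side note on the arithmetic: if those "times the noise" figures are sound-power ratios, they map to dBA differences by a simple log relation. The sketch below just inverts the ratios quoted above; it adds no new measurement data.)

```python
import math

# Sound power ratio <-> decibel difference: ratio = 10 ** (delta_db / 10).
def db_delta(power_ratio: float) -> float:
    return 10 * math.log10(power_ratio)

for ratio in (2, 4, 10, 18):
    print(f"{ratio}x the sound power corresponds to about +{db_delta(ratio):.1f} dB")
```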

    The lack of a stock voltage is weird.  If it's lower leakage parts getting higher stock voltages, it could make sense.  If it's whatever voltage it takes to reach a given level of performance, then power consumption could vary considerably from one card to the next.  Nvidia did the same thing with the GTX 460, too.  At the time, I took it as their effort at coping with bad yields by broadening what can meet their specs.

    The most interesting page of that is the third one.  7% more of most of the components plus Nvidia's claimed architecture tweaks lead to about an 8% improvement in performance on average.  Nvidia had claimed architecture improvements brought a big improvement by themselves, but that's belied by the performance numbers.  They probably fixed some glitch(es) that made some feature(s) not work so that it had to be disabled in GF100, but that's presumably about it.  Maybe the 16-bit floating point performance that they're talking about was broken in GF100.

    As someone on another forum pointed out, if it's really part of a new architecture, then why didn't GF110 gain the HDMI audio bitstreaming feature that all of the Fermi chips except for GF100 had?
