
Game on: Nvidia's new cards a couple months away

AmazingAvery, Age of Conan Advocate, Member Uncommon, Posts: 7,188

You can find the white paper from today's presentation here: www.hardocp.com/article/2009/09/30/nvidias_fermi_architecture_white_paper

Nvidia's strategy includes a high-end dual-GPU configuration that should ship around the same time as the high-end single-GPU model - a top-to-bottom launch.

The card itself doesn't look as big as ATI's offering, yet it packs 3 billion transistors!

 

More pictures and some stats here: Pictures

They mention that the Fermi GPU can access up to 12 GB of GDDR5 RAM with 4 Gbit chips and 6 GB with 2 Gbit chips. So far, only 1 Gbit chips exist. The memory bus is also 384 bits wide, so expect odd memory configurations again, like 768 MB, 1.5 GB, etc.
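As a rough sanity check on those figures (my own back-of-the-envelope sketch, not from the article), a 384-bit bus is twelve 32-bit GDDR5 channels, so capacities come in multiples of 12 chips, or 24 with two chips per channel:

#include <stdio.h>

int main(void) {
    const int channels = 384 / 32;                    /* 384-bit bus = 12 x 32-bit channels */
    const int chip_gbit[] = {1, 2, 4};                /* chip densities mentioned above */

    for (int i = 0; i < 3; ++i) {
        int mb_per_chip = chip_gbit[i] * 1024 / 8;    /* Gbit -> MB per chip */
        printf("%d Gbit chips: %5d MB with 12 chips, %6d MB with 24 chips\n",
               chip_gbit[i], channels * mb_per_chip, 2 * channels * mb_per_chip);
    }
    return 0;   /* prints 1536/3072, 3072/6144 and 6144/12288 MB */
}

That gives 1.5 GB with twelve 1 Gbit chips, while the 6 GB and 12 GB figures line up with twenty-four 2 Gbit or 4 Gbit chips.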

"For Fermi, Nvidia was clearly aiming to maximize double precision FP performance and embraced the additional required complexity. Each core can execute a DP fused multiply-add (FMA) warp in two cycles by using all the FPUs across both pipelines. Significantly, this is the only warp instruction that requires using both issue pipelines, which suggests certain implementation details. While Nvidia’s approach precludes issuing two warps, instead of a 8:1 ratio of SP:DP throughput (or 12:1 counting the apocryphal dual issue), the ratio for Fermi is 2:1. This is on par with Intel and AMD’s implementation of SSE and ahead of the 4:1 ratio for AMD’s graphics processors. Academic research suggests that the area and delay penalties for this style of multiple precision FPU are approximately 20% and 10% over a single DP FPU [2], but it is likely that Nvidia’s overhead is somewhat lower. The end result should be an order of magnitude increase in double precision performance at the same frequency – quite a leap forward for a single generation. "

"Software   :  www.realworldtech.com/page.cfm



All the advances in programmability are interesting, but fundamentally rely on Nvidia’s software team to unlock them for developers. 

The standard APIs are obvious candidates here – Fermi is designed for the major ones: DX11 and DirectCompute on Windows, OpenCL and OpenGL for the rest of the world. OpenCL 1.0 is relatively nascent, having been only recently finalized, and DX11 and DirectCompute are not yet out. While these are unquestionably the future for GPUs, OpenCL and DirectCompute lack many of the niceties that Nvidia offers with the proprietary CUDA environment and APIs.

CUDA is generally focused on providing language-level support for GPUs. This makes sense as it leverages some familiarity on the part of developers. But the reality is that the languages which CUDA supports are variants of the original languages with proprietary extensions and only a subset of the full facilities of the language. Currently, Nvidia has CUDAfied C and Fortran, and in the future with Fermi, they will have a version of C++. Nvidia's marketing is making ridiculous claims that they will eventually have Python and Java support, but the reality is that neither language can run natively on a GPU. An interpreted language such as Python would kill performance, and so what is likely meant is that Python and Java can call libraries which are written to take advantage of CUDA.
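(An illustrative aside, not part of the quoted article.) For anyone who hasn't seen what this "CUDAfied C" looks like, a minimal kernel plus its host-side launch is roughly this:

#include <cuda_runtime.h>

// The __global__ qualifier and the <<<grid, block>>> launch syntax are the
// kind of proprietary extensions to C being described above.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one array element per thread
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));               // device allocations
    cudaMalloc(&y, n * sizeof(float));               // (data initialization omitted)

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // launch a grid of 256-thread blocks
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}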

Despite being proprietary, the ecosystem that Nvidia is creating for CUDA developers is promising. While it’s not the rich ecosystem of x86, ARM or PPC, it is miles ahead of OpenCL or DirectCompute. Some of the tools include integration with Visual Studio and GDB, a visual profiler, improved performance monitoring with Fermi, and standard binary formats (ELF and DWARF). Nvidia also has their own set of libraries, which can now be augmented with 3rd party libraries that are called from the GPU.

Now that standards-based alternatives such as OpenCL exist, CUDA is likely to see slower uptake. Many customers have learned to avoid solutions from a single source - IBM, for example, required a second source for Intel's x86 chips in the original PC. But CUDA will retain strategic importance to Nvidia as a way to set the pace for OpenCL and DirectCompute.

Conclusions

Fermi's architecture is a clear move towards greater programmability and GPU-based computing. There is a laundry list of new features, all of which will enable Fermi, when it is released, to make greater inroads into the relatively high-margin HPC space. Some of the more notable changes include an updated programming and memory model, semi-coherent caches, improved double precision performance and better IEEE compliance. It's clear that Nvidia is making a multi-generation investment to push GPU computing in the high end, although we will have to wait until products arrive to determine the reception.

Since there are no details on products, many key performance aspects are unknown. Frequency is likely in the same range (+/-30%) as GT200, and the GDDR5 will probably run between 3.6-4.0GT/s, but power and cooling are unknown and could be anywhere from 150-300W. The bandwidth and capacity for a DDR3-based solution are also unknown. So from a performance standpoint, it's very hard to make any meaningful comparison to AMD's GPU, which is actually shipping. The shipping dates for graphics and compute products based on Fermi are also unclear, but late Q4 seems to be the earliest possible with low volumes, while actual volume won't occur until 2010 - so evaluating performance will have to wait until then.

Perhaps the most significant demonstration of Nvidia's commitment to compute is the fact that a great deal of the new features are not particularly beneficial for graphics. Double precision is not terribly important for graphics, and while cleaning up the programming model is attractive, it's hardly required. The real question is whether Nvidia has strayed too far from the path of graphics, which again depends on observing and benchmarking real products throughout AMD's, Nvidia's and Intel's line-ups; but it seems like the risk is there, particularly with AMD's graphics focus.

These are all important questions to ponder in the coming weeks, and they really feed into the ultimate technical question - the fate of CPU and GPU convergence. Will the GPU be sidelined for just graphics, or will it be like the floating-point coprocessor, an essential element of any system? Will it get integrated on-die, and to what extent will the discrete market remain? These are all hard to predict, but it's clear that Nvidia is doubling down on the GPU as an integral element of the PC ecosystem for graphics and compute, and time will tell the rest."



 

"With these requests in mind, the Fermi team designed a processor that greatly increases raw

compute horsepower, and through architectural innovations, also offers dramatically increased

programmability and compute efficiency. The key architectural highlights of Fermi are":

• Third Generation Streaming Multiprocessor (SM)

o 32 CUDA cores per SM, 4x over GT200

o 8x the peak double precision floating point performance over GT200

o Dual Warp Scheduler that schedules and dispatches two warps of 32 threads per clock

o 64 KB of RAM with a configurable partitioning of shared memory and L1 cache (see the sketch after this list)

• Second Generation Parallel Thread Execution ISA

o Unified Address Space with Full C++ Support

o Optimized for OpenCL and DirectCompute

o Full IEEE 754-2008 32-bit and 64-bit precision

o Full 32-bit integer path with 64-bit extensions

o Memory access instructions to support transition to 64-bit addressing

o Improved Performance through Predication

• Improved Memory Subsystem

o NVIDIA Parallel DataCache™ hierarchy with Configurable L1 and Unified L2 Caches

o First GPU with ECC memory support

o Greatly improved atomic memory operation performance

• NVIDIA GigaThread™ Engine

o 10x faster application context switching

o Concurrent kernel execution

o Out of Order thread block execution

o Dual overlapped memory transfer engines
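Two of those bullets - the configurable shared memory / L1 split and concurrent kernel execution - are exposed through the CUDA runtime. A hedged sketch of how I would expect them to be used on Fermi-class hardware (cudaFuncSetCacheConfig and streams are real runtime calls; the kernels here are just placeholders):

#include <cuda_runtime.h>

__global__ void stage1(float *d) { /* placeholder kernel */ }
__global__ void stage2(float *d) { /* placeholder kernel */ }

int main() {
    float *buf;
    cudaMalloc(&buf, 1024 * sizeof(float));

    // Configurable 64 KB split per SM: prefer 48 KB shared / 16 KB L1 for
    // stage1, and the opposite split for stage2.
    cudaFuncSetCacheConfig(stage1, cudaFuncCachePreferShared);
    cudaFuncSetCacheConfig(stage2, cudaFuncCachePreferL1);

    // Concurrent kernel execution: independent kernels issued on different
    // streams are allowed to overlap on Fermi-class GPUs.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);
    stage1<<<16, 256, 0, s1>>>(buf);
    stage2<<<16, 256, 0, s2>>>(buf);
    cudaDeviceSynchronize();

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(buf);
    return 0;
}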

 

Glad I'm still holding off on that ATI 58XX purchase! (Not that ATI is bad or anything.)



Comments

  • neoterrar, Member, Posts: 512

    tech speak, jumble mumble, goobie goo.

    One day video card releases might give me pause. Nah, probably not.

    If all it ends up meaning is higher framerates and resolutions, it's a snore.

     

  • AmazingAvery, Age of Conan Advocate, Member Uncommon, Posts: 7,188

    AnandTech has some conclusions based on today's released info here:  anandtech.com/video/showdoc.aspx

    Interesting read.



  • Quizzical, Member Legendary, Posts: 25,355

    So basically, the point of the architecture is that Nvidia is adding a zillion things that are irrelevant to >99% of the people who are willing to pay at least $100 for a video card.  And this is supposed to cast them in a flattering light?

    Certainly, for the sort of institutions that buy super computers, Fermi is awesome news.  But the real question for people trying to decide between getting a Radeon HD 5870 now or waiting for Fermi is, how well will it play games?  And Nvidia doesn't want to talk about that.  You don't downplay gaming performance if your cards blow away the competition at it.

    Also, your articles contradict your thread title.  From the first page of the AnandTech one:

    "Then timing is just as valid, because while Fermi currently exists on paper, it's not a product yet. Fermi is late. Clock speeds, configurations and price points have yet to be finalized. NVIDIA just recently got working chips back and it's going to be at least two months before I see the first samples. Widespread availability won't be until at least Q1 2010."

    If everything goes right for Nvidia from here out, you'll be able to pick up a Fermi early next year.  And if Murphy's Law kicks in, the wait could be rather far into next year.  That's a lot more than "a couple months away".  Well, unless you were really talking about the low end cards in the GT 200 line that are likely to release soon, without DirectX 11 and the various other next generation goodies that ATI already has for sale.

    Certainly, the timing doesn't matter to someone who isn't going to buy a new card until a year or two from now.  But someone expecting to have the opportunity to buy a Fermi at the end of November is going to be disappointed.

  • AmazingAvery, Age of Conan Advocate, Member Uncommon, Posts: 7,188

    Quite a lot of places figure it is a couple of months away. End of Q4 or start of Q1 '10. As there is no full-on DX11 game available yet (the BattleForge "patch update" notwithstanding), what is the rush to get a 5870? They are not exactly priced for the mainstream gamer, but it is the only DX11 card out and it does offer the fastest single-GPU solution at the moment. Nvidia publicly stated that Fermi will beat a 5870 - I think that speaks for itself as to whether it will do games well. The info released shows the specs on paper will back that claim up.

    For anyone in the market for a high-end card, such as myself, the most reasonable and logical thing to do is to wait for them to come out, as it really is not a long way away - especially considering other DX11 games have been pushed back until next year too.

    Simply put I bet the new Nvidia card will trounce the 5870, and then ATI will rush out the 5890 which will come close. And then comes 5870x2 back to ATI (but dual card) and then the dual card for Nvidia. In the meantime there is a strong suggestion from Nvidia - from the webcast of today's event on their website and also from several other media reports - "to be surprised at the price these will come in at". That remains to be seen.

    Today was proof the card exists.

    Right now, games that are being put out typically have settings to take advantage of these new cards. And if you're a gamer, you will typically want the manufacturer of your GPU, whoever that may be, to help give you access to all these advanced features.

    When I read things such as Nvidia working with companies to make this happen, and ATI construing this extra help as a ploy to not allow simple things like natively enabling and adjusting AA in settings, coupled with false accusations, it makes me think twice about which company serves my needs best in the games I like to play. It is well known that Nvidia makes an effort to work closely with companies to achieve these goals. I can point to FC working closely with Nvidia for testing, and even some Nvidia guys spending good time with the coders there. Of course money changes hands in these situations - ATI put $6 million in for Dirt 2 (I suspect that was out of desperation to get a DX11 title though), and then there are endorsements on top from either company.

    As for the forthcoming new 40nm DX10 cards from Nvidia coming soon, I did not mean those, but then again, when they release, ATI most likely won't have an answer until the new year with their entry-level DX11 cards for this price segment.

    It is all tick / tock. We are still on the tick, waiting for Nvidia to put its new stuff out at this particular level in the high end. I think the announcement was all about using the GPU as a cGPU. It was to show everyone that there is much more to the GPU than just rendering pretty pictures. The white paper shows this and the presentation showed the benefits. More things about this on their website: www.nvidia.com/object/fermi_architecture.html#experts



  • Cleffy, Member Rare, Posts: 6,412

    I actually don't find the architecture update too impressive.  Most of it adds what ATI added several generations ago.  It has a lot higher gains just because of the tech behind it.  However, it's been nVidia themselves that have kept the tech's potential down.  It will be interesting to see how these new cards stack up, as nVidia has been buying off developers not to utilize it in the past.  Now that there are no more hindrances on ATI's technology, I think we could see a significant markup in performance for ATI 3xxx, 4xxx, and 5xxx cards.

  • Loke666, Member Epic, Posts: 21,441

    I understand that people buy the fastest card now because there will always be something better around the corner. If you just wait you can wait for years because something better is bound to come out soon.

    Very interesting reading though, Avery (I like tech mumbo-jumbo). Thanks for posting it.

    I think my GTX 295 will have to do for a while anyways. Well, 6 months at least or until I find a game I can't max out :)

    And the reason I bought the 295 was that it performed best; I didn't care what it cost me. When I get a new card next spring I will use the same criteria (unless my work fires me, or something else hurts my financial situation, of course).

  • Loke666, Member Epic, Posts: 21,441
    Originally posted by Cleffy


    I actually don't find the architecture update too impressive.  Most of it adds what ATI added several generations ago.  It has a lot higher gains just because of the tech behind it.  However, it's been nVidia themselves that have kept the tech's potential down.  It will be interesting to see how these new cards stack up, as nVidia has been buying off developers not to utilize it in the past.  Now that there are no more hindrances on ATI's technology, I think we could see a significant markup in performance for ATI 3xxx, 4xxx, and 5xxx cards.

    And still, Nvidia's last gen won over ATI's last gen.  www.tomshardware.com/charts/gaming-graphics-cards-charts-2009-high-quality/Sum-of-FPS-Benchmarks-1920x1200,1473.html

  • decoy26517, Member, Posts: 313

    MONTHS?

    But we need them now! So that we can play all of these... DX11 games that are coming out... soon.

    "World of Warcraft is the perfect implementation of this genre." - Hilmar Petursson. CEO of CCP.

  • Cleffy, Member Rare, Posts: 6,412
    Originally posted by Loke666

    Originally posted by Cleffy


    I actually don't find the architecture update too impressive.  Most of it adds what ATI added several generations ago.  It has a lot higher gains just because of the tech behind it.  However, it's been nVidia themselves that have kept the tech's potential down.  It will be interesting to see how these new cards stack up, as nVidia has been buying off developers not to utilize it in the past.  Now that there are no more hindrances on ATI's technology, I think we could see a significant markup in performance for ATI 3xxx, 4xxx, and 5xxx cards.

    And still, Nvidia's last gen won over ATI's last gen.  www.tomshardware.com/charts/gaming-graphics-cards-charts-2009-high-quality/Sum-of-FPS-Benchmarks-1920x1200,1473.html



     

    I'm commenting on the negative development nVidia has been doing since the 8xxx series cards.  Since 2006 nVidia hasn't really advanced much technologically, and they have been paying off developers to not make cards run well with new tech.  It's going to take a little while for my prediction to come true.

    The main reason I say this is because of how an nVidia processing unit works versus how an ATI processing unit works.  An nVidia unit does one calculation at a time with one piece of information, like vertex(n).X.  On the other hand, ATI's processes four similar pieces of information at the same time, like vertex(n).X, vertex(n).Y, vertex(n).Z.  This means it has a theoretical performance gain of 300% when calculating common graphics information like XYZ and RGB values.  CAD programs currently make use of this to calculate a much higher number of polys.  This tech has been on AMD cards for generations.  However, game developers have not made use of it, conforming instead to nVidia's one-piece-of-information-at-a-time method.  The next nVidia card will also process multiple items in calculations similar to an ATI card, and will come with double the number of processing units - thus the quoted 8 times more processing power.  However, if they are no longer holding back developers from using hundreds of multi-processing units, then you will see a massive performance increase on ATI cards that have been held back for years.
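    A toy sketch of the distinction Cleffy is describing (my own illustration, not vendor code, and not how either GPU actually schedules work) - the "scalar" style touches one component per operation, while the packed style treats a vertex's x, y, z, w as a single item:

    #include <cuda_runtime.h>

    // Scalar style: one float per operation.
    __global__ void scale_scalar(float *c, float s, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] *= s;
    }

    // Packed style: all four components of a vertex handled together; on a
    // 4-wide unit this is where the theoretical gain Cleffy mentions would come from.
    __global__ void scale_vec4(float4 *v, float s, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float4 p = v[i];
            p.x *= s; p.y *= s; p.z *= s; p.w *= s;
            v[i] = p;
        }
    }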

    It's like if nVidia makes a DX11 part.  When developers start to adopt DX11, old ATI DX10.1 parts will see even better visuals and better performance than old DX10 parts, since DX11 is backwards compatible with DX10+.

  • Quizzical, Member Legendary, Posts: 25,355

    "Quite a lot of places figure it is a couple of months away. "

    Quite a lot of places figured that the Radeon HD 5000 series would be out in July 2009 or thereabouts.  You can find them easily with a Google search.  The time gap between ATI demonstrating a working DirectX 11 card and retail launch of DirectX 11 cards was about four months.  Nvidia can't even do the former yet.

    "They are not exactly priced for the mainstream gamer but it is the only dx11 card out and it does offer the fastest soloution for single GPU at the moment."

    Juniper is due out in Q4 2009, and rumored to be out in about two weeks or so.  That should be somewhere in the $100-$200 range.  At the lower end, Redwood and Cedar are due out in Q1 2010.  For comparison, Nvidia still doesn't have low end cards out for their GT 200 line, let alone GT 300.

    "Nvidia publically stated that Fermi will beat a 5870 - I think that speaks for itself in wondering if it will or won't do games good."

    They also stated that a GeForce GTS 250 beats a Radeon HD 5870 because it can do PhysX.  Meanwhile, Intel says that Larrabee will obliterate the competition when it releases.  It's easy to spin when you don't have to back it up.

    "Simply put I bet the new Nvidia card will trounce the 5870, and then ATI will rush out the 5890 which will come close. And then comes 5870x2 back to ATI (but dual card) and then the dual card for Nvidia."

    That's got the chronology all wrong.  ATI has Hemlock coming out in Q4 2009, possibly within a month or so.  That should easily beat Fermi to market.  Meanwhile, about the only way that ATI could release a 5890 anytime soon is if they decide to brand high binned Cypress chips as that, or the 6-monitor variant (Trillian), which hasn't been given a release date (or quarter) by ATI, or perhaps if they have some other chip that they haven't told anyone about, which is unlikely.

    "Today was proof the card exists."

    I don't think that was ever in doubt.  I don't think there's any real doubt that the projects to build the Radeon HD 6000 or 7000 series exist, either, nor what might end up being called the GT 400 or 500.  As proof goes, though, this was somewhat shy of Intel's demonstration of Larrabee, which actually had it run something instead of just holding it up and saying, see, here it is.

    "When I read things such as Nvidia working with companies to make this happen and ATI confusing this extra help as a ploy to not allow simple things like natively enabling and adjusting AA in setting coupled with false accusations it makes me think twice about looking at which company serves my needs best in the games I like to play."

    The accusation in question isn't that Batman: Arkham Asylum is optimized mainly for Nvidia cards.  That's fair game.  It's not even about the game using PhysX, which ATI cards can't run at all.  That's still fair game.  The objection is that the game checks to see if the card is made by ATI or Nvidia, and if it's ATI, the game disables anti-aliasing, even though it runs just fine on an ATI card except for the software check to artificially disable it.  That's sabotage.

    It's kind of like a web site checking your browser, and if it sees that you're using Firefox, only showing a blank page--even if Firefox could render the script that it sends any other browser just fine.
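    In code terms, the objection is to something like this (an illustrative sketch only, not the game's actual source):

    #include <stdbool.h>
    #include <string.h>

    /* AA gets switched off purely on the vendor string, not on whether the
       hardware can actually do it - that is the "sabotage" being described. */
    static bool allow_ingame_aa(const char *gpu_vendor) {
        if (strcmp(gpu_vendor, "ATI") == 0)
            return false;      /* artificially disabled on ATI cards */
        return true;           /* enabled for everyone else */
    }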

    "As for the forthcoming new 40nm dx10 cards from nvidia coming soon I did not mean that, but then again when they release ATI won't have an answer until the new year again most likely with their dx11 price market entry levels for this area."

    To the contrary, ATI's answer to those cards is already on the market, and has been for a long time.  Nvidia is just now trying to catch up.  One review found that the Radeon HD 4350 (released in 2008) gives better performance than the GeForce G 210 while using less power, in spite of being built on a process a full node larger.  They're both DirectX 10.1, so Nvidia can't claim any advantages there.  The GeForce GT 240, should it ever actually release, is basically Nvidia's answer to the Radeon HD 4770 that released in April.  They're both 40 nm, DirectX 10.1, and probably fairly comparable in performance.  It's quite plausible that ATI could even pull the Radeon HD 4770 off the market for being obsolete and to make room for Juniper before its Nvidia equivalent arrives at retail.  The GeForce GT 220 might perform somewhere around the Radeon HD 4650 or 4670 (also released in 2008), and could have lower power consumption, though the fact that the GeForce G 210 failed to manage that against the Radeon HD 4350 makes it far from automatic.

    -----

    As for why I care about when it releases, I want to get a new computer this month.  Upgrading my old one isn't a viable option, as my motherboard doesn't have a PCI Express slot.  Lynnfield is out, Windows 7 is coming soon, solid state drives are more or less ready, and I don't want to hold off on building the entire computer to wait several months for an Nvidia card that, at best, will be slightly better performance at the price I'm willing to pay, and might actually be worse.  If Fermi does dramatically outperform the Radeon HD 5850 that I want to buy, it will probably also cost a lot more, so I wouldn't buy Fermi anyway.

  • Quizzical, Member Legendary, Posts: 25,355

    I'll at least grant that Nvidia really believes in CUDA and PhysX, and that they're not just some stopgap marketing talking points until they can get out real competitors to ATI's Radeon HD 5000 series.  If it was just marketing hype, Fermi wouldn't be so heavily optimized for them, even at the expense of gaming performance.

    http://www.xbitlabs.com/news/video/display/20090930223729_First_Images_of_Nvidia_GeForce_Fermi_Show_Up.html

    Six pin and eight pin power connectors?  That's the same setup as the Radeon HD 5870 X2 and the GeForce GTX 295.  The top single GPU cards from the previous generation didn't require that much power.

    And only one monitor connector?  Note that it needs not just a full slot for an exhaust vent, but most of a second slot, too.

    I realize that it's just a mock up of a card, and not an actual working card.  I also realize that it's a Tesla, rather than a gaming card.  Still, that's more than the previous Tesla needed.  Perhaps most importantly, they can't actually know what the chip clock speeds will be, let alone how much power it will take, until they actually have working parts.  But they seem to be estimating that it will take quite a lot.

     

  • dfan, Member, Posts: 362

    There's a strong possibility that the presented card is a fake. Reasons:

    1: The circuit board is sawn off at the back. The PCIe power connectors and stickers are cut in half.

    2: There are no solder points or attachment clips for the 8-pin PCIe connector at all.

    3: Same thing for the 6-pin connector.

    4: The SLI bridge doesn't fit.

    5: Another DVI connector?

    6: The cooler is attached only by the backplate, not through holes around the chip as usual.

    7: Empty cooler connector hole.

    8: The upper holes for the air ducts are blocked.

    My guess is that the card is still very far from release, and the pressure caused by ATI's 5k series made Nvidia try to show something concrete, hoping people would not notice.

  • noquarter, Member, Posts: 1,170


    Originally posted by dfan
    There's a strong possibility that the presented card is a fake. Reasons:
    1: The circuit board is sawn off at the back. The PCIe power connectors and stickers are cut in half.
    2: There are no solder points or attachment clips for the 8-pin PCIe connector at all.
    3: Same thing for the 6-pin connector.
    4: The SLI bridge doesn't fit.
    5: Another DVI connector?
    6: The cooler is attached only by the backplate, not through holes around the chip as usual.
    7: Empty cooler connector hole.
    8: The upper holes for the air ducts are blocked.

    My guess is that the card is still very far from release, and the pressure caused by ATI's 5k series made Nvidia try to show something concrete, hoping people would not notice.

    Interesting, I actually thought something looked strange about the card when I saw it that made it feel like a mock-up... is there a full write-up on this someplace?

  • dfan, Member, Posts: 362
    Originally posted by noquarter


     

    Originally posted by dfan

    There's a strong possibility that the presented card is a fake. Reasons:

    1: The circuit board is sawn off at the back. The PCIe power connectors and stickers are cut in half.

    2: There are no solder points or attachment clips for the 8-pin PCIe connector at all.

    3: Same thing for the 6-pin connector.

    4: The SLI bridge doesn't fit.

    5: Another DVI connector?

    6: The cooler is attached only by the backplate, not through holes around the chip as usual.

    7: Empty cooler connector hole.

    8: The upper holes for the air ducts are blocked.

    My guess is that the card is still very far from release, and the pressure caused by ATI's 5k series made Nvidia try to show something concrete, hoping people would not notice.

     

    Interesting, I actually thought something looked strange about the card when I saw it that made it feel like a mock-up... is there a full write-up on this someplace?

    Haven't seen one; I gathered the info from a Finnish hardware site.

  • Quizzical, Member Legendary, Posts: 25,355
    Originally posted by noquarter


    Interesting, I actually thought something looked strange about the card when I saw it that made it feel like a mock-up... is there a full write-up on this someplace?



     

    Here's a rather definitive demolition of the card:

    http://www.semiaccurate.com/2009/10/01/nvidia-fakes-fermi-boards-gtc/

    Apparently Nvidia is claiming it's real.

     

  • AmazingAvery, Age of Conan Advocate, Member Uncommon, Posts: 7,188

    The one shown was a Tesla card mock-up and not GeForce - however, it should ship in 2009. Fermi itself is slightly smaller than GT200. All the demos at the event were run on Fermi in real time.

    The GeForce general manager confirms that it will be faster than the 5870.

     



  • Quizzical, Member Legendary, Posts: 25,355

    If some PR flack says it, it must be true?  Let's not forget that Intel claims that Larrabee will blow away the competition, too.

    Of course, what Nvidia may have meant is, Fermi beats anything ATI has in PhysX.  After seeing Nvidia's "power of 3" promo, I half expect them to come out with benchmarks showing that a Fermi paired with a Core i7 processor beats a Radeon HD 5870 paired with a Phenom II at games that are processor-bound.  That wouldn't be so different from their trumpeted announcement that two GeForce GTX 280Ms in SLI beat two Mobility Radeon HD 4870s if you disable one of them.

  • AmazingAvery, Age of Conan Advocate, Member Uncommon, Posts: 7,188
    Originally posted by Quizzical


    Of course, what Nvidia may have meant is, Fermi beats anything ATI has in PhysX.  



     

    Well actually, no - I have read in several places now different Nvidia guys saying the same thing. Obviously they have had a look at the 5870 and are confident, and with the recent specifications out, I don't blame them.

    More like Fermi's opposing card at the 5870's level simply beats it, and we shall see the price it comes in at soon enough.



  • Quizzical, Member Legendary, Posts: 25,355

    Well of course they're going to say they think they can make something faster.  Suppose that they saw the ATI card and said, whoa, that's faster than what we were hoping our card would be.  What do you think they would say publicly?  Oh no, we're doomed?  Of course not.  They're going to say that their card will still be better.  Therefore, that they say their card will be better is meaningless.

    I do think that at the high end, the top Fermi card will be a faster card than the Radeon HD 5870.  I also think it will draw a lot more power and cost a lot more, to the extent that the ATI cards will be a better value at just about every price point under $500, and some above that, too.  The reasons for better performance, more power, and a higher price tag are all the same:  die size.  If you have 40% more transistors than your competitor, it doesn't take miraculous feats of engineering to produce a faster card.

  • AmazingAvery, Age of Conan Advocate, Member Uncommon, Posts: 7,188

    "Nvidia plans three Fermi cards at launch

    High end, dual and performance

    We learned that towards the end of 2009 Nvidia plans to launch three Fermi architecture based cards. The one that Nvidia talked about is the high end single chip card, and this one was said to have 16 clusters and a total of 512 shader cores. Let's call it Fermi GTX.

    The dual card will naturally have two of these cores and likely a bit less than 2x512 shader cores simply as it has to operate with a TDP of around 300W. Heat is a big enemy of semiconductors and once you get up to 3.1 billion transistors even with small 40nm gates, you still end up with a lot of heat. If we could come up with a codename for better understanding, it would be Fermi GX2.

    The third card, something that we can call Fermi GT, will be a performance card, and it could be the new 8800GT. Nvidia expects a lot of sales from this card and this one would go up against Radeon 5850.

    This should be it for 2009 and in the first three months of 2010 we would expect the remaining members of Nvidia's Fermi DirectX 11 generation, including entry level and mainstream cards. Notebook stuff based on Fermi comes much later in 2010. "

    December is not far away, and I'm very interested in seeing the pricing here. I think the HPC dual one will be in the $600 area, the high-end single around $400, and then they'll scale down just in line with ATI's. There was a 6-month gap between Nvidia and ATI launching their DX10 cards, with Nvidia ahead for that 6-month period; they are not as far behind with DX11 as some people wish to think, and there is reasoning behind it. When the R600 launched late it wasn't that great, and everything leading up to this period's / generation's release is built upon previous ones.

    Despite doubling the core GPU essentials, the 5870 doesn't get you double the bandwidth of the 4800 series, which is very disappointing: about 140 GB/s for the 5870, with the single-GPU Fermi solution proposed at 225-240 GB/s based on the tech info released so far. A 4870 was 115 GB/s, so not much of a jump there.

    Looking at Fermi again, it would be roughly double the real-life bandwidth, and most likely a significant amount over the 5800's at all levels. The 8800GT sold better and was more popular at its level compared to ATI's competition at that price point.

    Here in Canada a 5850 is $300 and a 5870 is $425. On the other hand, a 4870x2 was well over $600 here for a long time (I bought one). I am expecting the dual high-end cards to be priced the same from both companies - scary when you look back at G80 prices.

    I think the previous objections - "we don't know when Fermi will release" and "ATI has better price to performance" - are just going to get paired up against Nvidia's new offering, and it is more than reasonable to wait and see what is dealt out at what price. As for power, some other info released today looks good for Nvidia: brightsideofnews.com/news/2009/10/1/nvidia-gt300-real-power-consumption-revealed-not-300w!.aspx

    If it turns out Nvidia's cards are $20-$50 more expensive I'd rather make that investment for a longer lifetime of the card.



  • Quizzical, Member Legendary, Posts: 25,355

    There's a lot of room that could qualify as "not 300 W" while still being a lot more power than a Radeon HD 5870.  BSN* in your link is estimating about 240 W, and yeah, I'd say that's a lot more than 188 W.  It doesn't talk about idle power, either, and the 27 W at idle for the Radeon HD 5870 is quite impressive.  The Radeon HD 4870 was 90 W at idle, though, and that was really pathetic.

    Yes, Fermi will almost surely have more memory bandwidth than Cypress.   You note that the Radeon HD 5870 didn't double the memory bandwidth of the Radeon HD 4870, but neither does Fermi come anywhere near doubling the memory bandwidth of the GeForce GTX 285.

      The question is how much that will matter, and AMD seems to be betting, not much.  Recall that the Radeon HD 5850 did more than double the memory bandwidth of the Radeon HD 4850.  It's not that the 5850 has huge amounts of memory bandwidth; it's that the 4850 didn't have much for a performance card.  The Radeon HD 4870 had nearly double the memory bandwidth of the Radeon HD 4850, with the same GPU chip clocked 20% faster--and tended to perform about 20%-30% better with the same amount of memory (both came in 512 MB and 1 GB versions).  See, for example, this review, which while nominally of some particular card, is chosen mainly because it gives data on the 512 MB 4850, 512 MB 4870, and 1 GB 4870 all at stock speeds.  The 1 GB 4850 overclocks the shaders slightly (about 4%), but actually underclocks the memory by more than 5%, which should really hurt the card if it's limited by memory bandwidth:

    http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/16854-gigabyte-radeon-hd-4850-1gb-passive-review.html

    So was the Radeon HD 4850 hurt by a lack of memory bandwidth?  Yes, a little.  But not a lot.  What stands out a lot more is the cases where 512 MB of memory just isn't enough, regardless of bandwidth.  Will the Radeon HD 5850 be hurt by lack of memory bandwidth, with more than twice as much as the 4850 had?  Probably not much.  And the Radeon HD 5870 has 20% more memory bandwidth than the 5850, even.
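    For reference, the arithmetic behind those bandwidth comparisons is just bus width times effective data rate (a quick sketch; the card specs are the published numbers as I recall them, and the Fermi line is only an estimate from the 384-bit / 3.6-4.0 GT/s speculation quoted earlier):

    #include <stdio.h>

    /* bandwidth in GB/s = (bus width in bits / 8) * effective data rate in GT/s */
    static double bw(int bus_bits, double gt_per_s) {
        return bus_bits / 8.0 * gt_per_s;
    }

    int main(void) {
        printf("HD 4850   : %.1f GB/s\n", bw(256, 1.986));   /* ~64 GB/s, GDDR3 */
        printf("HD 4870   : %.1f GB/s\n", bw(256, 3.6));     /* ~115 GB/s, GDDR5 */
        printf("HD 5850   : %.1f GB/s\n", bw(256, 4.0));     /* ~128 GB/s */
        printf("HD 5870   : %.1f GB/s\n", bw(256, 4.8));     /* ~154 GB/s */
        printf("GTX 285   : %.1f GB/s\n", bw(512, 2.484));   /* ~159 GB/s, GDDR3 */
        printf("Fermi est.: %.0f-%.0f GB/s\n", bw(384, 3.6), bw(384, 4.0));
        return 0;
    }

    Which is the point being made: the 5850 slightly more than doubles the 4850, while even a 384-bit GDDR5 Fermi would land well short of doubling the GTX 285.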

    If you're not looking to buy a new video card until well into next year, then sure, wait for Fermi to see how it is, or at least wait to see if release looks imminent when you're finally looking to upgrade.  But expecting the card to be as widely available in December as the Radeon HD 5870 is right now is improbable.  A paper launch in December is plausible, but a paper launch doesn't do you any good if you want to actually buy one.  If Fermi were only two months away from a hard launch, would Nvidia have really given a presentation with a fake card that they tried to pass off as real?

    -----

    Sure, the GeForce 8800 GT sold a lot better than the Radeon HD 2900 XT.  That's because it was a much better card, and that was back when ATI and Nvidia were both pursuing the strategy of making really expensive cards with huge dies.  The Radeon HD 2900 XT would have been a good value at half the price of the GeForce 8800 GT, but it cost far too much to make, so ATI couldn't slash prices like that even if so inclined.

    If Cypress has only 70% of the die size of Fermi, it probably costs less than 70% as much to build.  If Fermi beats the Radeon HD 5870 performance by 20% and Nvidia tries to sell it for $500, ATI would still offer better performance per dollar.  If Nvidia tries to sell it for $400, AMD could slash prices to $300 and still offer better performance per dollar.  AMD's strategy is to make cheaper cards so they can slash prices to compete if they have to.  Nvidia doesn't have that option with an enormous die size, just as ATI didn't in the days of the Radeon HD 2000 series.

    In order to offer better performance per dollar of chip cost, Fermi has to not just beat Cypress, but beat it by at least about 50%.  While that could happen, it's likely that it won't.  The cost to build the chip matters, too, as it eventually works its way into prices at retail.  Amazon is currently offering to sell a Radeon HD 2900 XT for $382, even though newer ATI cards that give far better performance are much cheaper.

    Look at how it worked in the previous generation of cards:  ATI didn't have anything to compete with the GeForce GTX 285, but they did for the rest of Nvidia's single-GPU lineup (excluding the GeForce GTX 280, which is basically the same performance as the 285).  And at every other price point, ATI's card was cheaper to build, so they could charge less for the same performance.  A Radeon HD 4850 is cheaper than a GeForce GTS 250.  A Radeon HD 4870 is cheaper than a GeForce GTX 260.  A Radeon HD 4890 is cheaper than a GeForce GTX 275.  Nvidia can't cut prices and still make a profit, so they have to concede the performance per dollar comparison.

    http://www.tomshardware.com/reviews/best-graphics-card,2404.html

    Nine recommendations for ATI cards as "best value for the money" at various price points, and one for an Nvidia card.  And that's before the Radeon HD 5000 series hit.  That's not just some AMD fan site, either; go back in their archives several months and there are about as many recommendations for each company.  What happened?  AMD could and did cut prices, and Nvidia couldn't cut them as far.

    Now, best performance for the money only matters to people who care about money, and not purely best performance, period.  AMD has chosen to basically concede that market to Nvidia, unless Nvidia's cards end up being a complete disaster, as hardly anyone is willing to pay $600 for a gaming card.  Well, for the moment, Nvidia's high end is something of a disaster, but that's only because it's late, so it probably won't last more than several months.  We'll see if it turns into a problem of being both late and slow, as the Radeon HD 2900 XT did.
