Nvidia cannot handle 40nm

frozenvoid Member Posts: 40

www.semiaccurate.com/2009/09/15/nvidia-gt300-yeilds-under-2/



How many worked out of the (4 x 104) 416 candidates? Try 7. Yes, Northwood was hopelessly optimistic - Nvidia got only 7 chips back. Let me repeat that, out of 416 tries, it got 7 'good' chips back from the fab.

 

It's over, Nvidia is finished!
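For reference, here is the yield arithmetic behind the quoted figures, as a minimal sketch (using only the numbers in the quote: 4 wafers of 104 candidate dies each, 7 good chips back):

```python
# Yield arithmetic implied by the quoted figures:
# 4 wafers x 104 candidate dies, 7 good chips back.
wafers = 4
candidates_per_wafer = 104
good_chips = 7

total_candidates = wafers * candidates_per_wafer   # 416
yield_fraction = good_chips / total_candidates     # ~0.017

print(f"total candidates: {total_candidates}")
print(f"good chips: {good_chips}")
print(f"yield: {yield_fraction:.1%}")               # ~1.7%, i.e. 'under 2%'
```

That works out to roughly 1.7%, which is where the "under 2%" in the headline comes from.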


Comments

  • Cleffy Member Rare Posts: 6,412

    I remember something similar happening 3 years ago with ATI.

  • drbaltazar Member Uncommon Posts: 7,856

    mm, Nvidia isn't backed by a big corporation like the other two are, so for them it's a bigger job than for, say, ATI, which is now backed by AMD. And who does AMD turn to for its technical expertise? Yep, IBM, the same company Sony turned to for the PS3. Intel is so big they have their own facilities to test this stuff, and now, with 32 nm well underway, that gives Intel the room they needed to find or even develop their own technology for the graphics market.

    Yes, for the next season Nvidia is out of the top-speed race, but they make plenty of money with this season's technology, so they'll just be delayed a bit. In the long run, though, that delay might prove fatal to ATI and Nvidia: Intel isn't sleeping on its laurels. What is Intel's big plan? Probably on the web! One thing is sure: by the time the next-gen Nvidia card is ready, Intel will be very close to ready as well. Kind of scary.

  • Cleffy Member Rare Posts: 6,412

    I still don't think Intel is anything to worry about in the graphics market.  Their graphics thus far have been a joke, and the speculation on Larrabee puts it more in line with a parallel computing part than a graphics chip.  Intel just has no experience doing anything at the massive calculation scale of ATI, AMD, or nVidia.  Despite Intel having held the processor lead since 2006, nearly every single supercomputer is AMD-based.  It speaks volumes about how well Intel can make parallel processing work.

  • noquarter Member Posts: 1,170


    Originally posted by frozenvoid
    www.semiaccurate.com/2009/09/15/nvidia-gt300-yeilds-under-2/


    How many worked out of the (4 x 104) 416 candidates? Try 7. Yes, Northwood was hopelessly optimistic - Nvidia got only 7 chips back. Let me repeat that, out of 416 tries, it got 7 'good' chips back from the fab.
     
    It's over, Nvidia is finished!



    This is why there's no way GT300 parts are happening in November, and it's not even because of the low yield. When ATI got their first silicon back, all the reports said it takes a minimum of 3 months from first silicon to first product if everything goes perfectly: no or few bugs are found in the design, and the process goes well. More likely 4 months, and that's not even counting actually building up stock.


    If nVidia just got their first chips back, then we're at least 3-4 months away, and since this is a brand new design with features they've never worked with before (unlike ATI, who have had tessellation forever), there are bound to be lots of bugs pushing that back at least another month or two. Now add in the fact that their yields are terrible, and it's going to be at least around March before we see these chips.

  • Quizzical Member Legendary Posts: 25,351

    If the Radeon 2000 series didn't kill ATI, then even the GT 300 barely beating the Radeon 6000 series to market won't kill Nvidia.

    It's not a question of whether they can handle 40 nm or not; it's a question of when.  If it's not soon (and the situation is looking worse for Nvidia with every day that passes), then ATI basically gets to mint money for a while with no real competition from Nvidia.  That, of course, may be exactly what AMD needs to survive itself, coming off of 11 consecutive quarters of losing money--and with no real prospects for improvement of the desktop processor situation until Bulldozer in 2011.

    The problem for Nvidia isn't merely that they have no single GPU to compete with Cypress.  It's that their huge die sizes prevent them from being competitive at the lower end as well, since they simply can't slash prices to compete with ATI unless they're willing to lose money on every card they sell.  If Juniper (rumored to be branded as Radeon HD 5770) is competitive with the GeForce GTX 275 at 1/3 of the die size (and hence about 1/3 of the production cost), then ATI can price it a hair below what it costs Nvidia to make a GTX 275, and ATI makes a tidy profit at that price while Nvidia loses money on every card they sell (the rough cost arithmetic is sketched at the end of this post).  Nvidia has the same problem at the lower end if their cards have performance comparable to Redwood and Cedar but with much larger die sizes, and the only place they can be competitive on price without losing money on each additional sale is at the very low end, with cards that might as well be integrated graphics.

    And let's not forget that making money on each card you sell isn't enough.  There's a ton of development costs that have to be paid for, too.  Nvidia's only real hope to make progress on that count before GT 300 arrives is that:

    1)  ATI tries to charge enough for their mid range and low end cards to allow Nvidia to undercut them on price,

    2)  Evergreen isn't nearly as good as one would think from the specs, and the demonstration of Cypress was fraudulent,

    3)  there are more Nvidiots out there than we think who would sooner buy a worse card from Nvidia than a better one from ATI, or

    4)  either CUDA or PhysX suddenly gets a killer app out of nowhere that makes people want them.

    None of those look likely.  Consider that Nvidia can't or won't even match ATI's prices on Radeon HD 4000 series cards of comparable performance at the same node and with a much smaller die size discrepancy.

    In slightly brighter news for Nvidia, there seems to be some glitch with GT 200 cards paired with Intel Core i5/i7 processors that prevents them from getting the performance that they should.  If they can fix that, they can get a quick performance boost essentially for free.  Most reviews comparing video cards use Intel processors because they're the fastest at the high end (so that a processor bottleneck doesn't make different cards perform essentially the same), they're the most common ones bought by enthusiasts, and there's the potential for abuse with AMD processors and chipsets intentionally making an AMD video card work better than an Nvidia one.  See the last two charts on this page:

    http://www.anandtech.com/mb/showdoc.aspx?i=3639&p=3
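    Referring back to the die-size cost point earlier in this post, here is a rough sketch of that arithmetic, assuming a fixed wafer cost and ignoring yield (which only widens the gap). The wafer price is a made-up placeholder and the die areas are the rumored ballpark figures of the time, not confirmed numbers:

    ```python
    import math

    # Illustrative die-cost arithmetic: with a fixed wafer cost, per-die cost is
    # roughly proportional to die area, so a chip ~3x the size costs ~3x as much
    # to make (more in practice, since larger dies also yield worse).
    # Wafer price and die areas below are placeholders/ballpark figures.

    WAFER_COST = 5000.0                        # hypothetical 300 mm wafer cost, in dollars
    WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2  # ~70,700 mm^2, ignoring edge losses

    def cost_per_die(die_area_mm2: float) -> float:
        """Approximate manufacturing cost of one die, ignoring yield and edge effects."""
        dies_per_wafer = WAFER_AREA_MM2 / die_area_mm2
        return WAFER_COST / dies_per_wafer

    juniper_like = cost_per_die(166)  # ~166 mm^2, the rumored Juniper ballpark
    gt200b_like = cost_per_die(470)   # ~470 mm^2, the GT200b (GTX 275) ballpark

    print(f"~166 mm^2 die: ~${juniper_like:.0f} each")  # roughly $12
    print(f"~470 mm^2 die: ~${gt200b_like:.0f} each")   # roughly $33, about 3x as much
    ```

    Once yield is factored in, the gap gets even wider, since a larger die is more likely to catch a defect.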

  • dfan Member Posts: 362

    Are you sure that's a bug? I mean, even previous AMD architectures have shown similar results at high AA settings. It is something related to HyperTransport operation.

  • Cleffy Member Rare Posts: 6,412

    On the front of nVidia cards having less performance on Intel chipsets, that's entirely Intel's fault.  Intel chipsets have never performed well with nVidia parts.  Unlike AMD, nVidia doesn't have a cross-company patent agreement enforced by IBM, so Intel would have to get details directly from nVidia.  Considering AMD allows its OEMs to install any proprietary chipset on their boards, it's just an advantage for AMD.

  • Quizzical Member Legendary Posts: 25,351

    Well then, how do you explain Nvidia pushing for pairing their video cards with Core i5 processors, where they don't work as well as with a Phenom II processor?

    http://anandtech.com/mb/showdoc.aspx?i=3623&p=2 

    Maybe they're going to produce some benchmarks showing that an Intel processor with an Nvidia video card performs better than an AMD processor and an ATI video card in tasks that are processor-bound?

  • drbaltazar Member Uncommon Posts: 7,856
    Originally posted by Cleffy


    I still don't think Intel is anything to worry about in the graphics market.  Their graphics thus far have been a joke, and the speculation on Larrabee puts it more in line with a parallel computing part than a graphics chip.  Intel just has no experience doing anything at the massive calculation scale of ATI, AMD, or nVidia.  Despite Intel having held the processor lead since 2006, nearly every single supercomputer is AMD-based.  It speaks volumes about how well Intel can make parallel processing work.

    lol, Intel is trying to integrate the power of an ATI or nVidia chip onto their motherboards. That's a lot more work than doing a separate card; Intel wants to do away with all those add-on cards, and they are very close to their first part that's powerful enough to actually be called a gaming version. Kind of cool for Intel to be ignored.

  • dfan Member Posts: 362

    The situation with multiple cards isn't as simple as 1+1, imo.

    Marketing SLI with the P55 chipset doesn't make much sense to me: it has only a single x16 PCIe 2.0 link, which gets split when using multiple cards (rough numbers below). Although with the i5 the lanes connect directly to the CPU, which reduces latencies at least a little.

    I brought this up since I remember seeing the same effect before. No idea what the truth is, but I don't believe in artificial bottlenecking.
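    A quick sketch of that bandwidth point, assuming PCIe 2.0's nominal ~500 MB/s per lane per direction and the P55 behaviour of splitting its x16 link into x8 + x8 when two cards are installed:

    ```python
    # Rough PCIe bandwidth arithmetic for the P55 point above.
    # PCIe 2.0 is nominally ~500 MB/s per lane, per direction.
    PCIE2_MB_PER_LANE = 500

    def per_card_bandwidth_mb(total_lanes: int, cards: int) -> int:
        """Bandwidth per card (MB/s, one direction) when the link is split evenly."""
        return (total_lanes // cards) * PCIE2_MB_PER_LANE

    print(per_card_bandwidth_mb(16, 1))  # one card on x16:          8000 MB/s
    print(per_card_bandwidth_mb(16, 2))  # two cards on P55 (x8+x8): 4000 MB/s per card
    ```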

  • jaysins Member Uncommon Posts: 107

    I'm not familiar with that site, but the article does seem rather opinionated and I'm not sure I trust it; it seems rather dubious. I do suspect that Nvidia is having problems, as this kind of silence from them is extremely uncharacteristic. I think the likely scenario is that GT300 gets pushed back a few months, which gives ATI a really nice quarter or two. When GT300 does launch it will likely be more powerful than the 5000 series, but I doubt it will be able to compete head to head in price/performance, which has of late been ATI's game plan. In a generation or two Nvidia will have had time to counter the smaller-die, better-cost plan ATI has executed so well, and then things will really get interesting. I hope ATI can pull out some nice quarters, as they are in need of them. Plus, I have a CrossFire motherboard, and a couple of new cards in there sounds pretty nice to me.

  • peteski123 Member Uncommon Posts: 447

    Just curious,

    Exactly what games need these mentally fast cards atm? Barring the very few games that you can play on 3 monitors, who cares that one can go faster than the other atm? Surely if they play games at full speed, isn't that good enough?

  • Xasapis Member Rare Posts: 6,337

    Until the first DirectX 11 games start appearing, these cards will be basically useless, at least for people who don't need an upgrade. nVidia still has time to catch up, unless developers start mass-producing games that take advantage of the DX11 features.

  • AmazingAvery Age of Conan Advocate Member Uncommon Posts: 7,188

    TSMC did say 40nm yields were up to 60% in July from a previous 30%, plus the Nvidia CEO has a trip out there soon, says digitimes:

    "Huang is also expected to check on TSMC's 40nm process, for which TSMC has claimed yield rates have already risen from 30%, to 60% in July, and defect density will drop to only 0.2 in October. Huang will also check the progress of its upcoming GT300 GPU which is expected to launch in December"

    It's up to Jensen really when to launch; I am still thinking we will see something before the end of the year (rough yield math from those defect-density numbers is sketched after this post).

    EDIT: I just noticed the article was written by Charlie Demerjian.... says it all, eh!
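    As an aside on the numbers quoted above: the 0.2 defects/cm^2 figure can be turned into a rough yield estimate with the classic Poisson yield model. This is only a back-of-the-envelope sketch, and the die sizes used are the rumored ballpark figures of the time, not confirmed numbers:

    ```python
    import math

    # Back-of-the-envelope yield estimate from the defect density quoted above,
    # using the simple Poisson yield model: yield ~= exp(-D0 * A),
    # with D0 in defects per cm^2 and A the die area in cm^2.
    # Die sizes are rumored ballpark figures, not confirmed numbers.

    def poisson_yield(defects_per_cm2: float, die_area_mm2: float) -> float:
        """Fraction of dies expected to be defect-free."""
        return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

    D0 = 0.2  # defects per cm^2, the figure claimed for October
    for label, area_mm2 in (("~Cypress-sized die (~330 mm^2)", 330),
                            ("~GT300-sized die  (~500 mm^2)", 500)):
        print(f"{label}: ~{poisson_yield(D0, area_mm2):.0%} defect-free")
    ```

    Even at the same defect density, the larger die takes a noticeably bigger yield hit, which is one reason the two companies' 40nm experiences can look so different.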



  • AmazingAvery Age of Conan Advocate Member Uncommon Posts: 7,188

    Oh, just as I typed the above, look what came out:

    www.fudzilla.com/content/view/15689/1/

    "After a lot of rumours about bad yields with GT300, Nvidia has decided to talk to us and give us a simple quote about the state of its next generation 40nm product line.

    Senior manager from Nvidia has told Fudzilla that “Our (Nvidia’s) 40nm yields are fine. The rumors you are hearing are baseless.”

    This comes after some stories that GT300 yields are really bad and that next generation Nvidia is in really bad shape. According to AMD’s competitive analysis team, only nine chips should work per wafer, but they got this number due to a bad translation and we can definitely dismiss this number as completely inaccurate information.

    As we've said before, the chip we call GT300 is a completely new design and it simply could not come out earlier. Nvidia certainly doesn’t like the fact that ATI is first with new technology again, but it comforts them that they can make its DirectX 11 faster.

    You can expect a lot of architectural changes - the chip is completely redesigned and much faster than the GT200 generation. We would not be surprised if Nvidia starts talking about the chip ahead of official announcement as it currently hurts from positive Radeon 5870 reviews around the globe"

     

    GT300 taped out, 2009 launch if all goes smooth:

     

     



    "September is the month when 40nm next generation architecture, something we call GT300 should get taped out. We can confirm that this has already happened and that some VIP analysts and investors have already laid eyes on this DirectX 11 card.

    If you do the math, it takes at least six weeks after tape out to release the card and get it ready for full scale production and if the tape out occurred on the first days of September, the full production can start in the middle of October. It is always at least six weeks.

    Nvidia was always aiming for a launch close to Black Friday, the first Friday after Thanksgiving and this year it is on November 27th. It is important to launch the product before this date as most of the shopping is usually done around this date.

    If something goes wrong with Nvidia's schedule, they can easily start shipping in December time, but the worst case scenario is to show the card in the next few weeks just to keep the fanboys comfortable with waiting for the next Nvidia's DirectX 11 offering and ship it as soon as possible.

    You might remember they did this with Geforce GTX 295 last year, and it did work for them"

     



  • Quizzical Member Legendary Posts: 25,351

    ATI can make cards on TSMC's 40 nm process just fine, so the problem isn't TSMC.

    If Nvidia has working hardware, then why aren't they showing it off, like AMD did with DirectX 11 hardware in the Spring?  Surely that would be a lot more convincing to people who may wait to get a GT 300 than the smoke and mirrors they're offering now.  And if they don't have working hardware, then what reason is there to believe that the very next thing they try will work flawlessly and not need any redesign that delays it still further?

    The Radeon X1800 took 10 months from tapeout to release.  The Radeon HD 2900 took 11 months from tapeout to release.

    http://www.xbitlabs.com/news/video/display/20060612121554.html

    (old story, but a citation for the above claim)

    Those are longer times than normal, but a chip having taped out doesn't mean release is imminent.  Even if GT300 did tape out in September, waiting one production cycle to get the cards back and see that they work, and then another cycle to actually make cards in bulk for launch, means you're looking at a paper launch in December if things go flawlessly for Nvidia.  Trying to piece together DirectX 11, GDDR5, and various other features brand new to Nvidia on a new 40 nm process with a brand new architecture and an enormous die size is not an ideal recipe for things to go flawlessly.  Intel is the best in the world at getting the manufacturing processes for high performance chips to work right, and even they're not willing to try a new architecture and a new node at the same time, let alone with an enormous die size.

  • Quizzical Member Legendary Posts: 25,351

    Apparently we'll get a better idea of whether Nvidia can do 40 nm in a few weeks.

    http://www.fudzilla.com/content/view/15698/1/

    Theoretically, that should be kind of like the Radeon HD 4770 except slower and 6 months later.  But it is 40 nm, which is progress.

     
