
GeForce GTX Titan kind of launches, but not really

QuizzicalQuizzical Member LegendaryPosts: 25,347

This is Nvidia's consumer card based on their new high end GK110 GPU.  It's the same Kepler architecture as the rest of the GeForce GTX whatever cards.  Whereas a GK104 has 8 SMXes and 4 memory channels, GK110 has 15 and 6.

The GeForce GTX Titan has to clock itself substantially lower than the GeForce GTX 680, 670, 660, etc., however.  It's also a salvage part, with one of the SMXes disabled, most likely because yields aren't that great.  Not that you'd expect excellent yields on a 551 mm^2 chip.

While Nvidia isn't allowing benchmarks to be posted yet, from the theoretical specs, you can expect it to come in at 40%-50% faster than a GeForce GTX 680.  Its advantage over a Radeon HD 7970 GHz Edition should be a bit less than that, but still substantial.  This is the new top end GPU chip, and AMD doesn't have a competing product this generation.  If you want maximum performance at any price, this is the card you want.
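
A rough back-of-envelope check of that estimate, using the announced specs.  The boost clocks and bandwidth figures below are my reading of the spec sheets (roughly 876 MHz and 288 GB/s for Titan versus 1058 MHz and 192 GB/s for the GTX 680), not confirmed or measured numbers.

```python
# Rough spec-based estimate of Titan vs. GTX 680 (assumed clocks, not benchmarks).
titan  = {"smx": 14, "boost_mhz": 876,  "mem_gbps": 288}  # GK110 with 1 of 15 SMXes disabled
gtx680 = {"smx": 8,  "boost_mhz": 1058, "mem_gbps": 192}  # fully enabled GK104

# Both chips use the same SMX design, so SMX count x clock is a fair proxy for shader throughput.
shader_ratio    = (titan["smx"] * titan["boost_mhz"]) / (gtx680["smx"] * gtx680["boost_mhz"])
bandwidth_ratio = titan["mem_gbps"] / gtx680["mem_gbps"]

print(f"Shader throughput: {shader_ratio:.2f}x")    # ~1.45x
print(f"Memory bandwidth:  {bandwidth_ratio:.2f}x")  # ~1.50x
```

Both ratios land in the 1.45-1.5 range, which is where the 40%-50% figure comes from.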

If you're on any semblance of a budget, however, this is not the card you're looking for.  The price tag is $1000.  It offers perhaps double the graphical performance of a GeForce GTX 660 or Radeon HD 7870, while costing more than four times as much.  That makes it terrible on a price/performance basis, but Nvidia isn't marketing it as a budget-friendly product.  If you want top end single-GPU performance, this is the only way to get it, so you pay what it costs or do without.

Nvidia promises a TDP of 250 W.  Do I believe that's an honest TDP?  In a word:  no.  The Radeon HD 7970 GHz Edition has a TDP of 250 W.  The GeForce GTX 680 has an official TDP of 190 W, but an honest TDP is probably a bit higher than that.  To stay inside of 250 W, Nvidia would need huge efficiency gains over both of those on the same process node.  Do I believe they can manage that while also burning far more power on a much larger pool of very high clocked GDDR5 memory, getting the least efficient chips (the good ones go into Tesla and Quadro cards that cost several times as much), and spending a fair bit of power on GPGPU/enterprise bloat (which makes Tahiti substantially less efficient than AMD's other 28 nm GPUs)?  Definitely not.  So I'd assume an honest TDP in the ballpark of 300 W.

Nvidia expects the card to be available for sale next week.

Comments

  • Sal1Sal1 Member UncommonPosts: 430
    250 watts or higher just for the graphics card? Wow. Pretty soon we will need a separate power supply just for these babies. lol
  • RidelynnRidelynn Member EpicPosts: 7,383

    We've been at 250W for a few generations now for the top tier cards. And many cards have eked over that even (590GTX, AMD5990, etc.)

    AMD recently came out with technology to be able to cap the power draw of a card (PowerTune), which can set a power level and throttle the card to maintain it as required. nVidia hasn't really come up with anything similar yet, other than driver profiles to artificially cap certain known power-hog processes (which is what AMD did prior to PowerTune), so nVidia cards can go over their power caps if a process drives them hard enough.
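
    Conceptually, a power cap like PowerTune boils down to a control loop along these lines. This is only an illustrative sketch, not AMD's actual algorithm; the cap, step size, and the read_power_draw()/set_clock_mhz() helpers are all made up.

    ```python
    # Illustrative sketch of a PowerTune-style power cap (not AMD's real algorithm;
    # read_power_draw() and set_clock_mhz() are hypothetical placeholders).
    POWER_CAP_W    = 250
    CLOCK_STEP_MHZ = 13
    STOCK_MHZ      = 1000
    MIN_MHZ        = 300

    def regulate(read_power_draw, set_clock_mhz, clock_mhz):
        """Nudge the core clock down when estimated board power exceeds the cap,
        and back up toward the stock clock when there is headroom."""
        if read_power_draw() > POWER_CAP_W:
            clock_mhz = max(clock_mhz - CLOCK_STEP_MHZ, MIN_MHZ)
        elif clock_mhz < STOCK_MHZ:
            clock_mhz = min(clock_mhz + CLOCK_STEP_MHZ, STOCK_MHZ)
        set_clock_mhz(clock_mhz)
        return clock_mhz
    ```

    The point of a loop like that is that the cap gets enforced no matter what the workload does, instead of relying on driver profiles for known power-hog programs.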

  • CleffyCleffy Member RarePosts: 6,412

    This thing might be too big to be practical.  If it's clocked too low then it will suck.  Clock matters.  The HD 3870 ran so cool that I was able to stably clock it to 950 MHz with GDDR4 memory.  This made it perform far better than it benchmarked at.  If it clocks higher, then heat and power draw will be an issue due to the die size.

    I think there will still be a market for this card.  People still bought the GTX 280 purely because it was the best performing part, or simply because it was Nvidia, despite all the other disadvantages it had compared to the HD 4870.

  • MyrdynnMyrdynn Member RarePosts: 2,479

    a $1000 graphics card made from salvaged parts that doesn't even run all of its components, supposedly because yields aren't that great.

    no thanks, this doesn't sound appealing at all to me

     

  • TibbzTibbz Member UncommonPosts: 613
    Originally posted by Myrdynn

    a $1000 graphics card made from salvaged parts that doesn't even run all of its components, supposedly because yields aren't that great.

    no thanks, this doesn't sound appealing at all to me

     

    rofl no kidding.  Though I would never spend $1000 on a GPU (or CPU) unless it rendered a virtual 3D, anatomically correct hologram (with collision detection)…

  • RidelynnRidelynn Member EpicPosts: 7,383

    ~Most~ high end silicon chips are salvaged and don't run all their parts, and/or run at reduced frequencies. That part isn't abnormal in the industry.

    The only difference between an Intel Core i5 3570/3550/3470/3450/3330 is the clock speed it's sold (and locked) at (3.4/3.3/3.2/3.1/3.0 GHz respectively). They are all from the same wafer and same design; it's just a matter of how well they perform coming off the assembly line: the better ones get sold at higher stock clocks, while the salvage parts still work, just not as optimally, so they get clocked a bit lower and still work fine.

    It's the same with the 670GTX and 680GTX: they come from the same wafer, and the 670GTX just has a single SMX disabled and runs at a slightly lower clock speed. It's not like the 670s are defective; it's just that not every SMX came out of fab testing correctly, so the ones that didn't work were fused off and disabled, and the rest of the chip works just fine.
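
    As a made-up illustration of that binning step (the clock thresholds below are just the stock base clocks, and the test itself is hypothetical):

    ```python
    # Hypothetical sketch of how a tested GK104 die might be sorted into a product bin.
    # Thresholds are the stock base clocks; the real criteria are Nvidia's business.
    def bin_gk104(working_smx_count, max_stable_mhz):
        if working_smx_count == 8 and max_stable_mhz >= 1006:
            return "GTX 680"   # fully enabled, clocks well
        if working_smx_count >= 7 and max_stable_mhz >= 915:
            return "GTX 670"   # one SMX fused off, slightly lower stock clock
        return "lower bin or scrap"

    print(bin_gk104(8, 1100), "|", bin_gk104(7, 950))  # GTX 680 | GTX 670
    ```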

  • QuizzicalQuizzical Member LegendaryPosts: 25,347
    Originally posted by Ridelynn

    ~Most~ high end silicon chips are salvaged and don't run all their parts, and/or run at reduced frequencies. That part isn't abnormal in the industry.

    The only difference between an Intel Core i5 3570/3550/3470/3450/3330 is the clock speed it's sold (and locked) at (3.4/3.3/3.2/3.1/3.0 GHz respectively). They are all from the same wafer and same design; it's just a matter of how well they perform coming off the assembly line: the better ones get sold at higher stock clocks, while the salvage parts still work, just not as optimally, so they get clocked a bit lower and still work fine.

    It's the same with the 670GTX and 680GTX: they come from the same wafer, and the 670GTX just has a single SMX disabled and runs at a slightly lower clock speed. It's not like the 670s are defective; it's just that not every SMX came out of fab testing correctly, so the ones that didn't work were fused off and disabled, and the rest of the chip works just fine.

    While it's common for high end chips to have salvage parts, what's not common is for the top bin to itself be a salvage part.  The Ivy Bridge quad cores you cite in some cases have various things disabled for reasons of market segmentation, not yields.  A Core i7-3770K, for example, takes a fully working chip and disables ECC memory support.  Core i3 dual cores disable AES-NI, not because it doesn't work, but because Intel wants you to have to buy a Core i5 version for that.

    Video cards give a cleaner example, where the GeForce GTX 670, GTX 660 Ti, and OEM version of the GTX 660 are all different salvage bins of the same GK104 die.  But Nvidia does offer a fully functional GK104 die (or two) in the GeForce GTX 680, GeForce GTX 690, GeForce GTX 680M, Quadro K5000, and Tesla K10.  What's unusual about the GK110 die is that Nvidia doesn't offer a fully functional die in any chips--not GeForce, not Quadro, not Tesla.  They did the same thing with GF100, again, due to awful yields.

  • KabaalKabaal Member UncommonPosts: 3,042
    They're just trying to goad AMD into launching the next series of cards first; AMD already has the proper 8*** cards ready to go.
  • Sal1Sal1 Member UncommonPosts: 430
    Originally posted by Ridelynn

    We've been at 250W for a few generations now for the top tier cards. And many cards have eked over that even (590GTX, AMD5990, etc.)

    AMD recently came out with technology to be able to cap the power draw of a card (PowerTune), which can set a power level and throttle the card to maintain it as required. nVidia hasn't really come up with anything similar yet, other than driver profiles to artificially cap certain known power-hog processes (which is what AMD did prior to PowerTune), so nVidia cards can go over their power caps if a process drives them hard enough.

    I haven't bought any new equipment in many years, so I didn't know this. I am embarrassed to admit my PC's power supply is 305 watts. lol

  • QuizzicalQuizzical Member LegendaryPosts: 25,347

    Benchmarks are out now.  The GeForce GTX Titan consistently crushes the GeForce GTX 680 by about as much as expected.

    What's interesting is that it doesn't consistently crush the Radeon HD 7970 GHz Edition.  Sometimes it does, of course.  But the 7970 actually beats Titan now and then, though never by all that much.  On average, Titan is maybe 1/3 faster than a Radeon HD 7970 GHz Edition.

    One interesting takeaway from the Titan reviews is that the Radeon HD 7970 GHz Edition is now clearly a faster card than the GeForce GTX 680.  AMD launched the 7000 generation as soon as they had working cards and stable drivers, and was able to get huge performance improvements out of drivers months later that didn't make it into the initial reviews.

    Titan does mean Nvidia clearly has the fastest card, but it costs $1000.  AMD can offer you about 3/4 of the performance for $430 (Radeon HD 7970 GHz Edition), or 1/2 of the performance for $220 (Radeon HD 7870).  That makes Titan into a product only for people who absolutely have to have the very fastest and are willing to pay whatever it costs.  It's kind of like Intel's Sandy Bridge-E six core processors in that way.
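
    To put rough numbers on that, here is a quick performance-per-dollar comparison.  The relative performance figures are my estimates from above, not measured benchmark data.

    ```python
    # Rough performance per dollar, using this post's estimates (not measured data).
    cards = {
        "GTX Titan":       {"price_usd": 1000, "relative_perf": 1.00},
        "HD 7970 GHz Ed.": {"price_usd": 430,  "relative_perf": 0.75},
        "HD 7870":         {"price_usd": 220,  "relative_perf": 0.50},
    }

    for name, card in cards.items():
        # Performance per dollar, normalized so that Titan = 1.00.
        value = (card["relative_perf"] / card["price_usd"]) * 1000
        print(f"{name:16s} {value:.2f}")
    ```

    On those numbers, the 7870 delivers more than twice the performance per dollar of the Titan, and the 7970 GHz Edition roughly 1.7 times as much.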

    Titan is more energy efficient than AMD's Tahiti-based cards.  Apparently Titan does cap the TDP at 265 W, but generally runs a lot closer to that limit than AMD cards do to their PowerTune limits.  That means that Titan doesn't have much in the way of overclocking headroom:  you can clock it higher, but it will just throttle itself back.  Nvidia isn't allowing board partners to change this behavior, either.

    Nvidia likes to have a super high end product that people can drool over, but hardly anyone will actually buy.  Part of the hope is that people will read Titan reviews, think Nvidia must be awesome, and run out to buy a GeForce GTX 650 that they can actually afford without comparing it to a Radeon HD 7750 or 7770.  Remember the $800 GeForce 8800 Ultra?  How about the $650 GeForce GTX 280?  The only reason that the GeForce GTX 480 and GTX 580 didn't have a stratospheric price is that the Fermi architecture was a disaster, so Nvidia's enormous die wasn't able to beat AMD's much smaller die by all that much.  The GTX 280 likewise had its price slashed in a hurry when AMD launched the Radeon HD 4870 at $300, which left Nvidia's top end card with a much smaller lead than Nvidia had expected.

    Ultimately, the GeForce GTX Titan isn't going to meet the same fate as the GTX 280.  AMD doesn't have an answer for it other than to point at the price tag (though admittedly, the overwhelming majority of gamers will find that a convincing reason not to buy a Titan), as AMD has confirmed that the Radeon HD 7970 GHz Edition is their high end for this year.  With a die shrink to 20 nm around the end of the year, AMD will likely be able to edge out the Titan without needing an enormous die.  But that still leaves Titan with about a year alone at the top of the market.

    There is also the GeForce GTX 690 and unofficial Radeon HD 7990, both of which commonly beat Titan in average frame rates.  But that's SLI or CrossFire, so if you're looking to spend $1000 on video cards, you're likely better off going with one Titan than two of something else in SLI or CrossFire.  Though I could kind of understand going with two 7970 GHz Edition cards, as they are cheaper.

    Even so, my recommendation for people looking to build a fairly high end gaming system is to ignore the top end cards and look at a Radeon HD 7850, 7870, 7950, or 7970 or GeForce GTX 660 or GTX 670, instead.  The price tag matters.

  • RidelynnRidelynn Member EpicPosts: 7,383

    The most interesting part of the Titan launch: GPU Boost 2.0

    Looks like nVidia can finally cap their TDPs.
