
Nvidia announces the new Titan X, a powerful $1,200 GPU


Comments

  • dmm02 Member Uncommon, Posts: 40
    $1200, holy s***... If I recall correctly, I paid $500-600 for the Nvidia 5800 when it was first released... Didn't realize inflation had increased that much in such a short time!
  • Dullahan Member Epic, Posts: 4,536
    Ridelynn said:
    The (only) card will be available on August 2nd in the US and Europe for $1,200, with an Asia release forthcoming.

    I wonder if they will have wide availability like they have with the 1080.

    Also, only FE versions. I guess that is typical for the Titans though.

    I'll stop by Best Buy and grab one.


  • simsalabim77 Member Rare, Posts: 1,607
    I just don't understand the hardware "hobby" at all. $1200 for a card you stick in a computer to play video games on. I mean, I like video games a lot, but not enough to drop $1200 to get better FPS and prettier graphics. 
  • Ridelynn Member Epic, Posts: 7,383
    I just don't understand the hardware "hobby" at all. $1200 for a card you stick in a computer to play video games on. I mean, I like video games a lot, but not enough to drop $1200 to get better FPS and prettier graphics. 
    All depends on your disposable income and priorities. I've heard of people blowing way more than this on their car, or home theatre, or video collection, or model train, or going out to eat on an annual basis.

    Maybe you don't like video games, or computer hardware, that much - but odds are you're spending at least that amount on something you do like that someone else would look at and say "WTF, what a waste."
  • Quizzical Member Legendary, Posts: 25,348
    Ridelynn said:
    I agree that something doesn't quite pass the smell test.

    This doesn't follow the pattern (or as some people like to call them, statistics). It seems a very high cost effort to introduce a card that may sell in the single-digit thousands. Of course, no one is sharing actual sales numbers, but given that no Titan model at all even pops up on the Steam Hardware survey, it isn't that big.

    But, the x80Ti models do show up - the 980Ti is at around 1%, and the 780Ti still registers at 0.25%

    My guesses: HBM2 is not looking promising, and/or GP100 is just very borked (which is what Quiz suspects), and/or (and this one is a long shot) yields/production/whatever of GP104 are just so bad they needed to introduce these now to take the pressure off that inventory. It takes longer than a few months to spin up a new die, so for GP102 to be out on a card (even a card that will be, by all indications, extremely low volume), this had to start months ago.

    Any which way, it means that a GP100-based Titan/1080 Ti wasn't going to work out in time to make this generation, or (and I doubt this) this was nVidia's master plan all along. Now, Titans don't move a lot of cards, but the high-end Ti does move some, and if they share the same die, then you don't get one without the other. And if you aren't quite sure how your yields are going to be yet - release the most constraining part first at a prohibitive cost, and see what happens...

    There's a large and important difference between "HBM2 is problematic" and "Nvidia's implementation of HBM2 is problematic". In 2010, for example, Nvidia's efforts at GDDR5 were a mess, while AMD's implementation of it did everything you'd hope for. This was largely because AMD started on GDDR5 well before Nvidia, and had five separate chips that used GDDR5 by the end of 2009. AMD's first GDDR5 controller was pretty broken, too, but that was long in the past by the time Nvidia started on it.

    There's nothing illegal about seeing that a chip is broken and redoing it.  Nvidia did that with pretty much the entire Fermi generation.  They may not have intended for the GeForce 500 series to exist at all, other than perhaps as rebrands of the 400 series.  But the 400 series was so broken that they went and redid all of the chips to fix them.

    Having to redo chips does cause development cost and time to market problems, however.  The most recent AMD GPU chip that strikes me as being that way is RV790, which was basically a second effort at RV770 that clocked higher.  The RV770 chip was nice in its day, but as the first attempt at making a GDDR5 controller for a commercial chip, AMD surely screwed up some things.  Having a chance to iterate a few times is why everything was good on the GDDR5 front for AMD by 2010.  It wouldn't be terribly surprising if something analogous plays out with HBM2:  GP100 is Nvidia's first attempt at it, while AMD already has a lot of experience with HBM.
  • Quizzical Member Legendary, Posts: 25,348
    dmm02 said:
    $1200, holy s***... If I recall correctly, I paid $500-600 for the Nvidia 5800 when it was first released... Didn't realize inflation had increased that much in such a short time!
    It's not so much inflation as that when you have a market all to yourself, you charge what you think you can get people to pay.
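    As a rough sanity check (the launch year, launch price, and ~30% cumulative US CPI figure below are my own assumptions, not anything Nvidia published), inflation alone doesn't get a ~2003-era $500-600 card anywhere near $1,200:

```python
# Hypothetical inflation check: does CPI alone explain a $1,200 GPU?
# Assumes a ~2003 launch at $500-$600 and roughly 30% cumulative US CPI
# inflation between 2003 and 2016 (approximate, not exact CPI data).
CUMULATIVE_INFLATION = 0.30  # assumed 2003 -> 2016

for launch_price in (500, 600):
    adjusted = launch_price * (1 + CUMULATIVE_INFLATION)
    print(f"${launch_price} in 2003 is roughly ${adjusted:.0f} in 2016 dollars")

# Roughly $650-$780 in 2016 dollars -- well short of $1,200,
# so the rest of the gap is pricing power, not inflation.
```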
  • GladDog Member Rare, Posts: 1,097
    I just don't understand the hardware "hobby" at all. $1200 for a card you stick in a computer to play video games on. I mean, I like video games a lot, but not enough to drop $1200 to get better FPS and prettier graphics. 
    There is no guarantee that you will get better FPS and prettier graphics. I play ESO, which is a nice-looking game. I have graphics on max at 1080p, and I still get over 50 FPS everywhere but Cyrodiil. And I don't have a Titan II. I have a 4-year-old midrange card, a Radeon 7870 2GB. I paid $229 for it and got a $15 rebate, so essentially $214. For ESO, more card would be a waste.

    Newer games are changing, and now I'll need to upgrade. My 7870 is not quite enough for Star Citizen, which had sluggish framerates (it will probably be fine in a few months when they optimize the build). But all I really need for that is an RX 480 or a GTX 1060. Both are in the low-to-mid $200s, and they both look like they will be four-year cards.

    I am going to wait until AMD releases their high end cards, the RX-490 and the Fury II.  That should push prices down even more.


    The world is going to the dogs, which is just how I planned it!


  • Hellidol Member Uncommon, Posts: 476
    So I had an R9 295X2 that I bought for just under $1,500 USD. I can tell you it's just not worth it. I was thinking of buying this card, but I changed my mind. I instead bought an ASUS Strix GTX 1080 that I plan on giving some extra cooling and OCing the hell out of (hoping for 3.0 GHz on the core).

  • Hrimnir Member Rare, Posts: 2,415
    Quizzical said:
    Hrimnir said:
    Seriously quiz, all of that? You need to take the tinfoil hat off, man. I think it's painfully obvious that this is just a full-size-die version of the same basic chip as the 1080. It's basically the Titan Black/980 Ti version of Pascal.

    The GTX 1080 is a roughly 300 mm² chip; this is probably roughly 600 mm² with the exact same overall architecture as GP104.
    Well then, what is the Tesla P100? Nvidia says the new Titan X has a very different number of transistors, so it can't be the same chip. And that's something that neither Nvidia nor AMD has done in the past.

    The Tesla C870 used the same chip as the GeForce 8800 GTX.
    The Tesla C1060 used the same chip as the GeForce GTX 280.
    The Tesla M2070 used the same chip as the GeForce GTX 480.
    The Tesla M2090 used the same chip as the GeForce GTX 580.
    The Tesla K10 used the same chip as the GeForce GTX 680.
    The Tesla K40 used the same chip as the GeForce GTX 780 Ti.
    The Tesla M6 used the same chip as the GeForce GTX 980.
    The Tesla M40 used the same chip as the GeForce GTX Titan X.
    The FireStream 9170 used the same chip as the Radeon HD 3870.
    The FireStream 9270 used the same chip as the Radeon HD 4870.
    The FireStream 9370 used the same chip as the Radeon HD 5870.
    The FirePro S9050 used the same chip as the Radeon HD 7970.
    The FirePro S9150 used the same chip as the Radeon R9 290X.
    The FirePro S9300 X2 used the same chip as the Radeon R9 Fury X.

    There is one oddball exception, the Tesla K80, which used a slightly different chip from the Tesla K40: it reduced the number of compute units, reduced the clock speed, doubled the register file size per compute unit, stuck two of them on a board, and didn't launch until Nvidia had otherwise moved on to Maxwell.

    But here, it looks like Nvidia is making a high end graphics chip that won't be used for compute, and a separate high end compute chip that won't be used for graphics, and at about the same time.  Unlike with the Tesla K80, in this case, the huge compute chip and the slightly less huge graphics chip are of very different architectures.

    Now, there's nothing unethical or illegal about doing this.  It's kind of like how Intel could create a completely custom die for the Xeon E3 rather than just reusing one from laptop and desktop quad cores.  Or they could create a completely new chip for their desktop-E series rather than just using one that is also there for Xeon E5.  But they don't, as it's too expensive to design and manufacture for too little to gain.

    Nvidia apparently decided that they would eat that cost and make separate chips.  Again, there's nothing illegal or unethical about this, but in the past, they've always shied away from it due to the cost.
    It's very likely similar to the last Titan insomuch as it's basically a 600 mm² chip without all of the double-precision bits and without the HBM2.

    They've done it before, and this will be no different; it's honestly very likely the EXACT same idea as the 980 was to the 980 Ti/Titan Z. Basically the same architecture as the 1080, just expanded out to full die size.

    Nvidia made a bet that they could get crossover sales with the Titan Black, which had all the double-precision bits: essentially it would be a Tesla on the cheap, so people who maybe weren't backed by corporate funding but needed it for CUDA calculations, etc., could pick it up and still have a good gaming card that could pull double duty as a double-precision compute card.

    However, that seemed to have failed pretty miserably, so with the Titan Z and 980 Ti they just took the standard chip architecture and blew it up to around 600 mm².

    This honestly appears to be the same idea. I think people are seriously reading into this more than they should be. It's likely just a "big" GP104.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Malabooga Member Uncommon, Posts: 2,977
    edited July 2016
    Hellidol said:
    So I had an R9 295X2 that I bought for just under $1,500 USD. I can tell you it's just not worth it. I was thinking of buying this card, but I changed my mind. I instead bought an ASUS Strix GTX 1080 that I plan on giving some extra cooling and OCing the hell out of (hoping for 3.0 GHz on the core).
    The 295X2 was a tremendous value 3 years ago; the only cards that surpass it (when CF works) are an OCed GTX 1080 and the AMD Pro Duo, which is still the fastest card in the world.

    But look at the previous Titans, cards that were surpassed mere months after release (the previous Titan X was surpassed by the 980 Ti 3 months after release).

    And people have burned GTX 1080s trying to go past 2.2 GHz, so 3.0... well, good luck, since they only managed 2.6 GHz on LN2 lol (the 980 Ti still holds all the world records).

    Nvidia has hit the wall on 16nm as far as MHz goes, but that happens when you're chasing MHz.
  • Hellscream Member Uncommon, Posts: 98
    Still only 12GB of RAM. I was hoping to see something stupid like 16GB or something, lol. Even 12GB is overkill for today's games.
  • Vrika Member Legendary, Posts: 7,888
    Malabooga said:
    Hellidol said:
    So I had an R9 295X2 that I bought for just under $1,500 USD. I can tell you it's just not worth it. I was thinking of buying this card, but I changed my mind. I instead bought an ASUS Strix GTX 1080 that I plan on giving some extra cooling and OCing the hell out of (hoping for 3.0 GHz on the core).
    The 295X2 was a tremendous value 3 years ago; the only cards that surpass it (when CF works) are an OCed GTX 1080 and the AMD Pro Duo, which is still the fastest card in the world.

    The 295X2 launched in April 2014. That's only 2 years and 3 months ago.
     
  • Ridelynn Member Epic, Posts: 7,383
    Ridelynn said:
    Gorwe said:
    Really, what's the point of these...?

    You can just get 2x R9 390 or RX 480 and be WAY more than set up to play most games at 60 fps at 1080p. And that would cost you around $600 tops.

    ...who's stupid here?

    (oh right, I forgot, you need these to play with 16x SuperSampling AA on a UHD TV... smh. Economics should REALLY become a mandatory subject in schools!)
    This isn't really meant for you to go out and buy.
    ...
    2) It's also for the investors. When they see things like this, the nVidia stock price will move (which direction is anyone's guess, but nVidia would obviously like it to be up).
    ...
    http://www.fool.com/investing/2016/07/27/heres-why-nvidias-new-titan-x-card-matters-for-inv.aspx
  • [Deleted User] Posts: 12,263
    The user and all related content has been deleted.

    A turtle doesn't move when it sticks its neck out.
  • SlyLoK Member Rare, Posts: 2,698
    Other than to brag, I'm not sure why anyone would pay this much for a graphics card. Paying ~6x more for a card that is only 2x to 3x better than today's new midrange options isn't a great choice.
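    A quick performance-per-dollar sketch makes the same point (the ~$250 midrange price and the 2x-3x speedup below are assumptions for illustration, not benchmark results):

```python
# Hypothetical performance-per-dollar comparison based on the ratios above.
# Prices and relative performance are assumptions, not measured numbers.
midrange_price, titan_price = 250.0, 1200.0

for titan_speedup in (2.0, 3.0):  # "2x to 3x better" than a midrange card
    midrange_perf_per_dollar = 1.0 / midrange_price      # normalized: midrange = 1.0
    titan_perf_per_dollar = titan_speedup / titan_price
    ratio = midrange_perf_per_dollar / titan_perf_per_dollar
    print(f"If the Titan is {titan_speedup:.0f}x faster, the midrange card still "
          f"delivers {ratio:.1f}x the performance per dollar")
```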
  • [Deleted User] Posts: 12,263
    The user and all related content has been deleted.

    A turtle doesn't move when it sticks its neck out.
  • Ridelynn Member Epic, Posts: 7,383
    edited July 2016
    Titan was always marketed as the "prosumer" card - something in between a Tesla and a GeForce. It's even named after a Tesla-based supercomputer built at Oak Ridge (which was the fastest in the world at the time it was built, and one of the first computers to exceed 10 petaFLOPS).

    This was aimed at people who wanted to do supercomputer tasks and research, but didn't have access to or couldn't quite afford the supercomputer (the Titan supercomputer cost almost $100M). And it happened to be able to run games as well, if you wanted to...

    There is a good case to be made that Titan is being developed and sold as a compute card that was built around a gaming architecture to bring costs down. And since it happened to be built up from a gaming architecture, why not leave the video outputs on the card and see if you can get a few suckers enthusiasts to buy it as well? But marketing for research equipment isn't sexy; slapping some gaming graphics on it makes it more appealing, especially when it's got top-of-class performance that gets it free press coverage. So the marketing makes it look more like a gaming card.

    Since then, Titan (the card, not the supercomputer) has dropped the GeForce brand altogether, which further distinguishes it from the gaming market.
  • SlyLoK Member Rare, Posts: 2,698
    SlyLoK said:
    Other than to brag, I'm not sure why anyone would pay this much for a graphics card. Paying ~6x more for a card that is only 2x to 3x better than today's new midrange options isn't a great choice.
    Machine learning researchers can use this level of card for their work. Yes, gamers are mentioned, but this appeals to the professional market very much.
    There are already different cards that serve that market. This is clearly a gaming card, with marketing attached to try to overlap another segment that is already covered so the price doesn't seem so high. They failed, obviously.
  • Ridelynn Member Epic, Posts: 7,383
    SlyLoK said:
    SlyLoK said:
    Other than to brag, I'm not sure why anyone would pay this much for a graphics card. Paying ~6x more for a card that is only 2x to 3x better than today's new midrange options isn't a great choice.
    Machine learning researchers can use this level of card for their work. Yes, gamers are mentioned, but this appeals to the professional market very much.
    There are already different cards that serve that market. This is clearly a gaming card, with marketing attached to try to overlap another segment that is already covered so the price doesn't seem so high. They failed, obviously.
    Well, Tesla cards that are clearly built for machine learning and research start at around $5k per card. The GP100 is estimated to run around $12,500 per card. They aren't cards that are designed or marketed to just drop into a normal PC and work. They are designed to go into racks with dozens of other cards.

    Just a hypothetical situation here...
    So if you are a grad student trying to do AI research, and you have a $50,000 grant to live on for the next 18 months... do you

    a) Spend your entire grant on a single Tesla card (and the rack system and power infrastructure to stick it in) since it's clearly built for AI research
    or
    b) Pray that you can get an internship someplace that has a mainframe, and that they give you access, and that they don't mind you running your code during off hours
    or
    c) Buy a Titan and stick it in the PC under your desk, and actually eat something other than ramen and Hot Pockets for the rest of your project. And hey, it can also play a mean game of Counter-Strike when you're not grinding on your thesis.

    Or maybe you're a startup company working with limited capital. Or maybe... a dozen other similar scenarios I could dream up.

    Yes, the marketing all seems to point to Titan being a gaming card. I don't think it is ~really~ a gaming card though. So I wouldn't necessarily call it a failure, but I do agree they are not marketing it to the real target audience. R&D marketing just isn't very compelling - it would look like the dork with the pocket protector in the corner playing with his slide rule, but video game marketing makes it look like a rock star - and to investors, that makes a big difference, often times even if it doesn't translate to actual sales figures. Titan really is the niche card for people who want Tesla, but can't afford it.
  • stayontarget Member Rare, Posts: 6,519
    dmm02 said:
    $1200, holy s***... If I recall correctly, I paid $500-600 for the Nvidia 5800 when it was first released... Didn't realize inflation had increased that much in such a short time!
    $1200 for a card and another $900-$1000 for a monitor,  the price you pay for overkill.

    Velika: City of Wheels: Among the mortal races, the humans were the only one that never built cities or great empires; a curse laid upon them by their creator, Gidd, forced them to wander as nomads for twenty centuries...

  • Jamar870 Member Uncommon, Posts: 570
    I believe either Ars Technica or AnandTech had the announcement, and it seemed mostly aimed at the compute market.

  • Quizzical Member Legendary, Posts: 25,348
    Ridelynn said:
    SlyLoK said:
    SlyLoK said:
    Other than to brag, I'm not sure why anyone would pay this much for a graphics card. Paying ~6x more for a card that is only 2x to 3x better than today's new midrange options isn't a great choice.
    Machine learning researchers can use this level of card for their work. Yes, gamers are mentioned, but this appeals to the professional market very much.
    There are already different cards that serve that market. This is clearly a gaming card, with marketing attached to try to overlap another segment that is already covered so the price doesn't seem so high. They failed, obviously.
    Well, Tesla cards that are clearly built for machine learning and research start at around $5k per card. The GP100 is estimated to run around $12,500 per card. They aren't cards that are designed or marketed to just drop into a normal PC and work. They are designed to go into racks with dozens of other cards.

    Just a hypothetical situation here...
    So if you are a grad student trying to do AI research, and you have a $50,000 grant to live on for the next 18 months... do you

    a) Spend your entire grant on a single Tesla card (and the rack system and power infrastructure to stick it in) since it's clearly built for AI research
    or
    b) Pray that you can get an internship someplace that has a mainframe, and that they give you access, and that they don't mind you running your code during off hours
    or
    c) Buy a Titan and stick it in the PC under your desk, and actually eat something other than ramen and Hot Pockets for the rest of your project. And hey, it can also play a mean game of Counter-Strike when you're not grinding on your thesis.

    Or maybe you're a startup company working with limited capital. Or maybe... a dozen other similar scenarios I could dream up.

    Yes, the marketing all seems to point to Titan being a gaming card. I don't think it is ~really~ a gaming card though. So I wouldn't necessarily call it a failure, but I do agree they are not marketing it to the real target audience. R&D marketing just isn't very compelling - it would look like the dork with the pocket protector in the corner playing with his slide rule, but video game marketing makes it look like a rock star - and to investors, that makes a big difference, often times even if it doesn't translate to actual sales figures. Titan really is the niche card for people who want Tesla, but can't afford it.
    Depending on your research needs, a $1200 card doesn't necessarily offer any advantages over a $200 card.  And if you do need a ton of computational power, you're probably better off loading up on four cheaper cards than a single $1200 card.  GP102 presumably has the usual compute stuff stripped out, so if you were hoping for the extra register space, local memory bandwidth, double precision computations, or ECC memory, you're out of luck.

    Where the really expensive cards make a lot more sense is when you have a ton of them and are concerned with density. You can power and cool 1 kW from consumer cards with just an ordinary home system. Load up on thousands of them for 1 MW and you have far more of a problem, and the other hardware that you need to power and cool that now costs a fortune. The interconnect just to let all that hardware communicate well enough to be useful can cost a fortune, too.

    If you're paying a fortune for each node of your system, getting $300 cards instead of $1200 cards doesn't necessarily save you that much.  If it cuts the performance per node in half so that you need twice as many nodes, that can easily end up costing more.  So it's more an HPC or data center thing.
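    A back-of-the-envelope sketch of that node-cost argument (every number below is made up for illustration; none of it comes from real HPC pricing):

```python
# Hypothetical cluster-cost sketch: when per-node overhead (chassis, CPUs,
# interconnect, power/cooling share) dwarfs the GPU price, cheaper but slower
# cards can end up costing more overall because you need more nodes.
NODE_OVERHEAD = 5000.0   # assumed non-GPU cost per node
GPUS_PER_NODE = 4

def cluster_cost(gpu_price, perf_per_gpu, target_perf):
    per_node_perf = perf_per_gpu * GPUS_PER_NODE
    nodes = -(-target_perf // per_node_perf)   # ceiling division
    return nodes, nodes * (NODE_OVERHEAD + GPUS_PER_NODE * gpu_price)

# Assume the $1200 card is 2x as fast as the $300 card (made-up figures).
for label, price, perf in (("$300 cards", 300.0, 1.0), ("$1200 cards", 1200.0, 2.0)):
    nodes, cost = cluster_cost(price, perf, target_perf=100.0)
    print(f"{label}: {nodes:.0f} nodes, roughly ${cost:,.0f} total")
```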

    Depending on your needs, Nvidia is sometimes a non-starter for compute on the cheap. AMD puts a lot more compute features in cards all up and down their lineup, and is pushing GPU compute for gaming. Nvidia reserves the compute-heavy stuff for the top-end cards, which this generation means GP100, so the new Titan X won't have it. But what you need depends tremendously on what code you need to run.