Nvidia talks about Maxwell, Volta, Logan, Parker, Kayla, Grid, etc.

Quizzical Member Legendary Posts: 25,347

http://www.anandtech.com/show/6842/nvidias-gpu-technology-conference-2013-keynote-live-blog

Nvidia says that Maxwell, their next generation GPU architecture, is due out in 2014.  They also introduced its successor, Volta, though they didn't predict a year.  2016 seems like a good guess.  Nvidia said that Volta would have stacked DRAM, to get a good amount of memory closer to the GPU rather than having to pass through a normal video memory bus.  Not all video memory is accessed equally often, and having some tens of MB available to store the highly-used framebuffer and depth buffer can be beneficial.  One variant of Haswell will do roughly that in a few months.  Rumors say that the next generation Xbox will do so as well.
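
To put "some tens of MB" in perspective, here is a rough back-of-envelope sketch of how big those buffers are.  The resolution, bytes per pixel, and buffer counts below are illustrative assumptions, not figures Nvidia quoted.

```python
# Back-of-envelope: how big are the "highly used" buffers mentioned above?
# Resolution, bytes per pixel, and buffer counts are illustrative assumptions.
width, height = 1920, 1080           # assume a 1080p render target
bytes_per_pixel = 4                  # 32-bit RGBA color, or 24-bit depth + 8-bit stencil

color_buffer = width * height * bytes_per_pixel   # one color buffer
depth_buffer = width * height * bytes_per_pixel   # one depth/stencil buffer
double_buffered = 2 * color_buffer                # front + back buffer

total_mb = (double_buffered + depth_buffer) / 2**20
print(f"~{total_mb:.1f} MB for double-buffered color + depth at 1080p")
# ~23.7 MB -- and multisampling roughly multiplies the render targets,
# which is why "some tens of MB" of close, fast memory is useful.
```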

On the Tegra side, they're promising Tegra Logan due next year, with a Kepler GPU.  That should finally mean modern graphics in Tegra, as opposed to the architecture from 2005 that they used in Tegra 4.  Presumably Kepler wasn't ready in time for Tegra 4, and Fermi was way too high power for the mobile use that Tegra is built for.  Tegra Parker should come the following year with a Denver CPU (that's Nvidia's custom ARM cores, rather than off-the-shelf like Cortex A15) and Maxwell GPU.

As for Grid, they talked about the high end Grid K2, but not Onlive-style gaming.  It sounds like the goal is to have a ton of GPU power available over a LAN for various professional graphics and GPU compute purposes.  If you're going to buy 8 Grid K2s and put them in a box, then the price of that is probably well into five figures, so shelling out for 10 Gbps Ethernet or some other such high bandwidth LAN connection doesn't seem like such a stretch.  Furthermore, it's virtualized, so you can have 16 people connected to a single box simultaneously, with each taking more or less of the GPU power as needed.
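
For a rough sense of why 10 Gbps Ethernet comes up, here is a sketch of the streaming bandwidth math.  The per-user resolution, frame rate, and compression ratio are illustrative assumptions, not published Grid figures.

```python
# Rough sketch of the LAN bandwidth needed to stream rendered frames to users.
# Resolution, frame rate, and the ~50:1 video compression ratio are assumptions,
# not Nvidia's published Grid specifications.
def stream_mbps(width, height, fps, bits_per_pixel=24, compression=50):
    """Approximate bitrate of one streamed display, in Mbit/s."""
    raw_bits_per_sec = width * height * bits_per_pixel * fps
    return raw_bits_per_sec / compression / 1e6

per_user = stream_mbps(1920, 1080, 60)   # ~60 Mbit/s per 1080p60 user
users = 16                               # one virtualized box, per the keynote
total_gbps = per_user * users / 1000
print(f"{per_user:.0f} Mbit/s per user, ~{total_gbps:.1f} Gbit/s for {users} users")
# Gigabit Ethernet is already saturated at that point; 10 Gbps leaves headroom
# for lighter compression (better image quality) or higher resolutions.
```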

Comments

  • ragz45 Member Uncommon Posts: 810
    LoL I consider myself mildly tech savvy, and a lot of this was over my head.  Hats off to those who understand everything said here.
  • syntax42 Member Uncommon Posts: 1,378
    Originally posted by ragz45
    LoL I consider myself mildly tech savvy, and a lot of this was over my head.  Hats off to those who understand everything said here.

    It isn't quantum physics.  More memory accessible for important stuff = good.  Faster and more powerful GPU architecture = good.  You don't have to know how it works, as long as it does.

  • Souldrainer Member Posts: 1,857

    I watched part of the press conference the other day, and what I took away from it is that Nvidia intends to combine the HD of the generation that is ending now with a new RAM configuration that can deliver extremely high frame rates even when the images are in motion.  Why?  Because it will...

    #1 Greatly reduce false blurring of images in motion.

    AND

    #2 Allow for higher overall frame rates in all games.

    Keep in mind, Nvidia is not participating in the next gen console wars.  They are mostly focusing on PC Gaming, Tegra, and Grid.


  • Ridelynn Member Epic Posts: 7,383

    Meh.

    The growth is in mobile. And will be for the near-term future - at least until we find some new disruptive technology to exploit.

    Even high-end PC graphics cards, like Titan, are mostly just there as research vehicles and marketing tools. They don't expect to sell many Titans - heck, maybe a few thousand if they are fortunate. But the R&D that went into it will be the main driving technology in the next generation of cards, and allow them to springboard forward in finding new methods - both in making faster architectures and in making current architectures more power efficient and cost efficient. And the press they get from having "The Fastest" is good too - a lot of people are nVidia diehards specifically because of marketing tools like this (as much as they may just say it's because of drivers or CUDA or PhysX or whatever other reason).

    Even having your architecture in a console - yeah that contract is a nice one, but it's mostly marketing again. Consoles sell a lot over their lifetimes, but it's not really making you any money - those chips all get sold at near cost.

    The money is in the Billions of mobile devices sold around the world, and the Millions of low cost PCs that get put to everyday use. But you aren't going to put your accelerators in there without a lot of work - because people will just default to Intel for PCs, and Intel is awful because they didn't have to be good to sell graphics accelerators - they are the default option; they just have to exist and lean on the brand recognition of their CPUs and the whole "Do-Do-Do" Blue Man Group advertising campaign. In mobile the hardware race is just starting to heat up - now that people are finding out there are options - so that will be the interesting race to watch for the next decade.

    Having those big expensive cards out now lets them find the tweaks to shrink and refine them, so that hopefully in a decade we can have that level of power in our handhelds running off batteries. So while the power increases are exciting to watch in the demos, that's not the really exciting part about GPU technology. The really exciting parts are watching energy efficiency and power density going up, as well as finding additional uses for SIMD-style computing. PhysX/Havok/Bullet are great examples. CUDA/OpenCL are still growing, and I think eventually we'll see a lot of applications exploit them (including some surprising ones, like Excel). New APIs are particularly exciting, like TressFX (which isn't a very deep API, but may lead to other exciting things).
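
    As a toy illustration of what "SIMD-style computing" means here - the same arithmetic applied independently across a large array, which is the pattern CUDA/OpenCL map onto GPUs - here is a minimal sketch using numpy on the CPU. The particle update is hypothetical and not tied to PhysX, Havok, or Bullet.

    ```python
    import numpy as np

    # Toy data-parallel workload: every particle gets the identical arithmetic,
    # which is exactly the kind of work wide SIMD units and GPUs are built for.
    # (This runs on the CPU via numpy; CUDA/OpenCL express the same idea on a GPU.)
    n = 1_000_000
    positions  = np.random.rand(n, 3).astype(np.float32)    # one particle per row
    velocities = np.random.randn(n, 3).astype(np.float32)
    gravity    = np.array([0.0, -9.81, 0.0], dtype=np.float32)
    dt = np.float32(1.0 / 60.0)                              # one 60 fps timestep

    velocities += gravity * dt       # same update applied to all n particles at once
    positions  += velocities * dt
    print(positions.shape, positions[0])
    ```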

    I don't think "Cloud" GPU computing is really going to be huge - yes, it will be used, and readily available - things like Spotify do use it to nice effect, and you can lease it now from services like Amazon EC2, but that will all be behind-the-scenes type stuff, and I don't think much of it will be terribly big. The biggest benefit of Cloud-style computing comes from having access to big data sets, and performing operations on big data - and that's why companies like Amazon and Google are huge into it. And yes, some GPU acceleration can assist with that - particularly analog processing (image processing, speech recognition, etc.), but that market isn't particularly huge - not compared to the Billions of mobile devices that all want accelerated displays.

    Not to mention we really can't pack more power into our PCs without improving energy efficiency; we have hard and fast power, thermal, and physical limits on what we can stick in there.

  • Quizzical Member Legendary Posts: 25,347

    While mobile may be high volume, it's also low cost with thin profit margins.  Everyone and his neighbor's dog can use the same ARM cores produced on the same process nodes at the same foundries, which means that if you try to charge a lot, someone else can easily undercut you.

    There are, of course, a lot of other things in an SoC besides the CPU, and that's one place where you can differentiate mobile products.  AMD and Intel also offer very different (x86) CPUs, while Qualcomm and Apple have their own custom ARM cores.  Even so, competition is tight, and if you have a chip that you want to sell for $30, while someone else will sell something about as good for $20, you're not going to get many sales.

    That's why, even though everyone agrees that there are a ton of cell phones to sell chips for, AMD has no interest in making them, and TI is actually trying to bail out of that market.

    -----

    Even in discrete video cards, for many years, the high volume parts were the low end ones that were just a step above integrated.  But you only made a few dollars on each of those, which is why ATI decided (around the time they were bought out by AMD) to focus on the $200 and $300 cards that accounted for the lion's share of video card profits.

    Or look at Apple.  They're a distant also-ran in desktops, a niche player in laptops, have only about 20% of the phone market, and while they still have about half of the tablet market, that's shrinking fast.  That doesn't stop them from making a ton of money.  If you can get the high profit margin segment of the market, you don't necessarily need the high volume, low cost segment.

    -----

    Titan doesn't need to sell millions to be profitable.  Remember that the same GK110 chip is also used in Tesla cards and will soon be used in Quadro cards.  When you can sell cards for a few thousand dollars each, tens of thousands of sales adds up to a lot of money.  Nvidia has long made a considerable fraction of its profit from the sale of low-volume Quadro cards.

  • Ridelynn Member Epic Posts: 7,383

    Low volume is low volume. High profit margin helps, but you need volume.

    Apple didn't get where it is today based on its PC sales. There's an interesting graph here:

    http://royal.pingdom.com/2010/04/09/the-money-made-by-microsoft-apple-and-google-1985-until-today/

    You know what picked up in 2004 and really started to drive Apple? The iMac had been around for 5 years at that point, the iPod for 3, the iPhone not out until 2007, and the iPad not out until 2010. It was iTunes music sales. Their music sales volume doubled between March and July of that year alone:

    http://www.apple.com/pr/products/ipodhistory/

    Even Apple needs volume - it's just not in Hardware. Google, who doesn't even make much of their own hardware, needs volume - it's just not in Hardware, it's in Advertising. Even being the #1 laptop vendor (not quite niche really: http://techcrunch.com/2012/09/12/apple-is-number-1-in-u-s-notebook-sales/), that isn't a huge portion of their revenue. Apple quietly sweeps their iTunes revenue under the rug, wanting everyone just to see their hardware sales: they completely ignore it when publicly releasing sales figures. With the iPhone and iPad out now, they are the big revenue drivers today and iTunes takes more of a back seat, but it's still significant, and outweighs their PC sales. This sales chart was snapped right after the release of the iPad, and does show sales including iTunes revenue: http://www.asymco.com/2010/08/01/apple-sales-by-product-line/

    Volume is the key - even at low margins.

    Even if nVidia makes 100% profit on a Titan, at a $1000 price point they may sell 1,000, maybe 10,000....

    Even at 100% profit margin, that's $10M... for a 1B+ revenue company that's paying their CEO more than $25M per year (at least based on 2007 data, the most recent I could find), I wouldn't call it significant. And we all can guess that the profit margin is something under 100%. Titan is a marketing tool - sure they can drive revenue on products based on the same die, but those aren't "Titan". And there's a difference between making a profit and making money - you can sell one and make a "profit", but that isn't going to pay all your bills.
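
    Spelling out that arithmetic (the unit count and the 100% margin are the rough guesses from this post, not actual sales figures):

    ```python
    # The arithmetic above, spelled out. Unit count and margin are the post's
    # deliberately generous guesses, not actual Nvidia sales figures.
    price = 1_000                     # Titan launch price in USD
    units_sold = 10_000               # upper end of the guess above
    margin = 1.0                      # assume 100% of the price is profit

    profit = price * units_sold * margin
    company_revenue = 1_000_000_000   # "1B+ revenue company"
    print(f"${profit / 1e6:.0f}M profit, {profit / company_revenue:.1%} of ~$1B revenue")
    # -> $10M, about 1% of revenue: a halo/marketing product, not a revenue line.
    ```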

  • miguksaram Member Uncommon Posts: 835
    Massive amounts of small profits tend to trump small amounts of massive profit from a business perspective if you are talking long haul.  Case in point: Walmart.
  • Quizzical Member Legendary Posts: 25,347

    I'm not saying that volume doesn't matter.  Obviously, you'd like a high profit margin at high volume; that's worked out well for Intel and Apple lately.

    I am saying, however, that jumping into a commodity market isn't automatically lucrative just because it's high volume.  Elpida recently sold an awful lot of memory chips all the way into bankruptcy.

    The problem is that if you don't have any way to differentiate your product and convince people to pay a premium price for it, then the only way you can compete is to undercut on price.  When you've got lots of competitors trying to do exactly the same thing, you don't make much money that way.  In order for Nvidia to make good money off of Tegra, they need for customers to seek Tegra in particular, not just whatever ARM chip is cheapest that day.

    It's understandable that Nvidia decided to pursue Tegra in the first place.  Their problem is that their traditional markets of discrete and integrated graphics for desktops and laptops are shrinking.  Moving integrated graphics into the same chip as the CPU means that Nvidia is locked out of integrated graphics for desktops and laptops, as they don't have an x86 license.  Ever-increasing integrated graphics performance is eating up the market for low end discrete cards, too.  Those have traditionally been the big volume segments (needed by nearly everyone who isn't a gamer--and bought by some gamers, too), and Nvidia is locked out of them now.

    Rather than being reduced to selling mid-range to high end discrete video cards only--and praying that AMD doesn't figure out how to replace that by integrated graphics, too--Nvidia had to launch into new markets.  They have with Tegra, Tesla, and more recently, Grid and now Kayla.

    Those are all problematic in their own ways, however.  It's not clear that there is much of a market for Kayla at all; how many people are clamoring for a mini ITX board that runs Android?  How much of a demand is there for GPU virtualization with Grid?  Tesla is a high profit margin part, but has remained low volume much longer than Nvidia predicted.  Even if GPU compute does eventually take off, it's far from clear that Nvidia will be the beneficiary, as Intel and AMD have solid competing products.  (Xeon Phi is basically a GPU that can't do graphics.)

    The idea of Tegra was that, if Nvidia couldn't get an x86 license, Nvidia would use ARM, which they could get a license for.  The problem with mobile is that you need low power, which is something that Nvidia has traditionally been bad at.  Runaway power consumption meant that Tegra 1 may or may not have been used in any commercial products at all, while Tegra 2 barely did any better.  Nvidia has sold a lot of Tegra 3 chips by selling them for next to nothing, but that isn't necessarily a way to make a profit.  Prospects for Tegra 4 look pretty bleak, as everyone and his neighbor's dog is using ARM Cortex A15 cores now, often with a less dated graphics architecture than the slightly modified GeForce 7000 series that Nvidia is using for Tegra 4.

    The next two generations are Nvidia's real chance at differentiating Tegra from the rest of the ARM pack.  Tegra Logan will use Kepler graphics, which should allow Nvidia to readily scale graphical performance as high as they're willing to burn power for with a good, modern architecture.  No other ARM vendor can do that.  Tegra Parker will use both Maxwell graphics and Nvidia's own custom ARM cores, rather than off-the-shelf ARM Cortex A15 or A57 or whatever.

    This is my speculation, but I've long thought that Nvidia has ambitions of having Tegra-powered laptops and nettops.  While Nvidia has never been good at low power, they've long been good at high performance.  Laptops can handle a lot more power consumption than cell phones, so Nvidia could ramp up performance to have a product to compete with AMD Kabini ultrathins or Intel's Ultrabooks.  The downside is that Tegra can't run Windows 8, but only ARM-based operating systems such as Android or Linux.

    Indeed, Kayla could well power a nettop, though I'm not sure if Nvidia intends to use it for that or for embedded applications.  Kayla with Tegra 3 will basically be a piece of junk.  A Kayla successor running Tegra Logan or Parker could be an interesting product.  Nvidia wouldn't necessarily be meaningfully restricted on power consumption there.  Would there be much of a market for that?  Maybe not, but if there is, then Nvidia will be able to deliver a product that other ARM vendors can't.

  • Ridelynn Member Epic Posts: 7,383

    I do agree on the ways that nVidia has been trying to stay relevant, and I think you're also on the right path with the Tegra vision.

    I'm thinking more along the lines of a Razer Edge-type product (which isn't altogether dissimilar from a laptop, just a slightly different form factor). I also wouldn't be surprised to see it placed in some traditionally embedded applications: like driving the color display on a car radio (those are getting pretty fancy), or the OSD on a SmartTV - those situations aren't necessarily "high performance", but if you can offer the capabilities at a low enough price, and those applications aren't necessarily power-constrained, the manufacturers will find ways to use them (and market the crap out of it in the process).

    Think about how much better Sync would be if it could actually understand what you're saying...

  • Quizzical Member Legendary Posts: 25,347

    The problem is that Nvidia's only real advantage is if something needs a ton of GPU performance.  If it's leaning heavily on CPU performance, then everyone else can use the same ARM Cortex A15 cores that Nvidia can.  Even if you do need a lot of CPU performance, would Nvidia really be better than what Calxeda and Marvell offer there?  Would they even be able to keep pace with Calxeda and Marvell?  Even GPU compute isn't necessarily an advantage for Nvidia, as Kepler is rather bad at that outside of Titan.

    The problem for Nvidia with GPU performance for games is that the games that need a ton of GPU performance tend to be for Windows, which needs x86.  If you make an Android game that assumes that you have a lot of GPU performance, then today, exactly no one can play it.  Would that change if Nvidia started selling higher performance Tegra chips?  It would certainly take a while, and it would be hard to drive sales from it.

    And, of course, even if Nvidia did find a great market for their graphics in Tegra, there's nothing to stop AMD from getting an ARM license and doing the same thing, other than that it would take AMD a few years or so to bring products to market after that decision was made.  Actually, AMD already has an ARM license, but for tablets and laptops, seems more interested in pushing x86 than offering ARM.  And why not, since AMD already has a clearly differentiated product for those markets?  Temash and Kabini will probably soon make sense for quite a few people as exactly what you'd want even if you had an infinite budget.
