

Nvidia launches a new GPU--but not for graphics

Quizzical Member LegendaryPosts: 22,235

http://techreport.com/news/27373/tesla-k80-packs-dual-gk210-gpus

It's built for Tesla cards, not GeForce.  Nvidia is packing two of them on a board, which means severely reduced clock speeds.

The most interesting thing to me is this:

"The GK210 also has double the register file size (512KB) and twice as much L1 cache/shared memory (128KB) per SMX as the GK110B. The additional local storage should allow the SMX to achieve more constant utilization in GPU-computing workloads."

Doubling the registers and L1 cache per SMX means this is new silicon, not just rebranding an old chip or even a respin.  While very much derivative of previous Kepler chips, this has to be a new chip entirely.
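To put a rough number on what the register-file doubling buys: a back-of-envelope occupancy sketch in Python. The 64-registers-per-thread kernel here is hypothetical; real kernels vary, and occupancy is also capped by shared memory and the hardware thread limit.

```python
# How many threads can be resident per SMX before the register file
# runs out?  GK110B has a 256 KB register file per SMX; GK210 doubles
# that to 512 KB.  Assume a (hypothetical) register-hungry kernel
# needing 64 32-bit registers per thread.
REG_BYTES = 4            # each register is 32 bits
REGS_PER_THREAD = 64     # assumed for this example; varies by kernel

def max_resident_threads(regfile_kb: int) -> int:
    return (regfile_kb * 1024) // (REGS_PER_THREAD * REG_BYTES)

print(max_resident_threads(256))  # GK110B: 1024 threads per SMX
print(max_resident_threads(512))  # GK210:  2048 threads per SMX
```

With more threads resident, the scheduler has more warps to switch to while others stall on memory, which is presumably what "more consistent utilization" amounts to.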

I'm not sure how useful the extra registers and cache will be, though that presumably varies wildly by workload.  If a workload is very heavily limited by register or L1 cache capacity, the new Tesla K80 could easily triple the performance of the Tesla K40--with the bulk of that improvement coming because of two GPUs instead of one.  But look at Nvidia's reference benchmarks:

http://www.tomshardware.com/news/nvidia-gk210-tesla-k80,28086.html

It's a little faster than the K40 in some benchmarks and a lot faster in others, but nowhere does it ever double the K40's performance, let alone triple it.  One would think that if the extra registers or cache were essential, Nvidia could have tracked down a benchmark that would show off the benefits.

It's also interesting that this is a very different direction from what they took with Maxwell, where all of the available Maxwell-based GeForce cards have a massive 2 MB L2 cache.

-----

So what does this mean for gamers?  Well, you're definitely not going to buy that GPU chip, unless you also do GPU-compute work, or perhaps Nvidia decides to sell salvage parts of it as a GeForce GTX 780.  With only 13 SMXes, it can't be a GTX 780 Ti.

I'd interpret this as meaning that there isn't a huge Maxwell GPU chip coming soon, and it's far from guaranteed that there will ever be one.  Designing a new chip is expensive, and Nvidia wouldn't bother with yet another new Kepler chip if a huge Maxwell chip for Tesla cards were only a few months away.  Launching this now is entirely consistent with rumors that the big Maxwell chip is coming in 2016, though that's far enough away that Nvidia might want to move to some successor architecture by then, as Maxwell would be two years old.

Comments

  • Torval Member LegendaryPosts: 20,166

    So is this intended more for workstations that are used for some sort of compute or rendering work? If not, who do you think the intended audience is? Is this a completely new direction for their chip design? Why? Anyway, it's an interesting overview. Thanks.

    Fedora - A modern, free, and open source Operating System. https://getfedora.org/

    traveller, interloper, anomaly, iteration


  • Ridelynn Member EpicPosts: 7,076


    Originally posted by Torvaldr
    So is this intended more for workstations that are used for some sort of compute or rendering work? If not, who do you think the intended audience is? Is this a completely new direction for their chip design? Why? Anyway, it's an interesting overview. Thanks.

    Tesla is the brand name associated with GPU Compute products.

    This is aimed at large server farms and supercomputers. It has almost no relevance to gaming or workstations (in fact, if you look, the cards don't even have any output ports), and doesn't really say anything about the direction of future chip design.

  • Superman0X Member RarePosts: 2,221
    These are for cryptocurrency... like Bitcoins.
  • Quizzical Member LegendaryPosts: 22,235

    Suppose that you need massive amounts of computational power to run a relative handful of code an enormous number of times in parallel.  For this, you probably want a supercomputer of some sort.  A decade ago, you'd pack in hundreds or thousands of Xeon or Opteron processors and let them do their thing.

    If your code fits what a GPU can do--little to no branching, little register space and cache per thread, massively SIMD, etc.--then you can buy some Tesla cards and get performance that completely blows away the top of the line Xeon or Opteron processors.  And if your code doesn't fit what a GPU can do, then you don't buy these because they'll be terrible for your needs.  The classic case of code that fits what a GPU can do is graphics, of course, but there are some non-graphical applications that also fit GPUs nicely.
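    As a concrete illustration of the "little to no branching" point: threads in a GPU warp execute in lockstep, so divergent if/else paths serialize, with half the warp idle while the other half runs. The usual fix is predication, computing both results and selecting between them arithmetically. A toy sketch, with plain Python standing in for per-thread GPU code:

```python
def branchy(x: float) -> float:
    # Divergent form: lanes taking different paths would serialize
    # on a GPU, halving throughput in the worst case.
    if x >= 0.0:
        return x * 2.0
    return -x

def predicated(x: float) -> float:
    # Branch-free form: every lane executes the identical instruction
    # stream; the comparison just becomes a 0.0/1.0 predicate.
    p = float(x >= 0.0)
    return p * (x * 2.0) + (1.0 - p) * (-x)
```

    Both give the same answer; the second is the shape GPU compilers and GPU-friendly code tend toward.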

    Superman mentions bitcoin mining, and while it is true that GPUs can mine bitcoins massively faster than CPUs can, that's not an application of these anymore.  If you know that this particular little chunk of code is the only thing that will ever have to run on a chip, you can do much better yet by making a custom ASIC that hard-codes into silicon the exact code that you need.  Those exist for bitcoins, and they're massively faster than GPUs at bitcoin mining.
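    For reference, the "particular little chunk of code" in bitcoin's case is double SHA-256 over a block header, which is exactly what mining ASICs hard-code into silicon. A minimal Python sketch; the header contents and target handling are simplified for illustration:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin's proof-of-work hash: SHA-256 applied twice.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def meets_target(header: bytes, target: int) -> bool:
    # A header "wins" when its hash, read as a 256-bit integer,
    # falls below the network's difficulty target.  Miners grind
    # through nonce values in the header until this is true.
    return int.from_bytes(double_sha256(header), "little") < target
```

    An ASIC does nothing but this inner loop, billions of times per second; a GPU runs it as software, which is why a bug fix on a GPU is just a code change.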

    One downside of such a custom ASIC is that it can't do anything else.  Another is that it costs millions to design the chip and pay for masks to fab it.  After that's done, if you find a bug in your code, the millions you already spent are money down the drain and you're stuck paying millions again for a new set of masks and new chips to fix the bug.  On a GPU, if you find a bug in your code, you can fix it in software and carry on.
