NVIDIA @ CES

Comments

  • Quizzical Member Legendary Posts: 25,355
    Originally posted by lizardbones

    Sometimes I think you are just trying to punch people with words. I'm going to leave all those words in there, even though anyone else who gets through them will have been punched in the brain.

    Wouldn't it make a lot more sense to have more than one player's game per GPU than to have multiple GPUs working on one player's game? That seems to be what they are attempting to do. Most games do not utilize 100% of a GPU's processing power, so being able to capitalize on that would lead to hardware and power savings, even if it doesn't lead to a huge performance improvement. I'm pretty sure their incredibly boring presentation talked about power and cost savings, not a huge performance increase.

    Another thing is they are saying what they have doesn't exist anywhere else. Using multiple GPUs per game already exists with the SLI stuff. Multiple people utilizing a single GPU doesn't exist outside of their hardware. It's virtualized hardware for the GPU. I've not even looked at this type of thing for servers, but when running virtual machines, the virtual GPU hardware is bad. Bad, bad, bad, bad, bad. If they can work out a system where virtual GPUs aren't garbage, they would have a new product. Much better than an old product that is being used in a suspicious way.

    Whether or not they can do it is another question entirely. You seem to have questions about whether even one of their GPUs will give decent performance, much less multiple players' games utilizing a single GPU.

     

    Sure, multiple things running on a single GPU simultaneously makes sense.  A Radeon HD 6970 could do two things at once, and at least some GCN cards can probably do more, so AMD is heading in this direction, too.  GPU virtualization in that way makes sense for some people.

    SLI and CrossFire use multiple GPUs to render a game, but they use alternate frame rendering.  Splitting a single frame across multiple GPUs simply isn't practical.  It's technically possible, but it would bring such a large performance hit that there's no point.  Well, maybe at ultra-high resolutions, if a game engine was aware that there were two GPUs and did a bunch of custom stuff on the CPU to balance the load between them, it could kind of work as an alternative to CrossFire or SLI.  But that really can't be done purely in video drivers or tacked onto existing games without making some very fundamental changes to the game engine.
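
    As a toy sketch of the difference (the GPU count and the render stub are made up for illustration; this is not how the actual drivers are written), alternate frame rendering hands each whole frame to one GPU, while split-frame rendering needs duplicated scene data and a merge step on every single frame:

    # Toy comparison of alternate frame rendering vs. splitting a frame.
    # Everything here is invented purely for illustration.
    NUM_GPUS = 2

    def render_on_gpu(gpu_id, frame_index, region="whole frame"):
        # Stand-in for real rendering work; just reports what would happen.
        return f"frame {frame_index}: {region} on GPU {gpu_id}"

    def alternate_frame_rendering(num_frames):
        # SLI/CrossFire style: frame N goes to GPU N mod NUM_GPUS, so no GPU
        # ever needs another GPU's partial results mid-frame.
        return [render_on_gpu(f % NUM_GPUS, f) for f in range(num_frames)]

    def split_frame_rendering(frame_index):
        # Splitting one frame: every GPU needs the full scene data, and the
        # slices must be merged every frame -- the expensive part.
        slices = [render_on_gpu(g, frame_index, region=f"slice {g}")
                  for g in range(NUM_GPUS)]
        return " + ".join(slices)

    print(alternate_frame_rendering(4))
    print(split_frame_rendering(0))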

    -----

    One other thought on why they surely aren't planning on splitting a single frame among multiple GPUs:  if they were going to do that, then why not use a higher end GPU instead?  The GPUs in a Grid K1 are basically half of a GK107.  The bandwidth you'd need to make that perform comparably for a single frame might be possible, but it would be so outlandishly expensive that it would be completely stupid to do it that way instead of just using a GK107 and making the entire inter-GPU communication problem vanish.
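
    For a rough sense of the scale involved (approximate retail-card figures, nothing Nvidia has published for Grid): a GK107-class card gets something like 80 GB/s from its own GDDR5, while a PCIe 2.0 x16 link moves only about 8 GB/s, so two GPUs trying to share per-frame data over the bus would have roughly a tenth of the bandwidth one chip expects from its local memory.

    # Back-of-the-envelope comparison with assumed, approximate numbers.
    def mem_bandwidth_gb_s(bus_width_bits, effective_rate_gt_s):
        # bytes per second = (bus width in bytes) * transfers per second
        return (bus_width_bits / 8) * effective_rate_gt_s

    gk107_local_vram = mem_bandwidth_gb_s(128, 5.0)   # ~80 GB/s on a typical GDDR5 card
    pcie2_x16_link   = 8.0                            # ~8 GB/s per direction

    print(f"local VRAM ~{gk107_local_vram:.0f} GB/s, inter-GPU link ~{pcie2_x16_link:.0f} GB/s")
    print(f"ratio: ~{gk107_local_vram / pcie2_x16_link:.0f}x")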

  • Dilweed Member Uncommon Posts: 222

    Dudes, I've been a member of this forum for almost 10 years.

    I'm reading this topic, I'm kinda hardware savvy myself, I'm halfway through and I want to congratulate you guys on this thread +1

    Most informative thread (I prolly missed some) I have ever seen on this forum, keep it up, thanks :)

  • TheLizardbones Member Common Posts: 10,910


    Originally posted by Quizzical
    Originally posted by lizardbones Sometimes I think you are just trying to punch people with words. I'm going to leave all those words in there, even though anyone else who gets through them will have been punched in the brain. Wouldn't it make a lot more sense to have more than one player's game per GPU than to have multiple GPUs working on one player's game? That seems to be what they are attempting to do. Most games do not utilize 100% of a GPUs processing, so being able to capitalize on that would lead to a hardware and power savings, even if it doesn't lead to a huge performance improvement. I'm pretty sure their incredibly boring presentation talked about a power and cost savings, not a huge performance increase. Another thing is they are saying what they have doesn't exist anywhere else. Using multiple GPUs per game already exists with the SLI stuff. Multiple people utilizing a single GPU doesn't exist outside of their hardware. It's virtualized hardware for the GPU. I've not even looked at this type of thing for servers, but when running virtual machines, the virtual GPU hardware is bad. Bad, bad, bad, bad, bad. If they can work out a system where virtual GPUs are garbage, they would have a new product. Much better than an old product that is being used in a suspicious way. Whether or not they can do it is another question entirely. You seem to have questions about whether even one of their GPUs will give decent performance, much less multiple player's games utilizing a single GPU.  
    Sure, multiple things running on a single GPU simultaneously makes sense.  A Radeon HD 6970 could do two things at once, and at least some GCN cards can probably do more, so AMD is heading in this direction, too.  GPU virtualization in that way makes sense for some people.

    SLI and CrossFire use multiple GPUs to render a game, but they use alternate frame rendering.  Splitting a single frame across multiple GPUs simply isn't practical.  It's technically possible, but it would bring such a large performance hit that there's no point.  Well, maybe at ultra-high resolutions, if a game engine was aware that there were two GPUs and did a bunch of custom stuff on the CPU to balance the load between them, it could kind of work as an alternative to CrossFire or SLI.  But that really can't be done purely in video drivers or tacked onto existing games without making some very fundamental changes to the game engine.

    -----

    One other thought on why they surely aren't planning on splitting a single frame among multiple GPUs:  if they were going to do that, then why not use a higher end GPU instead?  The GPUs in a Grid K1 are basically half of a GK107.  The bandwidth you'd need to make that perform comparably for a single frame might be possible, but it would be so outlandishly expensive that it would be completely stupid to do it that way instead of just using a GK107 and making the entire inter-GPU communication problem vanish.




    The best case scenario sounds like this: when the system has to render a frame, it picks the next available GPU to render that frame. The frames are rendered quickly because there's always another physical GPU to render the next frame. Physical GPUs aren't assigned to any specific game or user, only to the next available frame. The game, however, sees one virtual GPU and doesn't have to worry about any of this or be rewritten to function. They are doing the usual rendering, just very efficiently.
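
    A toy model of that idea, just for concreteness (the pool, games, and scheduling below are invented, not anything Nvidia has described in detail): many games share one pool of physical GPUs, each frame goes to whichever GPU is free, and every game only ever sees a single virtual GPU.

    # Toy "next available GPU" dispatcher. Purely illustrative; the real
    # scheduler, if it works anything like this, lives inside Nvidia's stack.
    from collections import deque

    class GpuPool:
        def __init__(self, num_gpus):
            self.idle = deque(range(num_gpus))

        def render(self, game, frame):
            gpu = self.idle.popleft()      # take the next free physical GPU
            result = f"{game} frame {frame} rendered on GPU {gpu}"
            self.idle.append(gpu)          # hand the GPU back to the pool
            return result

    pool = GpuPool(num_gpus=4)
    for frame in range(3):
        for game in ("game A", "game B"):  # two players, one virtual GPU each
            print(pool.render(game, frame))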

    I think the big deal is virtualizing the GPUs.

    If they can successfully virtualize GPUs so that the virtual GPU seen by a game isn't complete garbage, people could boot up their Linux system, and then start a virtual Windows machine to run games, and it would actually work. I'll admit, that's a big stretch since they've targeted servers, but that's one of the unintended results I'd like to see.

    I can not remember winning or losing a single debate on the internet.

  • Quizzical Member Legendary Posts: 25,355
    Originally posted by lizardbones

     


    Originally posted by Quizzical

    Originally posted by lizardbones Sometimes I think you are just trying to punch people with words. I'm going to leave all those words in there, even though anyone else who gets through them will have been punched in the brain. Wouldn't it make a lot more sense to have more than one player's game per GPU than to have multiple GPUs working on one player's game? That seems to be what they are attempting to do. Most games do not utilize 100% of a GPUs processing, so being able to capitalize on that would lead to a hardware and power savings, even if it doesn't lead to a huge performance improvement. I'm pretty sure their incredibly boring presentation talked about a power and cost savings, not a huge performance increase. Another thing is they are saying what they have doesn't exist anywhere else. Using multiple GPUs per game already exists with the SLI stuff. Multiple people utilizing a single GPU doesn't exist outside of their hardware. It's virtualized hardware for the GPU. I've not even looked at this type of thing for servers, but when running virtual machines, the virtual GPU hardware is bad. Bad, bad, bad, bad, bad. If they can work out a system where virtual GPUs are garbage, they would have a new product. Much better than an old product that is being used in a suspicious way. Whether or not they can do it is another question entirely. You seem to have questions about whether even one of their GPUs will give decent performance, much less multiple player's games utilizing a single GPU.  
    Sure, multiple things running on a single GPU simultaneously makes sense.  A Radeon HD 6970 could do two things at once, and at least some GCN cards can probably do more, so AMD is heading in this direction, too.  GPU virtualization in that way makes sense for some people.

     

    SLI and CrossFire use multiple GPUs to render a game, but they use alternate frame rendering.  Splitting a single frame across multiple GPUs simply isn't practical.  It's technically possible, but it would bring such a large performance hit that there's no point.  Well, maybe at ultra-high resolutions, if a game engine was aware that there were two GPUs and did a bunch of custom stuff on the CPU to balance the load between them, it could kind of work as an alternative to CrossFire or SLI.  But that really can't be done purely in video drivers or tacked onto existing games without making some very fundamental changes to the game engine.

    -----

    One other thought on why they surely aren't planning on splitting a single frame among multiple GPUs:  if they were going to do that, then why not use a higher end GPU instead?  The GPUs in a Grid K1 are basically half of a GK107.  The bandwidth you'd need to make that perform comparably for a single frame might be possible, but it would be so outlandishly expensive that it would be completely stupid to do it that way instead of just using a GK107 and making the entire inter-GPU communication problem vanish.



    The best case scenario sounds like the system has to render a frame, and picks the next available GPU to render that frame. The frames are rendered quickly because there's always another physical GPU to render the next frame. Physical GPUs aren't assigned to any specific game or user, only the next available frame. The game, however, sees one virtual GPU and the game doesn't have to worry about it too much or be rewritten to function. They are doing the usual rendering, just very efficiently.

    I think the big deal is virtualizing the GPUs.

    If they can successfully virtualize GPUs so that the virtual GPU seen by a game isn't complete garbage, people could boot up their Linux system, and then start a virtual Windows machine to run games, and it would actually work. I'll admit, that's a big stretch since they've targeted servers, but that's one of the unintended results I'd like to see.

     

    Picking a GPU arbitrarily when you decide it's time to render a frame isn't practical, either, at least for gaming.  What do you do about the hundreds of MB of data buffered in one particular GPU's video memory that you need to render that particular frame?  If you pick a different GPU, then you have to copy all of the data over there.  Do that every frame and you might end up spending as much time copying buffered data around as you actually do rendering frames.  If you only switch GPUs once in a while, or if you're running an office application that doesn't need much data buffered, then it's more viable.
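
    Rough numbers make the point (assumed working-set size and bus speed, nothing measured): copying a few hundred MB over the bus every frame blows past a 60 fps frame budget all by itself.

    # Assumed figures: 500 MB of buffered textures/geometry, ~8 GB/s PCIe 2.0 x16.
    buffered_mb     = 500
    pcie_gb_per_s   = 8.0
    frame_budget_ms = 1000 / 60            # ~16.7 ms per frame at 60 fps

    copy_ms = (buffered_mb / 1024) / pcie_gb_per_s * 1000
    print(f"copy time ~{copy_ms:.0f} ms vs frame budget ~{frame_budget_ms:.1f} ms")
    # ~61 ms just to move the data -- several whole frames' worth of time.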

    Booting up Linux and then running a virtual Windows machine in it to play games does strike me as the sort of thing that ought to be possible sometime soonish.  Though for that, you'd just want an ordinary GeForce or Radeon card, not a Grid card.
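
    For what it's worth, the rough pieces for that already exist on Linux: KVM plus VFIO PCI passthrough can hand an entire GPU to a Windows guest. A hypothetical launcher might look like the sketch below; the PCI address, memory size, and disk image are placeholders, and whether any particular GeForce or Radeon card actually behaves under passthrough is very hit-or-miss.

    # Hypothetical QEMU/KVM launcher for a Windows guest with a passed-through GPU.
    # All specifics (PCI address, RAM size, disk image) are placeholders.
    import subprocess

    qemu_cmd = [
        "qemu-system-x86_64",
        "-enable-kvm",
        "-m", "8192",                        # 8 GB of guest RAM
        "-cpu", "host",
        "-device", "vfio-pci,host=01:00.0",  # placeholder PCI address of the GPU
        "-drive", "file=windows.qcow2,format=qcow2",
    ]

    subprocess.run(qemu_cmd, check=True)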

  • botrytis Member Rare Posts: 3,363
    Originally posted by Quizzical
    Originally posted by lizardbones

     


    Originally posted by Quizzical

    Originally posted by lizardbones No, one box would give you 35 entry level gpus. So twenty boxes give you 700 customers, each with entry level gpu performance. This is according to them. ** edit ** So you have the equivalent of an entry level gpu per customer, just with far fewer physical gpus. On Live needed a physical gpu per customer.  
    One physical Nvidia Grid card has four physical GPUs in it, and they're the lowest end discrete GPU of the generation--from either major graphics vendor.  They'll probably be clocked down from GeForce cards, and might even be paired with DDR3 memory instead of GDDR5.  That's not going to get you the performance of 35 entry level GPUs at once, unless by "entry-level", you mean something ancient like GeForce G 210 or Radeon HD 4350.

    I have no idea. What they are saying is each box has 24 GPUs, which is a substantial cost improvement over having 24 discrete video cards. These are GPUs developed specifically for the GRID servers, not their regular GPUs. They've combined this with software that allows for load balancing and virtual hardware stacks(?), which means one GPU can support several users at a hardware cost reduction, and also a substantial power requirement reduction.

     

    Nvidia now says that there are two such cards.

    http://www.nvidia.com/object/grid-boards.html

    The lower end version (Grid K1) is four GK107 chips paired with DDR3 memory in a 130 W TDP.  That means it's basically four of these, except clocked a lot lower:

    http://www.newegg.com/Product/Product.aspx?Item=N82E16814130818

    That's stupidly overpriced, by the way; on a price/performance basis, you could maybe justify paying $70, but not more than a far superior Radeon HD 7750 costs.  Oh, and that's before you turn the clock speeds way down to save on power consumption.

    The higher end version (Grid K2) is two GK104 GPUs in a 225 W TDP.  That means basically two 4 GB GeForce GTX 680s, except clocked a lot lower, in order to have two of them on a card barely use more power than a single "real" GTX 680.

    Now yes, the Grid cards might make a lot of sense for a service like OnLive.  (AMD either offers or will soon offer Trinity-based Opteron chips with integrated graphics that might also make a ton of sense for something like OnLive.)  What doesn't make sense is for customers to pay for any streaming service based on the Grid K1 cards.  But OnLive was always targeted mainly at the clueless, so nothing changes there.

    They aren't custom chips.  You don't do custom chips for low-volume parts.  Nvidia doesn't even do custom chips for Quadro cards, and that's a huge cash cow.  A different bin of existing chips, yes, but that's far from doing a custom chip.  It might be a special salvage bin with something fused off that the consumer cards need.  Nvidia even explicitly says that they're Kepler GPU chips.

    These boards were used in the latest supercomputer -  http://www.engadget.com/2012/11/12/titan-supercomputer-leads-latest-top-500-list-as-newly-available/  so don't say they are not good.


  • Quizzical Member Legendary Posts: 25,355
    Originally posted by botrytis
    Originally posted by Quizzical
    Originally posted by lizardbones

     


    Originally posted by Quizzical

    Originally posted by lizardbones No, one box would give you 35 entry level gpus. So twenty boxes give you 700 customers, each with entry level gpu performance. This is according to them. ** edit ** So you have the equivalent of an entry level gpu per customer, just with far fewer physical gpus. On Live needed a physical gpu per customer.  
    One physical Nvidia Grid card has four physical GPUs in it, and they're the lowest end discrete GPU of the generation--from either major graphics vendor.  They'll probably be clocked down from GeForce cards, and might even be paired with DDR3 memory instead of GDDR5.  That's not going to get you the performance of 35 entry level GPUs at once, unless by "entry-level", you mean something ancient like GeForce G 210 or Radeon HD 4350.

    I have no idea. What they are saying is each box has 24 GPUs, which is a substantial cost improvement over having 24 discrete video cards. These are GPUs developed specifically for the GRID servers, not their regular GPUs. They've combined this with software that allows for load balancing and virtual hardware stacks(?), which means one GPU can support several users at a hardware cost reduction, and also a substantial power requirement reduction.

     

    Nvidia now says that there are two such cards.

    http://www.nvidia.com/object/grid-boards.html

    The lower end version (Grid K1) is four GK107 chips paired with DDR3 memory in a 130 W TDP.  That means it's basically four of these, except clocked a lot lower:

    http://www.newegg.com/Product/Product.aspx?Item=N82E16814130818

    That's stupidly overpriced, by the way; on a price/performance basis, you could maybe justify paying $70, but not more than a far superior Radeon HD 7750 costs.  Oh, and that's before you turn the clock speeds way down to save on power consumption.

    The higher end version (Grid K2) is two GK104 GPUs in a 225 W TDP.  That means basically two 4 GB GeForce GTX 680s, except clocked a lot lower, in order to have two of them on a card barely use more power than a single "real" GTX 680.

    Now yes, the Grid cards might make a lot of sense for a service like OnLive.  (AMD either offers or will soon offer Trinity-based Opteron chips with integrated graphics that might also make a ton of sense for something like OnLive.)  What doesn't make sense is for customers to pay for any streaming service based on the Grid K1 cards.  But OnLive was always targeted mainly at the clueless, so nothing changes there.

    They aren't custom chips.  You don't do custom chips for low-volume parts.  Nvidia doesn't even do custom chips for Quadro cards, and that's a huge cash cow.  A different bin of existing chips, yes, but that's far from doing a custom chip.  It might be a special salvage bin with something fused off that the consumer cards need.  Nvidia even explicitly says that they're Kepler GPU chips.

    These boards were used in the latest supercomputer -  http://www.engadget.com/2012/11/12/titan-supercomputer-leads-latest-top-500-list-as-newly-available/  so don't say they are not good.

    Are you sure that it's using Grid cards as opposed to Tesla?  The latter is Nvidia's variant on the same GK104 GPU chip that was built with supercomputers in mind.

  • TheLizardbones Member Common Posts: 10,910


    Originally posted by Quizzical
    Originally posted by lizardbones   Originally posted by Quizzical Originally posted by lizardbones Sometimes I think you are just trying to punch people with words. I'm going to leave all those words in there, even though anyone else who gets through them will have been punched in the brain. Wouldn't it make a lot more sense to have more than one player's game per GPU than to have multiple GPUs working on one player's game? That seems to be what they are attempting to do. Most games do not utilize 100% of a GPUs processing, so being able to capitalize on that would lead to a hardware and power savings, even if it doesn't lead to a huge performance improvement. I'm pretty sure their incredibly boring presentation talked about a power and cost savings, not a huge performance increase. Another thing is they are saying what they have doesn't exist anywhere else. Using multiple GPUs per game already exists with the SLI stuff. Multiple people utilizing a single GPU doesn't exist outside of their hardware. It's virtualized hardware for the GPU. I've not even looked at this type of thing for servers, but when running virtual machines, the virtual GPU hardware is bad. Bad, bad, bad, bad, bad. If they can work out a system where virtual GPUs are garbage, they would have a new product. Much better than an old product that is being used in a suspicious way. Whether or not they can do it is another question entirely. You seem to have questions about whether even one of their GPUs will give decent performance, much less multiple player's games utilizing a single GPU.  
    Sure, multiple things running on a single GPU simultaneously makes sense.  A Radeon HD 6970 could do two things at once, and at least some GCN cards can probably do more, so AMD is heading in this direction, too.  GPU virtualization in that way makes sense for some people.   SLI and CrossFire use multiple GPUs to render a game, but they use alternate frame rendering.  Splitting a single frame across multiple GPUs simply isn't practical.  It's technically possible, but it would bring such a large performance hit that there's no point.  Well, maybe at ultra-high resolutions, if a game engine was aware that there were two GPUs and did a bunch of custom stuff on the CPU to balance the load between them, it could kind of work as an alternative to CrossFire or SLI.  But that really can't be done purely in video drivers or tacked onto existing games without making some very fundamental changes to the game engine. ----- One other thought on why they surely aren't planning on splitting a single frame among multiple GPUs:  if they were going to do that, then why not use a higher end GPU instead?  The GPUs in a Grid K1 are basically half of a GK107.  The bandwidth you'd need to make that perform comparably for a single frame might be possible, but it would be so outlandishly expensive that it would be completely stupid to do it that way instead of just using a GK107 and making the entire inter-GPU communication problem vanish.
    The best case scenario sounds like the system has to render a frame, and picks the next available GPU to render that frame. The frames are rendered quickly because there's always another physical GPU to render the next frame. Physical GPUs aren't assigned to any specific game or user, only the next available frame. The game, however, sees one virtual GPU and the game doesn't have to worry about it too much or be rewritten to function. They are doing the usual rendering, just very efficiently. I think the big deal is virtualizing the GPUs. If they can successfully virtualize GPUs so that the virtual GPU seen by a game isn't complete garbage, people could boot up their Linux system, and then start a virtual Windows machine to run games, and it would actually work. I'll admit, that's a big stretch since they've targeted servers, but that's one of the unintended results I'd like to see.  
    Picking a GPU arbitrarily when you decide it's time to render a frame isn't practical, either, at least for gaming.  What do you do about the hundreds of MB of data buffered in one particular GPU's video memory that you need to render that particular frame?  If you pick a different GPU, then you have to copy all of the data over there.  Do that every frame and you might end up spending as much time copying buffered data around as you actually do rendering frames.  If you only switch GPUs once in a while, or if you're running an office application that doesn't need much data buffered, then it's more viable.

    Booting up Linux and then running a virtual Windows machine in it to play games does strike me as the sort of thing that ought to be possible sometime soonish.  Though for that, you'd just want an ordinary GeForce or Radeon card, not a Grid card.




    Some more Googling turns up this tidbit from Joystiq:

    According to Nvidia, a single Grid server can enable up to 24 HD quality game streams. At CES Nvidia showcased the Grid gaming system, which incorporates 20 Grid servers into a single rack. Nvidia says the rack is capable of producing 36 times the amount of HD-quality game streams as 'first-generation cloud gaming systems.'


    Since there are 24 GPUs per unit, and each unit can put out 24 HD quality game streams, it seems like each player gets a dedicated GPU. The cost savings is in not having to buy a separate, physical video card per player. It's still a GPU per player, but the system is designed from the ground up around the idea rather than using off the shelf hardware.
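
    Running the quoted numbers (all Nvidia marketing claims, not independent figures) backs that up:

    # 24 GPUs and 24 streams per server, 20 servers per rack -- Nvidia's claims.
    gpus_per_server    = 24
    streams_per_server = 24
    servers_per_rack   = 20

    streams_per_rack = streams_per_server * servers_per_rack   # 480
    gpus_per_stream  = gpus_per_server / streams_per_server    # 1.0 -> one GPU per player

    print(f"streams per rack: {streams_per_rack}")
    print(f"GPUs per stream:  {gpus_per_stream:.0f}")
    # The claimed "36x" would put a first-generation rack at roughly 480 / 36 = ~13 streams.
    print(f"implied first-gen streams per rack: ~{streams_per_rack / 36:.0f}")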

    Another thing to note is that their presentation showed the Shield device hooking into the GRID cloud, not just the user's PC. That might be a big deal.

    I've been waiting for virtualized graphics for a loooong time. I set my Linux machine aside so I could play video games again and really, that's the only reason I don't run it now.

    I can not remember winning or losing a single debate on the internet.

  • Quizzical Member Legendary Posts: 25,355
    Originally posted by lizardbones

    According to Nvidia, a single Grid server can enable up to 24 HD quality game streams. At CES Nvidia showcased the Grid gaming system, which incorporates 20 Grid servers into a single rack. Nvidia says the rack is capable of producing 36 times the amount of HD-quality game streams as 'first-generation cloud gaming systems.'


    Since there are 24 GPUs per unit, and each unit can put out 24 HD quality game streams, it seems like each player gets a dedicated GPU. The cost savings is in not having to buy a separate, physical video card per player. It's still a GPU per player, but the system is designed from the ground up around the idea rather than using off the shelf hardware.

    Another thing to note is that their presentation showed the Shield device hooking into the GRID cloud, not just the user's PC. That might be a big deal.

     

    A cynic would point out that 36 times zero is still zero.

    A demonstration doesn't mean much in situations like this, as it doesn't let you feel input latency.  Remember Intel showing off their new graphics running a game and it ended up that it was just running a video of a game playing?

  • TheLizardbones Member Common Posts: 10,910


    Originally posted by Quizzical

    Originally posted by lizardbones

    According to Nvidia, a single Grid server can enable up to 24 HD quality game streams. At CES Nvidia showcased the Grid gaming system, which incorporates 20 Grid servers into a single rack. Nvidia says the rack is capable of producing 36 times the amount of HD-quality game streams as 'first-generation cloud gaming systems.'
    Since there are 24 GPUs per unit, and each unit can put out 24 HD quality game streams, it seems like each player gets a dedicated GPU. The cost savings is in not having to buy a separate, physical video card per player. It's still a GPU per player, but the system is designed from the ground up around the idea rather than using off the shelf hardware. Another thing to note is that their presentation showed the Shield device hooking into the GRID cloud, not just the user's PC. That might be a big deal.  
    A cynic would point out that 36 times zero is still zero.

    A demonstration doesn't mean much in situations like this, as it doesn't let you feel input latency.  Remember Intel showing off their new graphics running a game and it ended up that it was just running a video of a game playing?



    Heh.

    Well, they've sold it to six different companies around the world, so if it works, I'm sure we'll know soon enough. Of course Nvidia is going to say it's awesome, but regular people are going to use the service and I expect those people will probably have things like Twitter and what not.

    ** edit **

    I'm not a cynic, but I reserve the right to change my mind at the drop of a hat.

    I can not remember winning or losing a single debate on the internet.

  • Quizzical Member Legendary Posts: 25,355
    Originally posted by lizardbones

     


    Originally posted by Quizzical

    Originally posted by lizardbones

    According to Nvidia, a single Grid server can enable up to 24 HD quality game streams. At CES Nvidia showcased the Grid gaming system, which incorporates 20 Grid servers into a single rack. Nvidia says the rack is capable of producing 36 times the amount of HD-quality game streams as 'first-generation cloud gaming systems.'
    Since there are 24 GPUs per unit, and each unit can put out 24 HD quality game streams, it seems like each player gets a dedicated GPU. The cost savings is in not having to buy a separate, physical video card per player. It's still a GPU per player, but the system is designed from the ground up around the idea rather than using off the shelf hardware. Another thing to note is that their presentation showed the Shield device hooking into the GRID cloud, not just the user's PC. That might be a big deal.  
    A cynic would point out that 36 times zero is still zero.

     

    A demonstration doesn't mean much in situations like this, as it doesn't let you feel input latency.  Remember Intel showing off their new graphics running a game and it ended up that it was just running a video of a game playing?



    Heh.

    Well, they've sold it to six different companies around the world, so if it works, I'm sure we'll know soon enough. Of course Nvidia is going to say it's awesome, but regular people are going to use the service and I expect those people will probably have things like Twitter and what not.

    ** edit **

    I'm not a cynic, but I reserve the right to change my mind at the drop of a hat.

     

    Sold it to six companies, but to do what exactly?  If companies are buying it for graphics virtualization in a corporate environment where you don't have to do much more than display the desktop and office applications, that's a long way away from streaming games over the Internet.  Even 1/9 of a fairly weak GPU surely beats no GPU.

    And even if companies are buying it to stream games over the Internet, OnLive has bought a bunch of video cards for that in the past, too.  We saw how that turned out.

  • TheLizardbones Member Common Posts: 10,910


    Originally posted by Quizzical
    Originally posted by lizardbones   Originally posted by Quizzical Originally posted by lizardbones According to Nvidia, a single Grid server can enable up to 24 HD quality game streams. At CES Nvidia showcased the Grid gaming system, which incorporates 20 Grid servers into a single rack. Nvidia says the rack is capable of producing 36 times the amount of HD-quality game streams as 'first-generation cloud gaming systems.'
    Since there are 24 GPUs per unit, and each unit can put out 24 HD quality game streams, it seems like each player gets a dedicated GPU. The cost savings is in not having to buy a separate, physical video card per player. It's still a GPU per player, but the system is designed from the ground up around the idea rather than using off the shelf hardware. Another thing to note is that their presentation showed the Shield device hooking into the GRID cloud, not just the user's PC. That might be a big deal.  
    A cynic would point out that 36 times zero is still zero.   A demonstration doesn't mean much in situations like this, as it doesn't let you feel input latency.  Remember Intel showing off their new graphics running a game and it ended up that it was just running a video of a game playing?
    Heh. Well, they've sold it to six different companies around the world, so if it works, I'm sure we'll know soon enough. Of course Nvidia is going to say it's awesome, but regular people are going to use the service and I expect those people will probably have things like Twitter and what not. ** edit ** I'm not a cynic, but I reserve the right to change my mind at the drop of a hat.  
    Sold it to six companies, but to do what exactly?  If companies are buying it for graphics virtualization in a corporate environment where you don't have to do much more than display the desktop and office applications, that's a long way away from streaming games over the Internet.  Even 1/9 of a fairly weak GPU surely beats no GPU.

    And even if companies are buying it to stream games over the Internet, OnLive has bought a bunch of video cards for that in the past, too.  We saw how that turned out.




    One of the companies is Ubitus, which is working with Google to provide cloud-based gaming. Another is named PlayCast, which certainly sounds like a gaming company. So at least one of them (Ubitus) will be providing cloud-based gaming services for sure.

    I think the key selling point is the cost. This is supposed to be cheaper than what OnLive tried to do. Not only are the systems cheaper, but they are fairly energy efficient as well. You get more video power than the equivalent number of Xbox 360s, but at 1/5th the power consumption. OnLive struggled in part because it was trying something new with nearly off-the-shelf hardware.

    That doesn't mean it'll work. The service isn't going to be free.

    I can not remember winning or losing a single debate on the internet.
