

Did IBM just reset computing?

drbaltazar Member UncommonPosts: 7,856
Data-centric computing is their new goal. The aim? To make data static and processing dynamic. I was going to ignore this Engadget thread, but it's IBM, your grandfather's IBM, the sleeping dragon of tech! It looks like the giant just woke up. Damn, can it really even be possible?

Comments

  • Rusque Member RarePosts: 2,785

    Sounds nifty. Here's the press release from IBM, since the OP did not provide a link: https://www-03.ibm.com/press/us/en/pressrelease/45387.wss


  • Phry Member LegendaryPosts: 11,004
    I suspect it's more a case of theoretical rather than actual, and I doubt whether it will be heading our way any time soon, give or take a decade. Sounds interesting, though, and if Nvidia do manage to make it work, it will give them total dominance in GPU tech.
  • NetSage Member UncommonPosts: 1,059
    Even if they find a practical way of doing it, we won't see it in the regular consumer market for something like 10 years.
  • Ridelynn Member EpicPosts: 7,057

    It sure looks to me like IBM is just selling Google-style data centers to enterprise clients who may want something other than trusting "The Cloud".

    I don't really see anything that resets anything here. It mentions Nvidia, but that doesn't automatically mean it's associated with gaming. In this case, think OpenCL/CUDA/massively parallel algorithms (which have been available on something like Amazon EC2 for a long time, and this would be a very attractive hardware option for Amazon to expand its capability, or for other companies to offer or tap similar capability).

    Not that I am dismissing IBM - I think they are still very clearly a leader in their niche; I'm just not in any circles that involve the big computing/large enterprise requirements that IBM fulfills.

  • Reckloose Member UncommonPosts: 39

    Nope. Reading the whole press release, it's very literally just a huge cluster-f of useless buzzwords. In the end, it sounds like a typical contract to set up a new datacenter: install all the hardware, set up the virtualization framework, install the servers, DCs and so forth, and then migrate from the old datacenter to the new one. This is really not that special, and is idiotically frequent with governmental entities. (I did a stint as a Dell contractor, and governmental entities do this every few years; then they do no maintenance on the systems, so the systems fall apart, and to "fix" them, they just have an entirely new datacenter built out.)

    And just take the word "data-centric" and shove it up IBM's butt, which is exactly where it came from. (Did I mention I loathe buzzwords?) Since it's the Department of Energy, and not NASA, "data-centric" means databases, and databases really aren't "processed"; rather, they are written and read (and other companies have some really neat tech for this kind of stuff, way beyond IBM at this point). And pretty much every SAN maker has its own tech to alleviate the horror that is database I/O. From what I know, Nimble is at the head of the pack for SAN database I/O tech.


  • Quizzical Member LegendaryPosts: 22,078

    For Tesla cards, Nvidia's latest architecture is still Kepler.  They recently launched Maxwell, but that's still GeForce-only, though that will change.  Then after Maxwell, they'll have Pascal.  And then after that comes Volta.  And IBM announced something or other about those Volta-based Tesla cards that are still three architectures away.

    For comparison, three architectures of Tesla cards before Kepler takes you all the way back to the very first Tesla cards, which were based on the same chip as the GeForce 8800 GTX that launched in late 2006.  And that's assuming that you regard the GeForce 8800 GTX as being of a different architecture from the GeForce GTX 280, which one might reasonably not do.

    So did IBM just do something?  Well, they're planning on doing something.  But it will be a while.  There's plenty of time for delays and cancellations.

  • megarax Member UncommonPosts: 269
    Originally posted by Quizzical

    For Tesla cards, Nvidia's latest architecture is still Kepler.  They recently launched Maxwell, but that's still GeForce-only, though that will change.  Then after Maxwell, they'll have Pascal.  And then after that comes Volta.  And IBM announced something or other about those Volta-based Tesla cards that are still three architectures away.

    For comparison, three architectures of Tesla cards before Kepler takes you all the way back to the very first Tesla cards, which were based on the same chip as the GeForce 8800 GTX that launched in late 2006.  And that's assuming that you regard the GeForce 8800 GTX as being of a different architecture from the GeForce GTX 280, which one might reasonably not do.

    So did IBM just do something?  Well, they're planning on doing something.  But it will be a while.  There's plenty of time for delays and cancellations.

    You are a very knowledgeable person and I trust your advice. But please, a TL;DR on this one? :)

  • Wizardry Member LegendaryPosts: 17,827

    This idea is already being pursued in various fields; apparently 17 "super data" companies are in the works.

    Personally, it all sounds like big nerd talk that can sound really cool on paper and make some people lots of money, but IMO it will do almost nothing except look good in presentations and on paper. We already have research teams all over the world looking into ways to improve everything we do in life; simply adding "super data" terminology in front of the research is, to me, vague and just a fancy money grab.

    As far as the US government awarding the contract, call me skeptical, but all the other "outside" stuff is candy coating; the REAL truth has more to do with their so-called "national security". The US government has been in paranoia mode for many years now, using the term "national security" as more of an excuse for spending money than anything concrete. This also sounds like the so-called security will be nothing more than a super data system to monitor people from all over the world and gather research information on them very quickly. However, you can't justify all this spending on paranoia alone, so they have to add all the "other" benefits these new super data systems will bring.

    Never forget Three Mile Island, and never trust a government official or company spokesman.

  • Quizzical Member LegendaryPosts: 22,078
    Originally posted by megarax
    Originally posted by Quizzical

    For Tesla cards, Nvidia's latest architecture is still Kepler.  They recently launched Maxwell, but that's still GeForce-only, though that will change.  Then after Maxwell, they'll have Pascal.  And then after that comes Volta.  And IBM announced something or other about those Volta-based Tesla cards that are still three architectures away.

    For comparison, three architectures of Tesla cards before Kepler takes you all the way back to the very first Tesla cards, which were based on the same chip as the GeForce 8800 GTX that launched in late 2006.  And that's assuming that you regard the GeForce 8800 GTX as being of a different architecture from the GeForce GTX 280, which one might reasonably not do.

    So did IBM just do something?  Well, they're planning on doing something.  But it will be a while.  There's plenty of time for delays and cancellations.

    You are a very knowledgeable person and I trust your advice. But please, a TL;DR on this one? :)

    That is the TL;DR version.  The wall-of-text version would take three pages.

  • Quizzical Member LegendaryPosts: 22,078
    Originally posted by Phry
    I suspect it's more a case of theoretical rather than actual, and I doubt whether it will be heading our way any time soon, give or take a decade. Sounds interesting, though, and if Nvidia do manage to make it work, it will give them total dominance in GPU tech.

    It's a supercomputer thing, not a consumer graphics thing, so it's not the sort of thing that 99%+ of the general public will ever have any reason to care about.  For connecting a GPU to a CPU in a system with one of each, PCI Express already works well, as it's both high-bandwidth and low-latency.  But if you have a supercomputer with several hundred nodes that need to be connected, a simple PCI Express bus that is only designed to connect a few things can't do the job.
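    To put rough numbers on that bandwidth point, here's a quick back-of-the-envelope sketch in Python. The PCIe 3.0 inputs (8 GT/s per lane, 128b/130b line coding) are from the public spec; the ~80 GB/s NVLink figure is the aggregate number Nvidia quoted at announcement, so treat the comparison as illustrative, not authoritative.

    ```python
    def pcie_effective_gbps(transfer_rate_gt, lanes, enc_payload=128, enc_total=130):
        """Usable one-direction bandwidth of a PCIe link in GB/s.

        transfer_rate_gt: raw line rate per lane in GT/s (8.0 for PCIe 3.0)
        enc_payload/enc_total: line-coding overhead (128b/130b for PCIe 3.0)
        """
        # line rate per lane * lane count * coding efficiency, / 8 bits per byte
        return transfer_rate_gt * lanes * (enc_payload / enc_total) / 8

    pcie3_x16 = pcie_effective_gbps(8.0, 16)
    print(f"PCIe 3.0 x16: {pcie3_x16:.2f} GB/s per direction")  # about 15.75 GB/s

    # Nvidia's announced NVLink figure was on the order of 80 GB/s aggregate,
    # i.e. roughly 5x a PCIe 3.0 x16 slot -- the gap the comment above is about.
    nvlink_announced = 80.0
    print(f"NVLink (announced): ~{nvlink_announced / pcie3_x16:.1f}x PCIe 3.0 x16")
    ```

    Protocol overhead (TLP headers, flow control) shaves these numbers further in practice, which only widens the case for a dedicated interconnect at cluster scale.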

  • syntax42 Member UncommonPosts: 1,378
    Originally posted by Ridelynn

    I don't really see anything that resets anything here. 

    I agree.  The article just looks like they got a contract to build a better data center.  It definitely doesn't sound like any specific technology is being developed.  Whatever IBM is doing may not be anything revolutionary, but it should at least push the cutting edge of data center design methods. 

  • drbaltazar Member UncommonPosts: 7,856
    Oh! I misunderstood, then. I thought it meant that instead of one supercomputer in, say, NY, the whole city's citizens would get free servers for said city, and all of those would be clustered together. Sorry, my bad. I felt it was very PS4-ish or Xbox One-ish (OK, not really, since without DMA and DCA the cloud is useless).