Nvidia and the Super Data Center

Arthur Cole

Now that Nvidia has upped the ante with a next-gen GPU architecture that is as much at home crunching straight data as it is rendering graphics, the question for the enterprise industry is whether it will make a significant difference in data center performance.

In short, the answer is yes, but only if it can draw significant support from the development community.

Nvidia's new Fermi architecture does pack an impressive punch: three billion transistors working in parallel, 512 CUDA cores with floating-point capability, a concurrent kernel execution mechanism and support for C++, C, Fortran, Java and other popular programming languages. Nvidia has also thrown in error-correcting code (ECC) memory, a new parallel cache hierarchy for improved ray-tracing and a full 1 TB of memory address space. Final specs haven't been released yet, but so far it looks like any software firm would be foolish not to investigate it as a development platform.
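For readers wondering what "straight data" work on a GPU actually looks like, here is a minimal sketch of a CUDA C program, the kind of data-parallel code those hundreds of cores are built to chew through. It is not an Nvidia sample and nothing in it is Fermi-specific; the kernel name, sizes and values are purely illustrative, and host-side data initialization is omitted for brevity.

// saxpy.cu -- illustrative data-parallel sketch; build with: nvcc saxpy.cu -o saxpy
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread updates one element: y[i] = a * x[i] + y[i]
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;              // one million elements (arbitrary)
    const size_t bytes = n * sizeof(float);

    float *x = nullptr, *y = nullptr;
    cudaMalloc(&x, bytes);
    cudaMalloc(&y, bytes);
    // (copying real input data to the device with cudaMemcpy is omitted here)

    // Launch enough 256-thread blocks to cover all n elements
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    printf("launched %d blocks of %d threads\n", blocks, threads);
    return 0;
}

The point of the sketch is the launch line: the same scalar loop a CPU would run one element at a time is handed to thousands of lightweight GPU threads at once, which is exactly the parallelism data mining and number-crunching workloads are after.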

One major backer looks to be Microsoft, which has announced an agreement to support Nvidia's Tesla GPUs, due to adopt the Fermi architecture early next year, in its Windows HPC Server 2008 OS. Redmond execs say they are eager to delve into both the parallel and multicore capabilities of the Tesla line for applications ranging from data mining and business intelligence to high-end modeling and financial number crunching. Fermi is also heading for Microsoft Visual Studio 2008 via Nvidia's Nexus development environment.

Another key question is how well Fermi stacks up against GPU offerings from Intel and AMD. Pretty well, according to some of the leading microprocessor analysts. In-Stat's Tom Halfhill told PC Magazine that the toughest competition will come from Intel's Larrabee chips due out next year. Intel plans on building its GPUs on x86 cores, however, rather than specialized 3D architectures, so it's tough to do a head-to-head comparison until both chips are in the channel.

AMD, meanwhile, has gotten mostly good reviews for its latest ATI Radeon line-up, although most analysts see it as a pure graphics play rather than a general-purpose solution. Still, 2.72 trillion calculations per second is nothing to sneeze at for an enterprising developer looking to tap into some unbridled horsepower.

Nvidia still hopes to keep a hand in the graphics market as well, but the real prize here is the enterprise. The goal is nothing less than bringing supercomputing capabilities to the run-of-the-mill data center.

The GPU can certainly provide the foundation. Now it needs some cooperation between developers and customers to make it happen.
