The Nehalem and the Network

Arthur Cole

In normal times, the introduction of a dramatically more powerful processor would send enterprise managers into a tizzy trying to deploy it as rapidly as possible. But these aren't normal times -- and I'm not talking about the economy.


Intel unveiled 17 new quad-core Nehalem chips this week, now officially dubbed the Xeon 3500 and Xeon 5500, delivering a significant boost in I/O by dropping the front-side bus in favor of an integrated memory controller and point-to-point QuickPath links. This should allow enterprises to load even more virtual machines onto Nehalem-equipped servers than they already have.


But therein lies the rub. In the olden days (all of five years ago), more processing power simply meant a better, faster, stronger single machine, whatever hardware it happened to live in (PC, server, appliance, etc.). Nowadays, though, that single piece of hardware can host multiple virtual machines -- and those machines will benefit tremendously from the Nehalem's advanced architecture.


The problem arises when all those VMs start vying for the limited physical resources that connect the server to the rest of the enterprise. A number of companies have made a killing on virtual I/O technology that lets VMs share controllers, adapters and other connectivity systems. Now, it looks like the Nehalem will put even more pressure on those technologies, possibly forcing enterprises to swap them out entirely to get the most out of the new chips.
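
To see why the contention matters, here's a quick back-of-envelope sketch. The VM counts and the single shared 10 GbE link are purely illustrative assumptions on my part, not figures from Intel or any adapter vendor:

```python
# Back-of-envelope: per-VM bandwidth when many VMs share one physical link.
# All numbers below are illustrative assumptions, not vendor benchmarks.

LINK_GBPS = 10                 # one shared 10 GbE adapter (assumed)
VM_COUNTS = [8, 16, 32, 64]    # plausible VM densities on a Nehalem-class host

for vms in VM_COUNTS:
    per_vm_mbps = LINK_GBPS * 1000 / vms
    print(f"{vms:2d} VMs on one {LINK_GBPS} GbE link -> "
          f"~{per_vm_mbps:,.0f} Mbps each, best case, before any I/O overhead")
```

The more headroom the CPU gives you to pile on VMs, the thinner each slice of that shared pipe gets -- which is exactly the pressure point the virtual I/O vendors are targeting.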


"Platforms like the Nehalem call for much different virtual solutions than are mainly deployed today, simply because they are so extensive and provide so much more computing power," Greg Scherer, chief technology officer at Neterion, told me in an interview last week. "They've fixed the CPU problem, now we have to fix the I/O problem to allow users to scale up these new systems."


That's why we're seeing a host of new networking solutions coinciding with the Nehalem launch. Neterion's bid is the third generation of its X3110 adapter, which Scherer says breaks the "glass ceiling" of I/O performance relative to new CPU and hypervisor technology. It's a 10 GbE device that uses techniques like hypervisor offload and Virtual Ethernet Bridge (VEB) technology to move switching overhead off the hypervisor and onto the adapter. It also incorporates the company's IOQoS system, which helps ensure SLA performance, and Virtual Link Technology (VLT), which allows the device to appear as up to 17 independent Ethernet adapters.
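
If you want to picture what "one physical device, many logical adapters" looks like from the operating system's side, here's a minimal sketch using Linux's generic SR-IOV sysfs files as a stand-in. To be clear, this is the standard kernel mechanism, not Neterion's VLT implementation, and it assumes a Linux host with SR-IOV-capable NICs:

```python
# Minimal sketch: list NICs and how many virtual functions each physical
# adapter exposes, via the standard Linux SR-IOV sysfs interface.
# Assumes a Linux host; devices without SR-IOV simply lack these files.
from pathlib import Path

sys_net = Path("/sys/class/net")
if not sys_net.is_dir():
    raise SystemExit("This sketch assumes a Linux host with sysfs mounted")

for iface in sorted(sys_net.iterdir()):
    total = iface / "device" / "sriov_totalvfs"
    enabled = iface / "device" / "sriov_numvfs"
    if total.is_file():
        print(f"{iface.name}: {enabled.read_text().strip()} of "
              f"{total.read_text().strip()} virtual functions enabled")
    else:
        print(f"{iface.name}: no SR-IOV capability exposed")
```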


Emulex was also quick to jump on the Nehalem bandwagon, announcing a new ASIC design for its 8 Gbps LightPulse and OneConnect platforms optimized for the 5500 series chips. The new design takes advantage of the processor's advanced features, like PCIe 2.0 and Intel's Virtualization Technology for Directed I/O (VT-d), along with VPort technology that connects VMs to dedicated storage.


Mellanox is also on board, having already upgraded to 40 Gbps connectivity on its ConnectX and MTS lines of InfiniBand adapters and switches, which should provide ample headroom for large numbers of virtual machines. The company says a 40 Gbps fabric offers a 2x performance gain for the 5500 series chips over earlier 20 Gbps systems while maintaining the same power envelope.


To be sure, the Nehalems will deliver the power boost they advertise -- but they don't work in a vacuum. If you're planning to invest in new processors to boost your virtualization capabilities, you'll need to consider investing in your network as well.


