Virtualization at a Fundamental Level

Arthur Cole

Virtualization is taking IT to new horizons, bringing whole new sets of opportunities into view.

Virtualization has been a godsend for the enterprise, particularly as it has struggled to keep costs down in the face of increasing data loads and rising energy prices. Still, there's no question that it has placed a heavy burden on physical infrastructure, which has had to shift from processing and storing data and applications on dedicated, standalone systems to sharing the burden among disparate resources.

Significant changes are in the pipeline, however, as new approaches to data handling gain footholds on the most fundamental levels of IT infrastructure. Silicon, for one, has caught the virtual bug. After years of Moore's Law and the relentless pursuit of greater processing power, development has shifted toward more cooperative architectures that value data transfer and communication more than clock speed and circuitry.

Intel's new Xeon E5-2600 marks a watershed in this regard. The chip features an integrated I/O controller supporting PCIe 3.0, which delivers 8 gigatransfers per second per lane, bolstered by the Direct Data I/O architecture that moves data directly into processor cache rather than making a pit stop in main memory. The result is lower latency and less power consumption.
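To put that transfer rate in perspective, a quick back-of-the-envelope calculation shows the effective bandwidth. The encoding scheme and link width below are standard PCIe 3.0 assumptions, not figures from this post:

```python
# Rough PCIe 3.0 bandwidth estimate.
# Assumptions (not stated above): 128b/130b line encoding, x16 link width.

GT_PER_SEC = 8e9          # 8 gigatransfers/s per lane (PCIe 3.0)
ENCODING = 128 / 130      # 128b/130b encoding efficiency
LANES = 16                # a common x16 slot

bytes_per_lane = GT_PER_SEC * ENCODING / 8   # ~985 MB/s per lane
link_total = bytes_per_lane * LANES          # ~15.8 GB/s for x16

print(f"Per lane: {bytes_per_lane / 1e9:.3f} GB/s")
print(f"x16 link: {link_total / 1e9:.2f} GB/s")
```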

The device also sports Intel's Turbo Boost Technology and Node Manager power management, which work to shift power between cores and other components to more accurately reflect data loads. It can also tap into the Data Center Management stack that provides broad silicon-level visibility and dynamic load balancing across rack and cluster configurations.
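Intel doesn't detail the mechanics here, but the basic idea of shifting a fixed power budget toward the busiest cores can be sketched in a few lines. Everything below, from the function name to the wattage figures, is a hypothetical illustration of the concept, not Intel's actual Node Manager interface:

```python
# Toy illustration of per-core power shifting: split a fixed power budget
# across cores in proportion to their current load. Hypothetical code only;
# this is not how Intel's Turbo Boost or Node Manager are actually driven.

def reallocate_power(core_loads, budget_watts):
    """Return a per-core power allocation proportional to load."""
    total_load = sum(core_loads)
    if total_load == 0:
        # Idle system: spread the budget evenly.
        return [budget_watts / len(core_loads)] * len(core_loads)
    return [budget_watts * load / total_load for load in core_loads]

# Four cores, one heavily loaded: the busy core draws most of a 95 W budget.
print(reallocate_power([0.9, 0.2, 0.1, 0.1], 95))
# -> [65.77, 14.62, 7.31, 7.31] (approximately)
```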

All of this serves to boost the E5's performance some 80 percent over the existing 5600 platform, even while it cuts energy usage in half. Most users will see improved performance even in non-virtual environments, but the true gains will come in highly distributed architectures that rely on high-speed networking to ferry workloads across multiple resources.
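Taken at face value, those two figures compound: 1.8x the throughput at half the power works out to roughly 3.6x the performance per watt. The arithmetic, using only the numbers quoted above:

```python
# Performance per watt implied by the quoted figures,
# normalized to the existing Xeon 5600 platform.

perf_ratio = 1.80    # "some 80 percent" faster
power_ratio = 0.50   # "cuts energy usage in half"

print(f"Performance per watt vs. 5600: {perf_ratio / power_ratio:.1f}x")  # 3.6x
```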

Rival AMD, of course, is also working the virtualization angle, but has taken a somewhat different tack. The company recently purchased server maker SeaMicro for $334 million with the idea of integrating its technology with next-generation Opterons into new fabric-based offerings for OEM customers. Initial reports describe these "building blocks" as card-sized devices outfitted with CPU, DRAM and customized ASICs designed to support highly virtualized environments.

At the same time, board- and card-level Ethernet solutions are ramping up to play major roles in newly virtualized architectures. HP has selected Broadcom's Flexnet LAN-on-Motherboard and adapter line for its ProLiant Gen8 servers, where they will support virtual, cloud and high-I/O applications. The line includes the NetXtreme 1 GbE and 10 GbE controllers capable of delivering line-rate throughput across all available ports, as well as integrated 40 nm single-chip PCIe solutions.
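"Line rate" is a demanding claim at 10 GbE, where the worst case is a flood of minimum-size frames. A quick calculation using standard Ethernet framing overhead (these constants are general networking figures, not from this post) shows what such a controller has to sustain:

```python
# Packets per second at 10 GbE line rate with minimum-size frames.
# Standard Ethernet overhead: 64 B frame + 8 B preamble + 12 B inter-frame gap.

LINK_BPS = 10e9
WIRE_BYTES = 64 + 8 + 12       # 84 bytes on the wire per minimum-size frame

pps = LINK_BPS / (WIRE_BYTES * 8)
print(f"{pps / 1e6:.2f} Mpps")  # ~14.88 million packets per second
```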

These days, it is difficult to find an enterprise that has yet to implement virtualization on some level. That being said, few organizations have pushed their level of virtualization much above 30 percent or so, a nod to the fact that hardware still has its limits.

Pushing past those boundaries requires new thinking on the relationships between data, applications and the resources that support them. That means changes across the board, starting with the most basic components.
