Virtual I/O Goes Native Ethernet

Arthur Cole

It may seem like an odd thing to say after all these years, but enterprises may finally have the means to deploy virtualization in a big way.


While the adoption rate for virtualization is very high -- upwards of 90 percent of enterprises have deployed the technology, according to some estimates -- the percentage of servers that are actually virtual is hovering around the 40 percent mark. That means most enterprises are intrigued by the technology's benefits, but are still hesitant to trust it fully.


No doubt, security and continuity are major contributors here, but there is also the fact that most networks, even those with 10 GbE deployed, still cannot handle the massive increase in data loads that legions of virtual machines produce. And while companies like Xsigo and NextIO provide I/O virtualization technologies to help direct this traffic, those systems also require a fair amount of hardware to convert their InfiniBand- or PCIe-based fabrics to Ethernet.


Until now, that is. Xsigo has just announced an Ethernet version of its I/O Director that cuts deployment costs from about $1,500 per server to $500. The savings come primarily from the fact that the device plugs directly into the Ethernet ports on the back of the server, avoiding the need for converged network adapters (CNAs) and the other devices that have thus far been the only way to converge data streams onto Ethernet.


To be sure, there are still those who are pursuing I/O virtualization on platforms that require separate conversion to Ethernet. Virtensys, for example, offers an improved I/O switch, the VIO-4004, that taps into servers via PCIe and delivers Ethernet service at upwards of 80 Gbps per server. The advantage here, according to Virtensys, is that servers become abstract, stateless computing nodes that can be more easily consolidated and pooled for cloud services and other advanced uses. The system is optimized for Intel's X520 Ethernet adapter and the new vSphere 4.1 platform.


I/O virtualization is also likely to become a crucial component in the "bursting" practices that many enterprises are adopting, according to Storage Switzerland's George Crump. Rather than building out a massive infrastructure capable of handling the most extreme peak loads, you target resources for normal loads and keep spare capacity on hand to handle spikes. With virtual I/O on board, you deploy a gateway to direct traffic across multiple cards, with maybe one or two held in reserve to pick up any overflow. It sure beats having to outfit every server with its own HBA.
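To make the idea concrete, here is a toy sketch in Python of a gateway that spreads server I/O demand across a shared pool of cards and dips into a reserve card only when the active ones run out of headroom. The card names, capacities and class layout are invented for illustration; this is not modeled on any vendor's actual product or API.

# Toy sketch: a virtual I/O gateway that maps server traffic onto a
# shared pool of I/O cards, holding spares in reserve for burst overflow.
# All names and capacities here are hypothetical.

from dataclasses import dataclass

@dataclass
class IOCard:
    name: str
    capacity_gbps: float
    load_gbps: float = 0.0

    def headroom(self) -> float:
        return self.capacity_gbps - self.load_gbps

class VirtualIOGateway:
    def __init__(self, active_cards, reserve_cards):
        self.active = list(active_cards)    # cards sized for normal load
        self.reserve = list(reserve_cards)  # spares held back for spikes

    def place(self, server: str, demand_gbps: float) -> str:
        """Assign a server's I/O demand to the active card with the most
        headroom; promote a reserve card only when every active card
        would be oversubscribed."""
        candidates = sorted(self.active, key=IOCard.headroom, reverse=True)
        for card in candidates:
            if card.headroom() >= demand_gbps:
                card.load_gbps += demand_gbps
                return f"{server} -> {card.name}"
        # Burst: pull in a reserve card instead of giving every server its own HBA.
        if self.reserve:
            card = self.reserve.pop(0)
            self.active.append(card)
            card.load_gbps += demand_gbps
            return f"{server} -> {card.name} (reserve promoted)"
        raise RuntimeError(f"no I/O capacity left for {server}")

# Example: two active 10 GbE cards plus one held in reserve.
gw = VirtualIOGateway(
    active_cards=[IOCard("eth-a", 10.0), IOCard("eth-b", 10.0)],
    reserve_cards=[IOCard("eth-spare", 10.0)],
)
for srv, need in [("vmhost1", 6.0), ("vmhost2", 6.0), ("vmhost3", 6.0)]:
    print(gw.place(srv, need))

Running it, the first two hosts land on the active cards and the third triggers the reserve, which is essentially the bursting pattern Crump describes.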


The good news is that all of the latest virtual I/O solutions are targeting increased data flow and reduced capital costs as primary objectives. Perhaps once the rest of the data center catches up to the advances virtualization has brought to the server farm, implementation levels will rise to the point where we start to see the kind of flexibility and cost benefits that were originally promised.



Comments
Sep 4, 2010 11:35 AM Ken Oestreich says:

Excellent snippet illustrating the advantages of virtual I/O coupled with Ethernet... avoiding the need for expensive CNAs and FCoE hardware.

I should point out that this isn't the first instantiation of this technology -- Egenera has been embedding virtual I/O + Ethernet since 2008. By coupling this technology with a provisioning engine, virtual I/O enables server failover + disaster recovery as well.

Virtual I/O will be the enabler of additional data center functions in the future.

(Full disclosure: I work for Egenera)
