Choosing the Right Network for Virtualization

Arthur Cole

While the decision to virtualize the server farm is an easy one (more processing from the same hardware equals higher efficiency), some of the secondary considerations are not so clear-cut.


And the first major decision that the newly virtualized will face is what to do about storage networking. All those virtual machines vying to access data over the same old infrastructure are bound to start tripping over each other, leading to bottlenecks that hamper the very efficiency that virtualization was supposed to provide.


The stock answer to this problem is I/O virtualization -- software that divvies up network paths to handle multiple I/O requests. But before you even get to that point, you have to decide what kind of storage networking technology you want to build it on. And this is where you have to sort out a lot of conflicting claims and counter-claims.
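
To make the idea concrete, here is a toy Python sketch of the scheduling an I/O virtualizer performs. The class and path names are invented for illustration and don't reflect any vendor's implementation; real I/O virtualization lives in the hypervisor and adapter firmware, not application code. The point is simply that requests from several VMs get spread across a pool of paths instead of piling up on one:

    from collections import deque
    from itertools import cycle

    # Toy model only: illustrates multiplexing many VMs' I/O requests
    # over a pool of physical paths, nothing more.

    class IOVirtualizer:
        def __init__(self, paths):
            self.queues = {path: deque() for path in paths}
            self._next_path = cycle(paths)

        def submit(self, vm, request):
            # Round-robin dispatch: each request takes the next path in
            # turn, so no single link becomes the choke point.
            path = next(self._next_path)
            self.queues[path].append((vm, request))
            return path

    virt = IOVirtualizer(paths=["path0", "path1", "path2"])
    for i in range(6):
        vm = f"vm{i % 3}"
        print(vm, "->", virt.submit(vm, f"read block {i}"))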


The iSCSI crowd says its technology provides the optimal solution for virtual server environments because most server motherboards have embedded Gigabit Ethernet ports, so you don't have to deal with adapter cards in a limited number of expansion slots. And as David Dale, chairman of the SNIA's IP Storage Forum, points out in this article, even entry-level iSCSI arrays offer advanced data management capabilities, like point-in-time and remote copy, LUN cloning and asynchronous mirroring.


The one thing you won't hear from iSCSI backers is that placing storage traffic on top of everything else on the Ethernet (VoIP, data, video conferencing) will likely tax that infrastructure. 10 GbE is supposed to be able to handle it, but how much will it cost?
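
Some back-of-the-envelope arithmetic shows the scale of the problem. The demand figures below are assumptions for illustration, not measurements, but they show how quickly a shared Gigabit link fills up once storage traffic joins everything else:

    # Rough link budgeting. The per-VM and background figures are
    # assumed for illustration; plug in your own numbers.

    GBE_MBPS = 1_000       # Gigabit Ethernet capacity, in Mbps
    TEN_GBE_MBPS = 10_000  # 10 GbE capacity

    vms_per_host = 8
    storage_per_vm_mbps = 150  # assumed storage I/O per VM
    other_traffic_mbps = 300   # assumed VoIP, data and video on the wire

    demand = vms_per_host * storage_per_vm_mbps + other_traffic_mbps
    for name, capacity in (("1 GbE", GBE_MBPS), ("10 GbE", TEN_GBE_MBPS)):
        print(f"{name}: {demand} Mbps demand = "
              f"{demand / capacity:.0%} of capacity")

At these made-up numbers the Gigabit pipe is already oversubscribed; 10 GbE absorbs the load easily, but at 10 GbE prices.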


Fibre Channel is a dedicated storage protocol, but again, there is that matter of too few expansion slots to hold the needed adapter cards as networks scale up. It also doesn't help matters that some suppliers are apparently playing fast and loose with the performance numbers. This report from QLogic touting near-native FC I/O performance on Hyper-V was quickly debunked by The Burton Group's Chris Wolf, who pointed out that the feat was achieved using solid-state storage and tiny 512k blocks of data.
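
Block size is exactly why benchmark claims deserve scrutiny: the same link looks completely different depending on the block size behind the numbers. A quick converter makes the point (these are assumed figures, not QLogic's actual test parameters):

    # Block size changes what a benchmark number means. Assumed figures,
    # not QLogic's actual test parameters.

    FC_4G_MBPS = 4_000  # nominal 4 Gb Fibre Channel link

    def iops_to_fill_link(block_bytes, link_mbps=FC_4G_MBPS):
        # Operations per second needed to saturate the link
        # at this block size.
        return link_mbps * 1_000_000 / (block_bytes * 8)

    for block in (512, 4_096, 65_536, 524_288):
        print(f"{block:>7,} B blocks: {iops_to_fill_link(block):>10,.0f} "
              "IOPS to saturate a 4 Gb FC link")

With big blocks, a few hundred operations per second fill the pipe, so per-I/O hypervisor overhead barely registers; with small blocks, the same hardware needs nearly a million, and the overhead dominates. A "near-native" claim means little until you know which end of that range was tested.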


A third option is to run everything over InfiniBand. Its primary backer, Mellanox, has been busy pushing the technology from its traditional perch in high-performance installations down to more mid-level environments. The company recently tied its 20 Gb adapters to Galactic Computing's VStor storage and gateway line, where it provides an umbrella for iSCSI, Fibre Channel, SAS or virtually any other networking format. The company says its pricing will be competitive with 10 GbE, but it would still be yet another networking layer to contend with.


Still another option is the Network File System (NFS). A new class of NFS appliances, described here by NewsFactor's Gary Orenstein, uses techniques like scalable caching to serve the image files behind virtual machines. A key advantage of this approach is that it avoids the constant creation and provisioning of the logical unit numbers (LUNs) that both iSCSI and Fibre Channel use to keep track of data. It can also expand and shrink volumes on the fly, and it provides advanced disk management tools, like thin provisioning, by default.
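
Thin provisioning is a natural fit for file-based VM storage because a disk image is just a file. A minimal sketch, assuming a POSIX filesystem with sparse-file support (the filename and size here are made up):

    import os

    # Minimal thin-provisioning sketch for a file-based datastore.
    # Assumes a POSIX filesystem with sparse-file support.

    IMAGE = "vm-disk.img"
    LOGICAL_SIZE = 20 * 1024**3  # 20 GiB, the size the guest will see

    with open(IMAGE, "wb") as f:
        f.truncate(LOGICAL_SIZE)  # sets the size; allocates no blocks

    st = os.stat(IMAGE)
    print(f"logical size : {st.st_size / 1024**3:.0f} GiB")
    print(f"on-disk usage: {st.st_blocks * 512 / 1024**2:.1f} MiB")

The guest sees a 20 GiB disk, but the file consumes real blocks only as data is written, and growing it later is another truncate call rather than another LUN to carve.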


The easiest thing now would be for me to tell you which option is best and leave it at that. But no can do. There are too many variables at play: budgets, service requirements and existing infrastructure all play into the decision. The only thing I can say for certain is that a sound storage networking strategy is a must if you plan on getting the most out of your virtual servers. The rest is up to you.
