Network Enhancements for Better Virtualization

Arthur Cole

Virtualization is intended to improve data center performance by forming an abstract, malleable layer between software environments and the underlying hardware. The technology has caught on because it works so well.

Too well, in fact. In an odd twist, many enterprises are finding that the more they virtualize, the more they hamper application performance, as the added traffic floods network and storage infrastructure.

Addressing this problem has taken many forms. The most direct is to simply add more of whatever you're lacking. Yet this not only gets expensive, but it also adds to infrastructure complexity at a time when simplicity and consolidation are the order of the day.

Instead, many organizations are turning to network caching approaches designed to improve data flow over existing resources. CacheIQ's RapidCache, for instance, improves NAS performance with a DRAM- and SSD-based cache that pulls some of the load off storage subsystems. A key element of the system is an intelligent data analysis module that identifies the data sets that are appropriate for caching. It can also be deployed without reconfiguring the existing NAS infrastructure.
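CacheIQ hasn't published the details of that analysis module, but the general shape of a tiered, admission-controlled read cache can be sketched. The following is a minimal illustration, assuming a simple "promote after repeated hits" heuristic; the class and method names are hypothetical, not CacheIQ's interface:

```python
from collections import OrderedDict, defaultdict

class TieredReadCache:
    """Sketch of a DRAM/SSD read cache in front of a NAS filer."""

    def __init__(self, dram_capacity, ssd_capacity, admit_threshold=3):
        self.dram = OrderedDict()              # hot tier: small, fastest
        self.ssd = OrderedDict()               # warm tier: larger, still fast
        self.dram_capacity = dram_capacity
        self.ssd_capacity = ssd_capacity
        self.admit_threshold = admit_threshold
        self.access_counts = defaultdict(int)  # the "data analysis" piece

    def read(self, block_id, fetch_from_nas):
        self.access_counts[block_id] += 1
        if block_id in self.dram:              # fastest path
            self.dram.move_to_end(block_id)
            return self.dram[block_id]
        if block_id in self.ssd:               # promote warm data to DRAM
            data = self.ssd.pop(block_id)
            self._put(self.dram, block_id, data, self.dram_capacity)
            return data
        data = fetch_from_nas(block_id)        # cache miss: hit the filer
        # Only admit blocks that have proven themselves; one-off reads
        # (backups, full scans) would otherwise pollute the cache.
        if self.access_counts[block_id] >= self.admit_threshold:
            self._put(self.ssd, block_id, data, self.ssd_capacity)
        return data

    def _put(self, tier, block_id, data, capacity):
        tier[block_id] = data
        tier.move_to_end(block_id)
        if len(tier) > capacity:
            tier.popitem(last=False)           # evict least recently used
```

Because a cache like this sits in the data path as a transparent proxy, the filer behind it needs no changes, which is consistent with the no-reconfiguration deployment the product claims.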

Astute Networks takes a similar tack with its ViSX G3 appliance. It, too, provides a flash memory module, but one tied to an offload processor, the DataPump Engine, that accelerates TCP/IP and virtualized iSCSI traffic. The company contends it can deliver higher IOPS by locating the unit in the network, where it can serve multiple servers at once.

Yet another caching technique comes from Fusion-io, which has repackaged technology recently acquired from IO Turbine into the new ioCache system. The idea here is to provide a scalable cache module earmarked for only the most data-heavy applications. The system uses the company's existing Virtual Storage Layer (VSL) subsystem as an on-server memory boost that can be tapped without negotiating storage or networking protocols. This has the benefit of consolidating mapping on the server and can even increase the number of VMs each server supports.
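The performance case for a host-side cache like this is that reads never leave the server, while writes still land on shared storage so VMs remain safe to migrate or restart elsewhere. Here is a minimal write-through sketch of that read/write split, with illustrative names rather than Fusion-io's actual VSL API:

```python
class HostFlashCache:
    """Sketch of a host-side, write-through flash cache for VM storage."""

    def __init__(self, flash_store, shared_storage):
        self.flash = flash_store        # local PCIe flash, dict-like
        self.storage = shared_storage   # SAN/NAS backend, dict-like

    def read(self, block_id):
        if block_id in self.flash:      # served at local flash latency,
            return self.flash[block_id] # no storage network round trip
        data = self.storage[block_id]   # miss: fetch over the network once
        self.flash[block_id] = data     # warm the cache for the next read
        return data

    def write(self, block_id, data):
        self.storage[block_id] = data   # write-through: the backend stays
        self.flash[block_id] = data     # authoritative; flash is a copy
```

The write-through choice is what keeps the shared array the single source of truth: if the host dies, nothing is lost, and a VM restarted on another server simply starts with a cold cache.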

Then there is the virtual networking approach championed by companies like Xsigo. The firm's I/O Director runs a 40 Gbps link to each physical server and overlays a virtual fabric that lets admins reconfigure network connections as data loads rise and fall. This offers the twin benefits of keeping bottlenecks to a minimum and reducing the cabling and network infrastructure typically required for virtual environments.
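Xsigo's actual policy engine isn't public, but the basic idea of reshaping a shared link as loads shift can be illustrated with a demand-proportional allocator. This is a sketch under assumed names and numbers (a per-server 40 Gbps uplink, a 1 Gbps floor per virtual NIC), not the I/O Director's implementation:

```python
LINK_CAPACITY_GBPS = 40.0   # one physical uplink per server
MIN_GUARANTEE_GBPS = 1.0    # floor so lightly loaded vNICs stay responsive

def reallocate(demands_gbps: dict[str, float]) -> dict[str, float]:
    """Split the physical link among vNICs in proportion to demand."""
    # Guarantee each vNIC a small floor (or its full demand, if smaller).
    floors = {v: min(d, MIN_GUARANTEE_GBPS) for v, d in demands_gbps.items()}
    remaining = LINK_CAPACITY_GBPS - sum(floors.values())
    # Hand out the rest in proportion to unmet demand.
    extra = {v: max(d - floors[v], 0.0) for v, d in demands_gbps.items()}
    total_extra = sum(extra.values())
    return {
        v: floors[v] + (remaining * extra[v] / total_extra if total_extra else 0.0)
        for v in demands_gbps
    }

# Example: a backup vNIC spikes while the others idle; it receives most
# of the headroom without starving the guaranteed floors.
print(reallocate({"vm-web": 2.0, "vm-db": 6.0, "vm-backup": 35.0}))
```

Rerunning the allocator as demands change is the software analogue of the rewiring an admin would otherwise do by hand, which is where the cabling savings come from.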

By its nature, virtualization provides a highly dynamic, fluid data environment, so there is no single answer to the problem of network/storage bottlenecks. One thing is certain, however: Without a solid plan to handle increased traffic loads, the benefits of virtualization will quickly max out.
