
    New Ways to Boost VM Density

    It may no longer be the focal point of enterprise infrastructure development, but virtualization remains the platform on which many of today's more fashionable trends are built. If you intend to foster cloud computing, mobile technology or systems/resource convergence, you'd better make sure your virtual architecture is up to the task first.

    Virtualization's role in the top data and infrastructure initiatives of the day is evidenced by the fact that more and more of the physical plant is becoming virtualized, even though fears of overloading network and storage resources are still prevalent. According to the Aberdeen Group, more than half of all servers are currently virtualized, and current deployment plans are on track to push that figure past 70 percent once complete. This is having an effect on server markets: demand is increasing for hardware that can handle highly dense virtual architectures.

    Indeed, this is one of the primary reasons why so many enterprises are turning to server-side Flash technologies. Large numbers of VMs tend to overwhelm server and network I/O capabilities, something that Flash-based cache architectures are designed to address. Proximal Data’s AutoCache software, when coupled with solid-state drives like Micron’s P400e and P230H, eliminates I/O bottlenecks in SATA and PCIe environments, allowing servers to triple the number of VMs they can hold. At the same time, it requires no agents in the guest OS that tend to slow down operational performance by tying up system resources.
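    The core idea behind host-side flash caching is simple: keep hot blocks on a fast local tier so most reads never touch the slower SATA or network path. The sketch below is a toy model of that idea only; the class name, block layout and LRU policy are illustrative assumptions, not Proximal Data's actual AutoCache implementation, which operates transparently at the hypervisor I/O layer.

    ```python
    from collections import OrderedDict

    class FlashReadCache:
        """Toy host-side read cache: serve hot blocks from a fast tier
        and fall back to slow backing storage on a miss. Illustrative
        only; real products work inside the hypervisor I/O path."""

        def __init__(self, capacity, backing_store):
            self.capacity = capacity      # blocks the flash tier can hold
            self.backing = backing_store  # dict-like: block id -> data (slow tier)
            self.cache = OrderedDict()    # LRU order: oldest entry first
            self.hits = self.misses = 0

        def read(self, block_id):
            if block_id in self.cache:
                self.cache.move_to_end(block_id)  # mark most recently used
                self.hits += 1
                return self.cache[block_id]
            self.misses += 1
            data = self.backing[block_id]         # slow path: backing storage
            self.cache[block_id] = data           # populate the flash tier
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict least recently used
            return data

    # Usage: repeated reads of a hot block hit the cache after the first miss.
    store = {n: f"block-{n}" for n in range(10)}
    cache = FlashReadCache(capacity=4, backing_store=store)
    for _ in range(3):
        cache.read(7)
    print(cache.hits, cache.misses)  # prints "2 1": two hits, one initial miss
    ```

    The point of the model is the ratio: once the working set of a VM fits in the flash tier, nearly all of its read I/O is absorbed locally, which is why a cache layer can multiply the number of VMs a server sustains without saturating its storage path.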

    Even off-server Flash can help with the virtual I/O load. Astute Networks recently released a new generation of its ViSX G4 Flash VM storage appliance, designed to provide up to 140,000 sustained random IOPS. The device features a proprietary ASIC called the DataPump engine used to offload and accelerate iSCSI and TCP processing, improving VM density by a factor of 10 and boosting performance to the level needed for top-tier, business-critical applications. It also has the ability to share performance acceleration across multiple servers, VMs and applications without disrupting operations.

    Sometimes, however, when you need to catch more mice, it's best to build a better mousetrap. IBM says it has taken this approach with the PowerVM hypervisor, designed for the Power Systems platform. The company says it can best VMware, Microsoft, Xen and Red Hat when it comes to deployment, scale, management, availability and a number of other parameters, more than doubling VM density in medium and large deployments. According to IBM's own research, PowerVM consumes roughly a third less CPU and memory than a comparable ESXi configuration, and it maintains or widens that advantage as the virtual environment grows.

    Clearly, there is great interest in pushing virtualization into new territory, fear of virtual sprawl be damned. With enterprises under increasing pressure to deliver new levels of cloud computing, collaboration and mobile dexterity, all without breaking either capex or opex budgets, virtualization has become more crucial to the overall data environment than ever.

    It may not be the leading technology in the data center anymore, but it still generates enough heat to power the development of all the advanced architectures coming our way.

    Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
