Continuity in Virtual Environments

Arthur Cole

To say that virtualization is now a common facet of data center infrastructure may be factually correct, but it doesn't give an entirely accurate picture of how deeply the technology has actually been accepted.


The fact is, most virtual environments are still relegated to largely non-critical workloads. And that points to one of the main drawbacks of virtual environments: while application availability is much improved, failover issues are holding the technology back from assuming a truly commanding role in the enterprise.


As Stratus Technologies' Phil Riccio points out, virtualization has two main problems when it comes to failover. First, it can take up to 10 minutes to provision a new VM on most platforms, an unbearably long time for critical apps like e-mail and messaging, and more than likely, any data not yet saved to disk will be lost. Second, a hardware failure in a virtualized environment is a much more serious matter because it can take down multiple VMs at once, affecting a much wider range of users and applications. That's part of the reason he recommends pairing virtualization with fault-tolerant hardware for anyone looking to run critical apps on VMs.
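
To put that first problem in rough perspective, here is a minimal back-of-the-envelope sketch in Python that adds up the pieces of a restart-style recovery. The detection and application-start figures are illustrative assumptions; only the roughly 10-minute provisioning number comes from the discussion above.

# Hedged sketch: components of a restart-style failover. The figures are
# illustrative assumptions, not measurements from any particular platform.
FAILURE_DETECTION_S = 30      # heartbeat interval plus confirmation (assumed)
VM_PROVISION_S = 10 * 60      # up to ~10 minutes to provision a new VM
APP_START_S = 60              # service start, log replay, client reconnects (assumed)

def restart_style_recovery_time() -> int:
    """Total outage for a 'detect, re-provision, restart' style of failover."""
    return FAILURE_DETECTION_S + VM_PROVISION_S + APP_START_S

if __name__ == "__main__":
    rto = restart_style_recovery_time()
    print(f"Estimated outage: {rto // 60} min {rto % 60} sec per failure")
    # Any data not flushed to disk before the failure is lost on top of this.

The provisioning step alone dwarfs everything else, which is why a 10-minute rebuild is a non-starter for e-mail, messaging and similar workloads.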


Improving virtual failover capabilities has become something of a trend among the top platform providers as we head into the new year. Citrix recently upgraded its Essentials management stack with a new high-availability system called StorageLink, and plans to integrate it into the System Center stack for use with the Hyper-V platform within a few months. The system acts as a site-recovery tool for virtual environments, providing VM failover and recovery using Hyper-V's live-migration capabilities.


Meanwhile, Microsoft is shoring up its own failover capabilities as it seeks to match VMware's Site Recovery Manager (SRM) feature for feature. The company recently acquired automation firm Opalis, gaining access to a number of lifecycle-management and system-automation technologies that it lacked. Part of the package is a system called Run Books, which, according to Artemis Technology's Elias Khnaser, would make an effective failover/failback tool for critical workloads.
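
In general terms, a run book is simply an ordered, automated sequence of recovery steps, with a matching failback sequence once the primary site is healthy again. The Python sketch below illustrates that idea generically; the step names are hypothetical, and this is not Opalis' or System Center's actual interface.

# Hedged, generic illustration of runbook-style automation.
from typing import Callable, List

Step = Callable[[], None]

def run_book(name: str, steps: List[Step]) -> None:
    """Execute recovery steps in order, stopping at the first exception."""
    print(f"--- executing runbook: {name} ---")
    for step in steps:
        print(f"  step: {step.__name__}")
        step()  # a real tool would add retries, logging and an audit trail

# Hypothetical recovery steps, for illustration only.
def quiesce_primary() -> None: pass
def replicate_final_delta() -> None: pass
def start_vms_at_recovery_site() -> None: pass
def redirect_clients() -> None: pass

if __name__ == "__main__":
    run_book("failover to recovery site",
             [quiesce_primary, replicate_final_delta,
              start_vms_at_recovery_site, redirect_clients])

The value of packaging recovery this way is repeatability: the same tested sequence runs every time, rather than depending on whoever happens to be on call.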


For its part, VMware claims that most customers are reporting improved availability and continuity capabilities under its new virtual platforms. The company polled more than 300 of its small and medium-sized customers and found that 71 percent reported improved application availability, while 67 percent pegged continuity and preparedness as a key benefit.


Virtualization is bound to make its way into critical applications and workloads before too long, so it's important to realize that overall systems architecture will be a key factor in maintaining continuity and availability. Properly designed systems will spread important data across as many resources as possible to eliminate single points of failure that could cause widespread damage should they go down.
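
One simple way to picture that principle is replication: write each important record to several independent hosts so that no single host, or the VMs running on it, can take the data down with it. The Python sketch below is a toy illustration under that assumption; the host names and replication factor are invented for the example, not drawn from any particular product.

# Hedged sketch: spreading data across hosts to avoid a single point of failure.
from typing import Dict, List, Set

HOSTS = ["host-a", "host-b", "host-c", "host-d"]   # assumed pool of hosts
REPLICATION_FACTOR = 3                              # copies kept of each record

def placement(key: str) -> List[str]:
    """Choose REPLICATION_FACTOR distinct hosts for a key (toy hash ring)."""
    start = hash(key) % len(HOSTS)
    return [HOSTS[(start + i) % len(HOSTS)] for i in range(REPLICATION_FACTOR)]

def write(store: Dict[str, Dict[str, str]], key: str, value: str) -> None:
    for host in placement(key):
        store.setdefault(host, {})[key] = value

def read(store: Dict[str, Dict[str, str]], key: str, down: Set[str]) -> str:
    for host in placement(key):
        if host not in down and key in store.get(host, {}):
            return store[host][key]
    raise RuntimeError("all replicas unavailable")

if __name__ == "__main__":
    store: Dict[str, Dict[str, str]] = {}
    write(store, "mailbox/alice", "message data")
    # Losing one host takes its VMs with it, but the record is still readable.
    print(read(store, "mailbox/alice", down={"host-a"}))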


The benefit of virtualization is that it is flexible enough to support configurations that both prevent systems from failing in the first place and work around failures when they do occur.


