The Mix-and-Match Approach to Virtualization

Arthur Cole

Now that Microsoft has decided to issue its Hyper-V hypervisor for free, leading to a virtual stampede of third-party support, it seems more likely than ever that enterprises will soon be dealing with multi-vendor, or heterogeneous, virtual environments.


This is likely to be both a help and a hindrance: users will be better able to construct the appropriate environment for a particular task, even as the job of managing this newfound complexity becomes increasingly challenging.


Still, most observers view the rise of heterogeneous virtual environments as inevitable, primarily because more and more hardware and software vendors are starting to plan for it. In fact, the way things are going now, very soon you're likely to be hard-pressed to find a piece of enterprise equipment or an application that doesn't support ESX, Hyper-V, Xen and/or Virtual Iron.


HP is clearly on board, having broadened its hardware and software offerings to work comfortably across a range of platforms. The company is expanding its support for Hyper-V on the ProLiant and BladeSystem servers, as well as the StorageWorks and Thin Client lines, not to mention its Business Technology Optimization portfolio, which includes its management and automation software.


Simple management, though, isn't likely to be enough. How, for example, will anyone be able to tell which environment is the right one for a particular application? And how will you know if, say, VMware represents the best use of system resources? Those are the kinds of questions CiRBA is looking to answer with a new set of packaged analysis templates for its Placement Intelligence system. The templates will let you compare Hyper-V vs. VMware virtualization on specs like resource limits, memory commitment and workload mobility to see which one is better suited for a given environment.
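To make the idea concrete, here is a minimal sketch of the kind of weighted comparison a placement-analysis template might encode. The hypervisor names are real, but the criteria weights and scores below are illustrative assumptions, not published CiRBA data or its actual methodology.

```python
# Hypothetical placement comparison: score each hypervisor on a few
# criteria and pick the best fit. All numbers are placeholders.

CRITERIA_WEIGHTS = {
    "resource_limits": 0.4,    # e.g. max vCPUs/RAM per guest
    "memory_overcommit": 0.3,  # ability to overcommit physical memory
    "workload_mobility": 0.3,  # live-migration support
}

# Scores on a 0-10 scale -- invented for illustration only.
HYPERVISOR_SCORES = {
    "VMware ESX": {"resource_limits": 8, "memory_overcommit": 9, "workload_mobility": 9},
    "Hyper-V":    {"resource_limits": 7, "memory_overcommit": 5, "workload_mobility": 6},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores into one weighted placement score."""
    return sum(scores[c] * w for c, w in CRITERIA_WEIGHTS.items())

def best_fit(candidates: dict) -> str:
    """Return the hypervisor with the highest weighted score."""
    return max(candidates, key=lambda h: weighted_score(candidates[h]))

if __name__ == "__main__":
    for name, scores in HYPERVISOR_SCORES.items():
        print(f"{name}: {weighted_score(scores):.1f}")
    print("Best fit:", best_fit(HYPERVISOR_SCORES))
```

The point of a packaged template is simply that the weights and criteria come pre-built for a given workload profile, rather than being hand-tuned per deployment.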


Heterogeneous environments don't end at the server, either. There's a strong argument to be made for mixing things up in the storage array. As RELDATA CEO David Hubbard points out, heterogeneous arrays allow you to tier multiple levels of virtual storage on a single system. Plus, you can get away with things like moving volumes between physical resources without disrupting operations.
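The non-disruptive-migration idea rests on a simple abstraction: hosts address a stable logical volume, while the virtualization layer is free to change which physical array backs it. A minimal sketch of that abstraction, with class and method names that are illustrative rather than any vendor's actual API:

```python
# Sketch of storage-virtualization tiering: a logical volume keeps a
# stable identity while its backing physical array and tier can change.
# Names here are hypothetical, not a real storage API.

class VirtualVolume:
    def __init__(self, name: str, tier: str, backing_array: str):
        self.name = name                  # stable logical identity seen by hosts
        self.tier = tier                  # e.g. "gold" (FC) or "silver" (SATA)
        self.backing_array = backing_array

    def migrate(self, new_array: str, new_tier: str) -> None:
        """Rebind the volume to another array/tier. Because hosts address
        the logical name, not the physical array, I/O is not disrupted."""
        self.backing_array = new_array
        self.tier = new_tier

vol = VirtualVolume("db-logs", tier="gold", backing_array="array-fc-01")
vol.migrate("array-sata-02", new_tier="silver")  # demote cooling data
print(vol.name, vol.tier, vol.backing_array)
```

In a real system the migrate step would also copy blocks and flip paths atomically; the sketch only shows why the logical/physical split makes the move invisible to applications.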


Like it or not, IT will become a lot more flexible over the next decade as the old silo architecture falls away. Keeping this new environment under control will require not just new tools and new technologies, but a new way of thinking as well.

Sep 12, 2008 1:29 AM Kennyo says:
Amen! The "VM sprawl" solution requires having to manage heterogeneous VMs. It's just a matter of time; glad to see the industry is catching up with that reality. *But* don't forget one other component: the "forgotten 40%" -- essentially the ~40% of infrastructure that *won't* ever be virtualized (for whatever reason). The new breed of management tools have to be capable of managing physical -and- virtual under the same pane of glass.
