The beauty of the virtual machine is that it can go anywhere and do almost anything. Need to run a business application? Launch a VM. Need additional support for analytics? Put it on a VM.
And with the latest developments in software-defined storage and networking, this same ethos will soon apply to infrastructure. Need a new data center in Europe? A VM in the cloud can provide it. Backup and replication lacking somewhere? There are VMs already waiting for you.
In fact, the rise of virtual infrastructure is leading to a groundswell of new services and management regimes aimed at providing near-instant data center functions just about anywhere they are needed. At the same time, the cost of building and maintaining state-of-the-art systems has fallen dramatically from the bricks-and-mortar days.
Take a company called Pneuron Corp. as an example. The firm has developed a data extraction and transfer system built on Java-based VMs that it calls Cortex servers. These virtual servers use data query, web service and other applications to enable data access and management on a global scale – literally, wherever a VM can be deployed. The platform runs on everything from Windows and Linux to vSphere and Oracle VirtualBox, and supports database platforms such as MySQL, Postgres, Oracle 10g and 11g, and Apache Derby. The company is targeting organizations that want to avoid the high cost of building out infrastructure across the patchwork of legal and policy regimes governing international data.
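To give a sense of the kind of vendor-agnostic data access such a platform implies, here is a minimal Java sketch using plain JDBC. The connection URLs, credentials and the "orders" table are illustrative placeholders rather than Pneuron's actual API; the point is simply that one code path can query MySQL, Postgres or Apache Derby interchangeably, wherever the backing VM happens to live.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Minimal sketch of a vendor-agnostic query layer: the same code path works
// against MySQL, Postgres or Apache Derby because JDBC hides driver-specific
// details behind a connection URL. URLs, credentials and the "orders" table
// are placeholders for illustration only.
public class PortableQuery {

    public static void main(String[] args) {
        // Swap the URL to point the same logic at a different backend.
        String[] urls = {
            "jdbc:mysql://eu-node.example.com:3306/sales",
            "jdbc:postgresql://us-node.example.com:5432/sales",
            "jdbc:derby:memory:sales;create=true"   // embedded, in-memory
        };

        for (String url : urls) {
            try (Connection conn = DriverManager.getConnection(url, "reporting", "secret");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders")) {
                rs.next();
                System.out.println(url + " -> " + rs.getLong(1) + " orders");
            } catch (SQLException e) {
                // A node that cannot be reached or queried is skipped rather
                // than being allowed to abort the whole job.
                System.err.println("Skipping " + url + ": " + e.getMessage());
            }
        }
    }
}
```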
But even firms simply looking to extend traditional data environments into the cloud are starting to see the value of broad hypervisor support across disparate infrastructure. In fact, this is what the entire hybrid cloud concept is about, with providers like Skytap deploying vast numbers of VMs just to keep up with demand. Through management tools like its Intelligent Automation Platform, the company aims to provide a broadly scalable, elastic resource pool that remains subject to the full control and security requirements of the enterprise client.
None of this is lost on established enterprise vendors looking to gain an edge on leading cloud providers like Amazon in the race to secure the business data market. A prime example is Microsoft, which recently launched an Infrastructure-as-a-Service platform on its Azure cloud in what many see as a direct challenge to Amazon Web Services (AWS). Most significantly, Microsoft is supporting Linux-based workloads alongside Windows on its Azure Virtual Machines, a sign that the company recognizes that cloud revenue streams are best served by letting clients build out data environments in their own fashion.
VMs are also adept at providing advanced, highly configurable networking infrastructure, a boon to enterprises wondering how they will manage the increased traffic generated by Big Data analytics and the flood of mobile devices. ProfitBricks, for example, now offers high-availability virtual networking that uses the Address Resolution Protocol to enable configuration-free routing of IPv4 addresses to virtual machines. Because an IP address is no longer tied to a specific VM, knowledge workers who constantly shift loads between on-premises VMs can now do the same in the cloud. Company execs say this breaks down the last barrier to putting mission-critical apps in the cloud.
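The practical upshot for application code is that clients keep talking to one stable address while the platform remaps it behind the scenes. The short Java sketch below illustrates that idea only; the address and port are placeholders, and it says nothing about how ProfitBricks implements the ARP-level mechanics.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Illustrative sketch: a client dials the same "floating" address on every
// attempt, while the platform maps that address to whichever VM currently
// hosts the service. The IP and port below are placeholders.
public class FloatingIpClient {

    private static final InetSocketAddress SERVICE =
            new InetSocketAddress("203.0.113.10", 5432);  // stable service address

    public static void main(String[] args) throws InterruptedException {
        for (int attempt = 1; attempt <= 5; attempt++) {
            try (Socket socket = new Socket()) {
                socket.connect(SERVICE, 2_000);
                System.out.println("Connected via floating IP; the backing VM is irrelevant to the client.");
            } catch (IOException e) {
                // A brief failure window while the load shifts between VMs is
                // absorbed by a retry; no reconfiguration is needed because
                // the address itself never changes.
                System.out.println("Service moving between VMs, retrying: " + e.getMessage());
            }
            Thread.sleep(5_000);
        }
    }
}
```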
The idea of the anywhere/anytime VM has been the prime motivation behind most virtualization and cloud development over the past decade. And now that VMs can function as network devices, storage controllers and other infrastructure elements, there is nothing standing in the way of the fully virtual data environment.
To be sure, there are many questions as to how these environments are to be configured and managed, but experimenting with new designs is significantly easier in the virtual world than the physical one. And as infrastructure agility takes precedence over raw computing power, broad VM support will be a key factor in the health of enterprise infrastructure.