Given the state of virtual and cloud-based infrastructure, it’s almost impossible not to think about end-to-end data environments residing in abstract software layers atop physical infrastructure.
But is the virtual data center (VDC) really in the cards? And if so, does it mean all data environments will soon gravitate toward these ethereal constructs, or will there still be use cases for traditional, on-premises infrastructure?
Undoubtedly, a fully virtualized data operation offers many advantages. Beyond lower capital and operating costs, it makes it far easier to support mobile communications, collaboration, social networking and many of the other trends driving the knowledge workforce to new levels of productivity.
RackSolutions’ Katrina Matthews says it is important to remember that while the technology to support the virtual data center certainly exists, we still have quite a bit of software development to complete. The VDC needs to accommodate not only all of the new applications and services, but many of the old ones as well. Once the data center is nothing more than a massive software program, it will need to integrate directly with physical servers, storage and networking—a good part of which has not even been brought into the basic virtual fold yet. Part of this process, then, will require the complete virtualization of legacy infrastructure.
Nevertheless, many top cloud providers are eager to get the ball rolling, hoping to engage the enterprise community with the next level of service. Windstream Hosted Solutions, for example, has introduced new provisioning and automation tiers that enable customized data center environments to be created in a matter of hours, rather than days. The package allows customers to spin up their own virtual machines, operating environments and even network configurations, rather than drawing on the stock resources most common in the cloud. In this way, enterprises have a greater ability to build integrated data environments that can be tied directly to legacy environments, thus avoiding cloud-level data silos.
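As a rough illustration of what such a self-service provisioning layer looks like from the customer's side, consider the following sketch. It is not Windstream's actual API; all class and field names here are hypothetical, standing in for whatever interface a real provider exposes:

```python
from dataclasses import dataclass, field

@dataclass
class VMSpec:
    """A customer-defined virtual machine, rather than a stock instance type."""
    name: str
    vcpus: int
    memory_gb: int

@dataclass
class NetworkConfig:
    """A customer-defined network segment, e.g. one that bridges to a legacy VLAN."""
    vlan_id: int
    subnet: str

@dataclass
class DataCenterEnvironment:
    """An integrated environment: the customer's own VMs plus network layout."""
    vms: list = field(default_factory=list)
    networks: list = field(default_factory=list)

    def add_network(self, net: NetworkConfig) -> None:
        self.networks.append(net)

    def add_vm(self, spec: VMSpec) -> None:
        self.vms.append(spec)

# Build a small custom environment tied to an existing (legacy) VLAN.
env = DataCenterEnvironment()
env.add_network(NetworkConfig(vlan_id=100, subnet="10.0.1.0/24"))
env.add_vm(VMSpec(name="app-01", vcpus=4, memory_gb=16))
env.add_vm(VMSpec(name="db-01", vcpus=8, memory_gb=64))
```

The point of the sketch is the design choice: the customer declares the whole environment, network topology included, instead of picking from a fixed menu of instance types.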
These kinds of personalized services may be crucial for local and regional cloud providers hoping to compete in the VDC space, especially since nearly all the major telecom providers are eyeing it as well. Everyone from Verizon and CenturyLink to T-Systems and TeliaSonera is either building or buying data center capabilities, betting that organizations will need to acquire new data capabilities quickly to meet growing user requirements. The question remains, however, whether they can leverage their networking expertise into full data center service delivery.
Indeed, some of the largest organizations might not need third-party VDC services at all, but may decide to simply build their own. The U.S. Department of Defense (DoD) is blazing a trail in this direction with the new milCloud project run by its own Defense Information Systems Agency (DISA). At the moment, the service resides on the department's unclassified NIPRNet, although it shouldn't be long before the classified version goes live on the closed SIPRNet network. DoD agencies will request services through DISA, which will then forward requests either to its own resources or to approved third-party providers.
Is this really the way of the future, though? Will local data centers soon go the way of the dinosaur, edged out by nimbler, more adaptive competition?
It is not outside the realm of possibility. Even organizations that refuse to entrust their data to outsiders won't be able to resist the low cost and flexibility that fully virtual solutions offer. And as the hardware refresh cycle runs its inevitable course, it will probably be the rare CIO who doesn't look at VDC capabilities and ask: why not?