If 2011 is to be the year the cloud moves from the proof-of-concept phase into deployments of actual working environments, the issue of vendor lock-in will have to take center stage for both platform and service providers.
After all, the only way in which the cloud can gain any traction at all, at least during this early phase, is to be seen as simply another extension of data center infrastructure. Without the flexibility to scale up resources at a moment's notice and shift data loads across multiple environments, the cloud is simply a more complicated version of existing off-site storage and processing services.
Of course, this goes against the grain of top enterprise platform providers that have made comfortable livings these past 30 years or so keeping customers safely within the confines of their branded hardware and software. And while broad cloud interoperability is seen as a worthy goal across the board, top vendors have done very little to ensure their platforms can safely and easily hand data and applications over to their rivals.
Enter third-party innovation. A new generation of cloud management systems is intent on making that interoperability happen. Companies like CloudSwitch are upping the ante when it comes to ensuring seamless data operations regardless of whether resources are local or cloud-based. The company just came out with version 2.0 of its enterprise software designed to enable point-and-click provisioning of Windows and Linux apps, essentially making it seem as if all systems are run locally even though resources may be hosted halfway around the world. The system is available as a downloadable software appliance for either VMware or Xen environments.
There's also a company called Racemi, which uses an image-based provisioning framework in its DynaCenter system to simplify cloud provisioning and data migration. The process delivers an image of an entire server stack to a target resource, where it can be rebooted with the OS, configurations and applications in place, rewriting network and storage configurations and all necessary drivers on the fly. The company says this is a step up from traditional scripted provisioning in that it avoids much of the manual coding required to keep systems up to date, a process that is likely to get worse in free-wheeling cloud environments.
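To make the distinction concrete, here is a minimal sketch of the image-based approach described above: capture the whole server stack as a single image, then rewrite only the environment-specific pieces (network, storage, drivers) for the target cloud. All names and fields here are illustrative assumptions, not DynaCenter's actual API.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ServerImage:
    """Hypothetical snapshot of an entire server stack."""
    os: str
    apps: tuple          # applications baked into the image
    network_cfg: str     # environment-specific settings
    storage_cfg: str
    drivers: tuple

def capture(host: dict) -> ServerImage:
    """Snapshot the stack (OS, apps, configs) as one image."""
    return ServerImage(**host)

def migrate(image: ServerImage, target: dict) -> ServerImage:
    """Deliver the image to a target resource, rewriting network,
    storage and driver settings on the fly; OS and apps are untouched."""
    return replace(
        image,
        network_cfg=target["network_cfg"],
        storage_cfg=target["storage_cfg"],
        drivers=target["drivers"],
    )

# Source server in the local data center (illustrative values).
source = {
    "os": "Linux", "apps": ("db", "webapp"),
    "network_cfg": "dc-vlan-12", "storage_cfg": "san-lun-7",
    "drivers": ("scsi", "e1000"),
}
# Target settings for a Xen-based cloud host (also illustrative).
cloud_target = {
    "network_cfg": "vpc-a", "storage_cfg": "ebs-vol",
    "drivers": ("xen-blk", "xen-net"),
}

moved = migrate(capture(source), cloud_target)
print(moved.os, moved.apps, moved.network_cfg)
```

The contrast with scripted provisioning is that nothing above re-installs or re-configures the OS and applications by hand; only the fields that differ between environments are rewritten at migration time.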
To enable broad compatibility among cloud platforms, however, there will need to be a widespread embrace of the kind of open technology that has had such a hard time gaining a foothold in traditional infrastructure. But as CIO UK's Bryan Cruickshank points out, there have been a number of positive developments on this front recently. Salesforce, for one, recently adopted the ISO 27001 standard and, along with Amazon Web Services, completed SAS 70 certification. And trade groups like the Storage Networking Industry Association (SNIA) and the Open Grid Forum are coming out with a range of compatibility and interoperability standards.
An open cloud isn't just desirable; it's necessary, says LinuxInsider's Sheng Liang. Closed systems by nature are more rigid and expensive than open ones. Since the whole reason for embracing the cloud is to increase flexibility and lower costs, systems that do not excel at those key metrics will have trouble finding a market. A cloud based on commoditized, open-source components is the only one that stands a real chance of widespread adoption.
Top enterprise vendors may never fully open their branded platforms, but when it comes to mixing and matching those environments with the resources available on the cloud, the chances are very good. It's either that, or no real cloud at all.