The cloud is now such a common facet of enterprise infrastructure that many organizations seem to have lost sight of the fact that services and service providers can vary greatly in quality, reliability and overall functionality.
But just as the IT team puts traditional infrastructure proposals through a rigorous evaluation process, so too should the cloud be subjected to comparative analysis – not just against legacy systems but against rival services and providers. To date, however, there has been little guidance and few tools available to accomplish this.
Naturally, the open source community is furthest ahead when it comes to evaluating cloud offerings, particularly as the number and diversity of open services increase. The Open Data Center Alliance (ODCA), for example, recently teamed up with Intel to develop various usage models for the Intel Cloud Finder Program, which seeks to match users with participating service providers. Together with a related collaboration with Appnomic, this gives users the ability to evaluate numerous cloud offerings according to QoS performance, available services, user experience and other metrics, to ensure a proper fit between provider capabilities, user needs and legacy infrastructure capabilities.
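The precise criteria and weighting that programs like Cloud Finder apply are not spelled out here, but the underlying idea – scoring candidate providers across several metrics and weighting those scores by what matters to the business – is straightforward. The sketch below is purely illustrative Python; the metric names, weights and provider scores are assumptions, not figures from ODCA, Intel or Appnomic.

```python
# Hypothetical multi-criteria scoring of cloud providers.
# Metric names, weights and per-provider scores are illustrative only.

WEIGHTS = {
    "qos_performance": 0.35,   # e.g. latency and availability benchmarks
    "service_breadth": 0.25,   # services offered vs. services required
    "user_experience": 0.20,   # survey- or support-derived rating
    "legacy_fit":      0.20,   # ease of integration with existing systems
}

providers = {
    "provider_a": {"qos_performance": 8.5, "service_breadth": 7.0,
                   "user_experience": 6.5, "legacy_fit": 9.0},
    "provider_b": {"qos_performance": 7.0, "service_breadth": 9.0,
                   "user_experience": 8.0, "legacy_fit": 6.0},
}

def weighted_score(scores):
    """Combine per-metric scores (0-10 scale) into one weighted figure."""
    return sum(WEIGHTS[m] * scores[m] for m in WEIGHTS)

# Rank candidates from best to worst fit.
for name in sorted(providers, key=lambda p: weighted_score(providers[p]), reverse=True):
    print(f"{name}: {weighted_score(providers[name]):.2f}")
```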
Of course, the first thing enterprise executives need to realize about the cloud is that not all services are the same, says Luc Halbardier, head of Luxembourg turnkey provider POST Telecom. Location of the provider’s data center, for one, is a crucial factor when it comes to latency, legal and regulatory issues and even security and data protection. Basic infrastructure capabilities matter as well, depending on the level of integration that is required and the management capabilities you hope to establish.
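The latency side of the location question can be sanity-checked empirically before any contract is signed. The snippet below is a minimal sketch that averages TCP connect times to candidate provider endpoints; the hostnames are placeholders to be replaced with the regions actually under evaluation.

```python
# Minimal sketch: average TCP connect time to candidate provider endpoints.
# Hostnames are placeholders; substitute the endpoints under evaluation.
import socket
import time

ENDPOINTS = {
    "provider_eu_west": ("eu-west.example-provider.test", 443),
    "provider_us_east": ("us-east.example-provider.test", 443),
}

def tcp_connect_ms(host, port, attempts=5):
    """Return the average TCP connect time in milliseconds, or None if unreachable."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=2):
                samples.append((time.perf_counter() - start) * 1000)
        except OSError:
            return None
    return sum(samples) / len(samples)

for name, (host, port) in ENDPOINTS.items():
    rtt = tcp_connect_ms(host, port)
    print(f"{name}: {'unreachable' if rtt is None else f'{rtt:.1f} ms'}")
```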
One of the worst things you can do is let pricing be the sole determinant in your deployment decision, says tech consultant David Linthicum. Leading public providers like Google and Amazon are racing to the bottom on price – in essence trying to win demand from an enterprise community that would otherwise be looking elsewhere. This may be fun to watch, but beware of joining the party: it could lead to service dependencies that prove detrimental in the future. Low cost does not always translate into high value.
Many organizations also fall into the trap of calculating cloud ROI using the same methods that apply to legacy infrastructure. This comparison usually favors the cloud, but is it a case of pitting apples against oranges? According to tech writer Doug Bonderud, the time has come to develop a cloud-specific ROI, given that the monetary return of cloud vs. legacy hardware plays out over such dramatically different time scales. As many C-level executives are finding out, costs associated with “soft-side ROI” tend to creep up the longer the cloud service is in use, and in some cases can actually exceed the “hard savings” that were used to justify the cloud in the first place. Particularly as shadow IT and other service permutations take hold, it could very well be that the enterprise is paying a lot more for cloud services than it realizes.
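One way to make this concrete is to model cumulative cost over several years rather than making a single point-in-time comparison. Every figure in the sketch below is hypothetical – the point is only that steadily growing "soft" costs can erode, and eventually exceed, the hard savings used to justify the move.

```python
# Illustrative cumulative-cost comparison; all figures below are hypothetical.
# Legacy: large upfront capital expense plus flat annual running costs.
# Cloud: lower headline subscription, but "soft" costs (shadow IT,
# management overhead, unplanned usage) assumed to grow each year.

YEARS = 7

LEGACY_UPFRONT = 500_000     # one-time hardware purchase
LEGACY_ANNUAL = 60_000       # maintenance, power, admin

CLOUD_ANNUAL = 120_000       # subscription fees
SOFT_COST_START = 20_000     # year-one soft costs
SOFT_COST_GROWTH = 0.30      # assumed 30% annual creep

legacy_total = cloud_total = 0.0
for year in range(1, YEARS + 1):
    legacy_total += (LEGACY_UPFRONT if year == 1 else 0) + LEGACY_ANNUAL
    soft = SOFT_COST_START * (1 + SOFT_COST_GROWTH) ** (year - 1)
    cloud_total += CLOUD_ANNUAL + soft
    print(f"Year {year}: legacy {legacy_total:>9,.0f}   cloud {cloud_total:>9,.0f}")
```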
The cloud is an entirely new data paradigm, so it is only common sense to avoid viewing it through the same set of assumptions and usage patterns that applied under the physical regime. It will probably take time to work out all the calculations needed to establish clear and effective deployment and operational scenarios, but the good news is that, for the most part, the cloud is flexible enough to allow experimentation with multiple strategies until the enterprise hits pay dirt.
True cloud value is achievable, provided the enterprise does not commit to something that is difficult, or impossible, to undo.