Conventional thinking holds that cloud infrastructure is someone else’s problem. As long as service levels are being maintained, the enterprise need not worry about the design of the provider’s data center or the architectural underpinnings of its operation.
This is short-sighted at best and downright dangerous at worst. As long as your workloads are in the cloud, the cloud data center is your data center. If it gets hacked, you get hacked. If it goes down, you go down. So rather than simply porting data to the cloud and forgetting the rest (which is how many non-IT business managers handle it), the enterprise should conduct a full review of any potential provider's facilities to ensure they meet the same requirements you would place on an on-premises build-out.
This issue could become critical in the coming years as organizations increase their use of both internal and external resources. According to 451 Research, spending on both the cloud and the local data center is due to rise in the coming decade, but with a caveat: spending on local resources is largely aimed at consolidation, shrinking physical footprints and increasing resource utilization, while spending on cloud and colocation is aimed at pushing more of the routine, non-critical workload onto third-party infrastructure. But non-critical does not mean unimportant, so it is incumbent upon IT to take stock of where data is going and how it is being supported.
One of the more unfortunate aspects of virtualization and the cloud is the tendency for even experienced enterprise managers to forget that, at some point, abstract architectures meet the physical world. As StackIQ’s Don MacVittie noted recently, all clouds are built on hardware, so even if your external data environment doesn’t come crashing down some day, poorly designed cloud centers can certainly affect performance, agility, scalability and cost. Debate continues to swirl around the merits of a large provider like Amazon or one of the legions of small, local providers, but even this is missing the mark. The real determinant is whether the infrastructure you are leasing is appropriate for the use cases you have in mind. If not, it’s probably best to shop around.
Part of this challenge is finding verifiable information on the inner workings of the cloud provider. Providers will undoubtedly put their best foot forward in order to gain your business, but there are few resources that can independently verify claims about physical infrastructure. Even the Uptime Institute's tiered rating system had to be overhauled recently after it was discovered that some players were touting Tier III certification for the design of their facilities while hiding the fact that the final construction met a lower standard. Going forward, the organization will award certification only upon completion of the data center.
Of course, one of the best ways to verify data center claims is to go see the facility yourself. As Fortrust’s Rob McClary points out, a proper inspection should cover three basic elements: operations management, equipment/infrastructure, and the reputation of the provider. Key information within these categories should include clarity on the provider’s maintenance windows, expertise and error mitigation, as well as equipment lifecycle policies, management platforms and both mechanical and electrical capabilities. It also helps to take a look at the financial health of the provider to make sure they are a sound partner for the long term.
At the end of the day, IT is still responsible for ensuring the health and viability of the enterprise data environment. Passing that responsibility to a third-party provider won’t win many points with the front office when their data platforms suddenly go dark. After all, it’s not the provider’s job on the line – it’s yours.
Availability and reliability should be a part of any cloud agreement, but should rely on more than mere promises. Gaining a deep understanding of how the provider operates and what resources they bring to bear is the best way to ensure that your needs will be met over the long term.
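Those availability promises are easier to evaluate once they are translated into concrete numbers. As a rough illustration (the percentages and function below are hypothetical examples, not drawn from any particular provider's SLA), an uptime figure converts to allowable downtime like so:

```python
# Sketch: what an SLA uptime percentage permits in raw downtime.
# Illustrative figures only -- check the actual agreement's definitions
# of "downtime," measurement windows and exclusions.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def allowed_downtime_minutes(sla_percent: float,
                             period_minutes: int = MINUTES_PER_YEAR) -> float:
    """Return the downtime (in minutes) a given uptime SLA permits over a period."""
    return period_minutes * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.95, 99.99):
    print(f"{sla}% uptime allows about "
          f"{allowed_downtime_minutes(sla):.0f} minutes of downtime per year")
```

The gap is striking: "three nines" (99.9 percent) still permits nearly nine hours of outage a year, while 99.99 percent permits under an hour, which is why the fine print around how downtime is measured matters as much as the headline percentage.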
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.