The cloud is still a nascent technology for much of the enterprise, but heady enthusiasm is quickly giving way to practical reality.
One of those realities is that IT is no longer in complete control of the data environment. Even in the private cloud, resources can be provisioned, utilized and abandoned, or simply left running, without the active participation of data center management. That leads to a number of thorny issues around resource allocation, data governance and policy enforcement.
One of the most disturbing trends of late is the rise of the rogue cloud. Individuals and business units now have an unprecedented ability to provision data environments on their own, often more quickly and at lower cost than going through IT. This is likely a much bigger problem than many data managers realize, since they are often kept out of the cloud provisioning loop entirely. According to cloud security firm Symform, nearly two-thirds of organizations that report not being on the cloud say they allow employees or teams to spin up their own cloud services, and a third of that group say they allow company data to be used in cloud applications. If those are your policies, it stands to reason that you are on the cloud whether you realize it or not.
In a way, this resembles the virtual sprawl of a decade ago, which ultimately led to a new class of management and control systems to ensure that VMs were provisioned and decommissioned with some kind of oversight. MTM Technologies’ Bill Kleyman says rogue cloud operations are one facet of the new “cloud sprawl,” which will require a similar shift in management systems to accommodate the highly diverse nature of the cloud. Key attributes of this new regime include agnostic management tools that operate across a wide variety of platforms; improved VM visibility and management, so IT retains control wherever those workloads reside; and a renewed focus on control pilots and proofs of concept (POCs) to enforce a clear separation between testing/development and full production environments.
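As a minimal sketch of what platform-agnostic visibility could look like in practice, the example below assumes that each provider's inventory has already been normalized into simple records; the field names, sample data and sanctioned-project list are all hypothetical, not any real cloud API.

```python
# Hypothetical inventory audit: flag VMs that lack an owner tag or sit
# outside a sanctioned project, regardless of which provider they run on.
# Record format and sample data are illustrative assumptions.

SANCTIONED_PROJECTS = {"erp-prod", "web-prod", "analytics-dev"}

def find_rogue_instances(instances):
    """Return IDs of instances with no owner tag or an unsanctioned project."""
    rogue = []
    for inst in instances:
        tags = inst.get("tags", {})
        if not tags.get("owner") or inst.get("project") not in SANCTIONED_PROJECTS:
            rogue.append(inst["id"])
    return rogue

inventory = [
    {"id": "aws-i-001", "project": "erp-prod", "tags": {"owner": "j.smith"}},
    {"id": "az-vm-042", "project": "team-sandbox", "tags": {"owner": "m.jones"}},
    {"id": "gcp-77", "project": "web-prod", "tags": {}},  # no owner recorded
]

print(find_rogue_instances(inventory))  # → ['az-vm-042', 'gcp-77']
```

The point of the sketch is the shape of the control, not the code: once inventory from every platform flows through one normalized check, rogue provisioning surfaces as an exception report rather than a surprise.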
And these are just the means to control the things we know about in the cloud. Even more problematic are all the things we don’t know, according to Backupify CEO Rob May. Even in simple SaaS operations, hidden risks pose a significant threat: zombie accounts that have been abandoned but never decommissioned on a far-off cloud, or rogue users who have access to your cloud data but are not fully authorized to use it. Worst of all is the dreaded Black Swan, an event so unprecedented that it is virtually impossible to prevent.
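A first line of defense against zombie accounts is simply auditing last-login data. The sketch below assumes account records with last-login timestamps can be exported from a SaaS admin console; the record format and the 90-day inactivity threshold are illustrative assumptions.

```python
from datetime import datetime, timedelta

# 90 days of inactivity as a stale-account threshold (an assumed policy).
STALE_AFTER = timedelta(days=90)

def find_zombie_accounts(accounts, now):
    """Return names of accounts whose last login exceeds the threshold."""
    return [
        acct["name"]
        for acct in accounts
        if now - acct["last_login"] > STALE_AFTER
    ]

# Hypothetical records exported from a SaaS provider's admin console.
accounts = [
    {"name": "a.chen", "last_login": datetime(2013, 1, 5)},
    {"name": "former.employee", "last_login": datetime(2012, 3, 20)},
]

print(find_zombie_accounts(accounts, now=datetime(2013, 2, 1)))
# → ['former.employee']
```

Run on a schedule, a check like this turns abandoned accounts from an unknown liability into a routine decommissioning queue.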
Scary stuff, indeed, although companies that specialize in backup services naturally tend to highlight all the things that can go wrong with advanced data architectures. For some organizations, the shift to the cloud should be accompanied by a shift in management strategy, away from resources and infrastructure and toward services and application performance. Logicalis recently listed five reasons why enterprises need to embrace IT Service Management (ITSM) as an integral component of their cloud strategies. The company maintains that ITSM will improve IT efficiency, better align services with business objectives, bolster IT automation and change management, and enhance user self-sufficiency. The cloud itself may be beyond your control, but the services you use and the data they carry are not.
The cloud represents such a fundamental shift in information technology that it shouldn’t come as a big surprise that the old ways of managing and organizing IT infrastructure no longer apply. The past year has been largely about testing and evaluating the cloud and its capabilities. Now that many organizations are poised to begin real-world production deployments, the time is ripe for a serious discussion on how to keep this free-flowing environment under control.
If you wait until cloud operations are in full swing, it might be too late.