Having a multi-cloud strategy these days is like having a multi-server strategy in ages past: Why trust your workloads to a single point of failure when you can move them about at will?
But while distributing resources across multiple providers fosters redundancy and eliminates vendor lock-in on one level, the enterprise should be aware that this invariably pushes those same risks to another.
It’s no surprise that upwards of 85 percent of organizations have implemented a multi-cloud strategy by now, says Datacenter Journal’s Kevin Liebl. Following major outages at AWS and Azure earlier this year, the risks of entrusting all data to a single provider have become clear. Using multiple clouds provides clear advantages for disaster recovery, data migration, workload optimization, and a host of other functions. By following a few simple implementation guidelines, such as building multi-cloud-native capabilities directly into storage and architecting the entire environment around redundancy and rapid data movement, organizations can maintain a high degree of resiliency even in the face of major disruptions.
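As one illustration of what architecting around redundancy and rapid data movement can mean in practice, consider a minimal Python sketch that writes each object to two clouds at once, assuming AWS and Azure as the targets. The bucket and container names are hypothetical, credentials are resolved from the environment, and the error handling is deliberately simplistic; this is a sketch of the idea, not any vendor’s prescribed pattern.

```python
# Minimal sketch, assuming AWS S3 and Azure Blob Storage as the two targets:
# replicate each object to both clouds so losing one provider does not mean
# losing the data. Bucket/container names are hypothetical; credentials come
# from the environment (standard boto3 resolution and the conventional
# AZURE_STORAGE_CONNECTION_STRING variable).
import os

import boto3
from azure.storage.blob import BlobServiceClient

AWS_BUCKET = "example-primary-bucket"   # hypothetical
AZURE_CONTAINER = "example-replica"     # hypothetical

s3 = boto3.client("s3")
blob_service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)

def put_redundant(key: str, data: bytes) -> None:
    """Write the same object to both providers and surface partial failures."""
    errors = []
    try:
        s3.put_object(Bucket=AWS_BUCKET, Key=key, Body=data)
    except Exception as exc:            # broad catch keeps the sketch short
        errors.append(f"aws: {exc}")
    try:
        blob_service.get_blob_client(AZURE_CONTAINER, key).upload_blob(
            data, overwrite=True
        )
    except Exception as exc:
        errors.append(f"azure: {exc}")
    if len(errors) == 2:
        raise RuntimeError(f"write failed on both clouds: {errors}")
    if errors:
        # One copy landed; a background repair job can re-replicate later.
        print(f"degraded write for {key}: {errors[0]}")
```

Whether the second write happens inline, as here, or through an asynchronous repair queue is a latency-versus-consistency trade-off each organization will weigh differently.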
Some people are already calling this Cloud 2.0. As HyperGrid co-founder Kelly Murphy explains, it is distinguishable from Cloud 1.0 in that the enterprise is no longer limited to a single provider for all of its cloud needs. Instead, with a robust cloud management platform in place, the enterprise is in the driver’s seat when it comes to where and how it pushes data off-premises. Not only does this help lower costs, it also allows for a more fine-grained data environment, in which resource configurations can be more closely matched to the needs of applications. And this trend will only accelerate as more of the enterprise workload is handled by containerized applications and microservices.
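To make that fine-grained matching concrete, here is a hypothetical sketch of the kind of placement logic such a platform automates: each workload declares its needs, and a simple policy picks the cheapest offering that satisfies them. The providers, prices, and attributes below are invented for illustration and are not drawn from HyperGrid or any other vendor.

```python
# Hypothetical placement policy: match each workload's declared requirements
# to the cheapest provider offering that satisfies them. All providers,
# prices, and attributes here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Offering:
    provider: str
    vcpus: int
    mem_gb: int
    gpu: bool
    hourly_usd: float

@dataclass
class Workload:
    name: str
    vcpus: int
    mem_gb: int
    needs_gpu: bool = False

CATALOG = [
    Offering("cloud-a", 4, 16, False, 0.19),
    Offering("cloud-b", 4, 16, False, 0.17),
    Offering("cloud-a", 8, 61, True, 0.90),
]

def place(workload: Workload) -> Offering:
    """Return the cheapest offering that meets the workload's requirements."""
    fits = [
        o for o in CATALOG
        if o.vcpus >= workload.vcpus
        and o.mem_gb >= workload.mem_gb
        and (o.gpu or not workload.needs_gpu)
    ]
    if not fits:
        raise ValueError(f"no offering satisfies {workload.name}")
    return min(fits, key=lambda o: o.hourly_usd)

print(place(Workload("api-server", vcpus=4, mem_gb=8)).provider)  # cloud-b
```

Real platforms fold in far more signals (data gravity, compliance zones, egress costs), but the principle is the same: the policy, not the provider, decides where the workload lands.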
But herein lies the rub: If dependency on a single cloud provider is bad, what about dependency on a single cloud management platform? Companies like HyperGrid and Nutanix are well on their way to becoming the next dominant force in IT infrastructure precisely because they act as the operating system for distributed, multi-cloud architectures. Using tools like one-click provisioning and seamless integration between public and private infrastructure, these platforms are making it easier than ever to create hybrid cloud architectures that IT techs can manage as easily as, if not more easily than, a fully localized environment.
This is a powerful draw for anyone with complex workloads to manage, which will likely become increasingly common as the Internet of Things fosters reliance on device-driven workflows and advanced analytics. Data integration firm Talend is already targeting this opportunity with a new version of its Data Fabric software that aims for seamless integration with top cloud platforms like AWS, Azure and Cloudera. Talend claims a 20-fold increase in migration speeds and the ability to repurpose applications designed for one cloud for use on another, allowing organizations to cut maintenance and development costs, gain new levels of flexibility and open room for product innovation.
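Talend’s product aside, the general mechanism that makes this kind of repurposing possible is familiar: write the application against a narrow interface and hide each cloud behind an adapter. The sketch below, with hypothetical class, bucket, and container names, shows the pattern; it is emphatically not Talend’s implementation.

```python
# Generic portability pattern (not Talend's mechanism): code against a small
# interface, then swap provider-specific adapters to move an application
# between clouds without touching application logic. Names are hypothetical.
from typing import Protocol

class ObjectStore(Protocol):
    def get(self, key: str) -> bytes: ...
    def put(self, key: str, data: bytes) -> None: ...

class S3Store:
    def __init__(self, bucket: str):
        import boto3
        self._s3, self._bucket = boto3.client("s3"), bucket
    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()
    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

class AzureBlobStore:
    def __init__(self, conn_str: str, container: str):
        from azure.storage.blob import ContainerClient
        self._client = ContainerClient.from_connection_string(conn_str, container)
    def get(self, key: str) -> bytes:
        return self._client.download_blob(key).readall()
    def put(self, key: str, data: bytes) -> None:
        self._client.upload_blob(key, data, overwrite=True)

def archive_report(store: ObjectStore, report: bytes) -> None:
    # Application logic sees only the interface; the cloud behind it can
    # change without a rewrite.
    store.put("reports/latest.bin", report)
```

The catch, of course, is that the interface itself becomes the thing everyone depends on, which is precisely the tension the rest of this piece is circling.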
Of course, to function in a data-driven environment, the enterprise will have to build dependencies on key development and resource management platforms. With a few simple precautions, however, the cloud management stack can be made just as resilient as any other system.
But the goal of building redundancy while eliminating vendor lock-in is a tricky one. It’s the nature of the business that vendors are eager to help break the enterprise’s dependency on rival platforms while deepening its dependency on their own.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.