Although detractors may decry it as a waste of money and resources, the enterprise seems to have settled on the hybrid cloud as the infrastructure play for the emerging digital services economy.
But far from being a monolithic construct, the hybrid cloud consists of many moving parts, and deployment configurations vary widely in ways that can significantly affect the performance of key applications.
According to a recent report by Dimension Data, hybrid IT is becoming the standard enterprise model, but there is no single playbook for either configuration or operation. Indeed, the many motivations for building a hybrid cloud mean that architectures will vary greatly depending on whether the enterprise is interested in fulfilling new user demands, lowering costs, managing the IT workforce or streamlining internal infrastructure. About the only common factor among hybrid strategies at this point is that most organizations expect management issues like data migration, automation and security to be top challenges.
The key differentiators among hybrid cloud deployments will most likely reside in the API, according to CIO Review. Choosing the proper API for a given use case will dramatically affect functions like component and workflow management, operating parameters and state management. Some API models, for instance, are more appropriate for hybrids with a web-based front end tied to back-end transaction processing. Others are tailored to functions like cloud bursting or offloading analytics processes. In each case, the API should be geared toward the appropriate data formats and workflow characteristics.
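To make the trade-off concrete, here is a minimal, purely illustrative sketch of the kind of decision logic described above. The function name, workload traits and API-style labels are all hypothetical, not taken from CIO Review; the point is simply that workload characteristics, not habit, should drive API selection.

```python
def suggest_api_style(workload: dict) -> str:
    """Map coarse workload traits to a plausible API style (illustrative only)."""
    # Web front end tied to back-end transaction processing
    if workload.get("front_end") == "web" and workload.get("back_end") == "transactional":
        return "synchronous REST front end with queued hand-off to back-end processing"
    # Cloud bursting: capacity spills over to public infrastructure on demand
    if workload.get("pattern") == "cloud_bursting":
        return "provisioning API with autoscaling hooks"
    # Offloading analytics: large batch transfers, results retrieved asynchronously
    if workload.get("pattern") == "analytics_offload":
        return "bulk-transfer API with asynchronous job polling"
    return "general-purpose REST"

print(suggest_api_style({"pattern": "cloud_bursting"}))
```

A real evaluation would weigh the data formats and workflow characteristics the article mentions; the sketch only shows that each hybrid pattern points toward a different API shape.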
The enterprise should also take care to build high availability and disaster recovery (HADR) as core elements of its hybrid model, says Michael Otey, president of SQL developer TECA Inc. Today’s data services require extremely high uptime and ubiquitous connectivity across a global footprint, but implementing this on hybrid infrastructure is different from doing so in the local data center. One common mistake is to modernize heterogeneous environments without enhancing data protection capabilities, which can limit the availability and recovery capabilities of virtual machines. Key steps to take in a hybrid setting include stipulating HADR in the SLA, establishing regular replication schedules, minimizing WAN latency and continually testing the HADR solution.
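The replication-schedule and continual-testing steps above lend themselves to simple automated checks. The sketch below, using only hypothetical SLA values, verifies that the most recent replica is fresh enough to meet a recovery point objective (RPO), which is one way to stipulate and then continually test HADR against an SLA.

```python
from datetime import datetime, timedelta

# Hypothetical SLA target; the real value comes from the negotiated agreement.
RPO = timedelta(minutes=15)  # maximum tolerable data loss

def replication_within_rpo(last_replica: datetime, now: datetime,
                           rpo: timedelta = RPO) -> bool:
    """Return True if the age of the newest replica does not exceed the RPO."""
    return (now - last_replica) <= rpo

# A replica taken 10 minutes ago satisfies a 15-minute RPO;
# one taken 40 minutes ago does not.
now = datetime(2017, 1, 1, 12, 0)
print(replication_within_rpo(datetime(2017, 1, 1, 11, 50), now))  # True
print(replication_within_rpo(datetime(2017, 1, 1, 11, 20), now))  # False
```

Running a check like this on a schedule, with WAN latency folded into the replica timestamps, turns "continually testing the HADR solution" from a periodic audit into routine monitoring.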
But even in the course of normal operations, many enterprises are not taking advantage of the flexibility that hybrids bring to the table. One key problem, says Nutanix’s Andre Leibovici, is failing to establish data locality for applications being pushed out to users. In the hybrid cloud, data sets should follow applications, not the other way around, and the best way to do that is through live migration across an integrated compute/storage/network fabric controlled by a single management plane. In this way, the enterprise can move both applications and data to the same host in a dynamic fashion that supports the highest level of performance to the user base.
In all likelihood, the enterprise will employ multiple hybrid models in order to provide optimal application support. This will require a fair bit of coordination on the management stack and will undoubtedly call for greater levels of automation as data loads increase and the environment scales.
For IT, the irony will be that while direct responsibility for infrastructure diminishes, the challenges of multi-cloud, multi-architecture coordination will mount.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.