The enterprise seems pretty set on the hybrid cloud as the preferred architecture for scale-out virtual infrastructure. This is not a slam dunk, however, because while hybrids do provide high degrees of flexibility and control over the data environment, they also introduce greater complexity and thornier integration challenges than all-public and all-private solutions. But since we are talking about software-defined infrastructure, the hope is that sophisticated operating systems and middleware solutions will mask much of this complexity, leaving the enterprise free to engage in higher-level efforts to enhance the value of data.
So far, so good. But the next step is determining what kind of management system is appropriate for the enterprise business model. What sorts of tools are needed? Where should it reside? Should it be proprietary or open source? And how can any one system be expected to corral not only the multitude of vendor solutions in the legacy data center, but everything in the cloud as well?
To be sure, there will be no shortage of traditional vendor solutions. NetApp’s new ONTAP 9 software aims to integrate legacy and emerging technologies with Flash storage and software-defined architectures to create a unified data fabric that spans on-premises and cloud resources. As a Flash-optimized solution provider, NetApp can offer an all-Flash array featuring 15 TB SSDs and is even offering a guaranteed 4:1 storage efficiency ratio through its FlashAdvantage 3-4-5 program.
Meanwhile, HPE is out with the new Cloud Suite as part of its OpenStack-based Helion platform aimed at fostering multi-vendor cloud orchestration, automation and management. The company described it to CRN’s Steven Burke as the “secret sauce” that will support distributed applications and management functionality across diverse resource sets. In part, this is due to the suite’s tiered set of offerings, which let users rapidly deploy a base-layer management stack and then supplement it with higher-order PaaS and DevOps tools and, finally, a full cloud brokering and analytics-driven automation solution. (Disclosure: I provide content services to HPE.)
And then there are open source projects like Apache Mesos that are drawing vendor support in the form of both financing and technology. HPE and Microsoft led a round of financing that funneled $73.5 million to Mesosphere, which is leveraging Mesos as a hybrid data center operating system (DCOS). And this comes despite the fact that the HPE Helion system and Microsoft Azure provide their own management stacks, like the Cloud Suite, that link local and distributed cloud resources.
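To give a sense of what management on top of Mesos looks like in practice, frameworks such as Marathon in the Mesosphere stack accept declarative application definitions that the scheduler then keeps running across the cluster. A minimal definition looks roughly like this (the service itself is a made-up example, not from any of the vendors above):

```json
{
  "id": "/web-frontend",
  "cmd": "python3 -m http.server 8080",
  "cpus": 0.5,
  "mem": 128,
  "instances": 3
}
```

The operator declares the desired state — three instances, each with half a CPU and 128 MB of memory — and the platform decides where in the data center (or cloud) they actually land, which is precisely the abstraction that makes a DCOS attractive for hybrid deployments.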
Also new to the scene is NephoScale, developer of the NephOS cloud operating system based on the OpenStack Liberty release that stresses rapid deployment and easy management of hybrid architectures. Besides leveraging overlay and underlay technologies for streamlined cloud building, NephOS features full-stack auto-installation, automated asset management, integrated SDN and NFV capabilities, and both virtual and bare-metal provisioning – all focused on getting the hybrid cloud up and running in less than 24 hours.
And if all this isn’t enough, it turns out that cloud providers themselves are creating their own hybrid management systems to better support client workloads. According to Forrester, many of these solutions match or exceed the capabilities of vendors and open source projects because they were designed specifically for the cloud infrastructure that the enterprise has provisioned. The drawback, of course, is that they generally don’t extend across multi-provider cloud architectures, increasing the risk of isolating workloads in cloud-based silos.
One thing is certain, however: hybrid cloud architectures will be so complex that much of the daily activity that currently occupies IT’s time will have to be automated. But this does not mean that once it goes live the hybrid cloud is on auto-pilot. There will still be plenty of hands-on work, first to build and integrate the management stack, and then to fine-tune the policies that will govern its operation.
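As a toy illustration of the kind of policy tuning described above — the rule names and logic here are entirely hypothetical and not tied to any vendor’s product — a placement policy might weigh data sensitivity and burst behavior before deciding whether a workload lands on private or public resources:

```python
# Hypothetical sketch of policy-driven workload placement in a hybrid cloud.
# All policy names and rules are illustrative, not a real management API.

def place_workload(workload, policies):
    """Return 'private' or 'public' for a workload based on simple rules."""
    # Sensitive data stays on-premises if the policy says so.
    if workload.get("sensitive") and policies.get("sensitive_stays_private", True):
        return "private"
    # Bursty workloads go to the public cloud for elastic capacity.
    if workload.get("burst") and policies.get("burst_to_public", True):
        return "public"
    # Otherwise fall back to whichever tier the cost policy prefers.
    return "public" if policies.get("prefer_public_cost", False) else "private"

policies = {"sensitive_stays_private": True, "burst_to_public": True}
print(place_workload({"name": "payroll", "sensitive": True}, policies))  # private
print(place_workload({"name": "render-farm", "burst": True}, policies))  # public
```

The fine-tuning work lies in the `policies` dictionary: as regulations, costs and capacity change, IT adjusts the rules rather than moving workloads by hand.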
And the fun part is, with a software-defined data ecosystem, the possibilities are no longer limited by what the hardware can support, but only by what the human mind can imagine.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.