The enterprise is heavily invested in legacy infrastructure but is also rapidly ramping up its cloud footprint, both at home in the data center and on third-party resources. Quite naturally, this points to the hybrid cloud as the only logical way to leverage both of these investments to their full potential.
Aside from that broad goal, organizations face an endless stream of options when it comes to devising the right hybrid cloud for each workload, or even just deciding which workloads are best suited to public, private, hybrid, and legacy virtual and bare-metal infrastructure. It’s a far cry from the old days, when there was only one basic setup to choose from.
According to tech consultant Andrea Knoblauch, the rationale for the hybrid cloud used to be simple: If you wanted to employ advanced computing architectures but were hesitant to go all-in on the public cloud, then hybrid was the way to go. Increasingly, though, hybrid architectures are emerging as a go-to solution in their own right, offering levels of flexibility and integration that afford unique usage models not available with any other construct.
This is partly why top vendors from HP and Cisco to VMware and Microsoft are touting hybrid clouds even as they maintain presences in both all-public and all-private infrastructure. Some may argue that this is merely an attempt at protecting legacy revenue streams, but the continued R&D targeting hybrids casts doubt on that theory. At VMworld, for example, hybrid architectures saw a raft of improvements: Site Recovery Manager and Disaster Recovery Service were added to the vCloud Air platform, along with object storage capabilities and an on-demand SQL Server offering that lets organizations extend internal resources to the public cloud, and vice versa.
The problem with most hybrid platforms, however, is that they still tend to treat resources as monolithic blocks of compute, storage and networking, making it hard to optimize deployments for key workloads. A start-up called Velostrata is looking to break this mold by decoupling compute and storage, which the company says allows for more efficient scalability across complex, dynamic architectures. In this way, organizations can more effectively apply security and other requirements to hybrid workloads and simplify the process of shifting from public back to private resources, since they no longer have to provision entire compute/storage clusters at each transition. There is also the added benefit of lowering costs by spinning up only what is needed on the public cloud.
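The cost argument above can be made concrete with a toy model. This is a minimal sketch, not Velostrata's actual mechanism: the workload fields, unit counts, and function names are all invented for illustration, and "units" stand in for whatever a provider actually bills.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    compute_units: int   # hypothetical vCPUs needed during a burst
    storage_units: int   # hypothetical storage the workload references

def monolithic_burst_cost(w: Workload) -> int:
    """Coupled model: each transition to the public cloud provisions a
    full compute/storage cluster, so storage moves along with compute."""
    return w.compute_units + w.storage_units

def decoupled_burst_cost(w: Workload) -> int:
    """Decoupled model: only compute is spun up in the public cloud;
    storage stays in place and is reached remotely or via a cache."""
    return w.compute_units

burst = Workload(compute_units=16, storage_units=40)
print(monolithic_burst_cost(burst))  # 56 units provisioned per transition
print(decoupled_burst_cost(burst))   # 16 units provisioned per transition
```

The point of the sketch is simply that when storage dominates a workload's footprint, decoupling shrinks what must be provisioned at every public/private transition.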
The cost issue is usually central to any cloud architecture, but it is not always easy to tell whether an internal or external cloud is really the better option, says IT Business Edge’s Mike Vizard. This is why companies like Cloud Cruiser are adding new templates to their financial management services. The new CloudSmart-Now system collects cost data from AWS, Azure, OpenStack and other platforms and runs it through a tailored analytics engine to give the enterprise a more accurate picture of where the money is going in the cloud and whether it is producing an adequate return.
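The basic pattern behind this kind of tooling — pool cost records from several platforms, then judge each workload's return — can be sketched in a few lines. This is an illustrative mock-up, not Cloud Cruiser's API: the record fields, revenue figures, and the 1.5x ROI threshold are all assumptions for the example.

```python
from collections import defaultdict

# Hypothetical cost records, as they might be pulled from each platform's
# billing feed (provider names from the article; numbers invented).
cost_records = [
    {"provider": "AWS",       "workload": "web",       "cost": 1200.0},
    {"provider": "Azure",     "workload": "web",       "cost": 300.0},
    {"provider": "OpenStack", "workload": "analytics", "cost": 800.0},
]

# Hypothetical revenue attributed to each workload.
workload_revenue = {"web": 4000.0, "analytics": 900.0}

def spend_by_workload(records):
    """Aggregate multi-provider cost records into per-workload totals."""
    totals = defaultdict(float)
    for r in records:
        totals[r["workload"]] += r["cost"]
    return dict(totals)

def adequate_return(totals, revenue, min_roi=1.5):
    """Flag whether each workload's revenue/cost ratio meets min_roi."""
    return {w: revenue.get(w, 0.0) / c >= min_roi for w, c in totals.items()}

totals = spend_by_workload(cost_records)
print(totals)                                    # {'web': 1500.0, 'analytics': 800.0}
print(adequate_return(totals, workload_revenue)) # {'web': True, 'analytics': False}
```

A real analytics engine would of course weigh far more signals, but even this shape of report — spend per workload, flagged against an expected return — is what makes the internal-versus-external question answerable with data rather than guesswork.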
All of these options for the hybrid cloud represent a double-edged sword, of course. On the one hand, resources and architectures can be optimized in myriad ways for specific workloads. On the other, it will become increasingly difficult to determine the right way to support emerging applications and services.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.