The hybrid cloud is a hot commodity right now. Enterprises across the board are rapidly reconfiguring their internal and external architectures in a bid to gain broad scalability and flexibility, preparing for both expected and unexpected data volumes.
But while it’s tempting to view hybrids as the culmination of five decades or more of IT development, they still pose quite an architectural challenge, particularly on the integration front, and they are unlikely to provide the all-encompassing solution set that enables the anytime/anywhere data service that many people feel they are entitled to.
Nonetheless, onward, ever onward, we must go in the IT business, so hybrid clouds are most certainly in our future. According to PMG, nearly 70 percent of IT pros think hybrids are coming their way, with drivers ranging from improved application deployment to increased IT infrastructure value and better service fulfillment for an increasingly diversified and distributed workforce. Somewhat encouragingly, more than half say they are worried about cloud sprawl, an indication that while interest is peaking, many IT professionals recognize that there are right ways and wrong ways to implement the technology.
And that is the heart of the matter: how to leverage the hybrid cloud for maximum benefit to the enterprise. According to CloudVelocity’s Anand Iyengar and Gregory Ness, the mistake many IT executives make is viewing cloud technology in general as a means to solve today’s IT challenges. In truth, the cloud represents an entirely new data paradigm that will propel the enterprise to new levels of agility and scalability. In short, it’s not that the cloud will solve our problems so much as it will make them obsolete – driven out by new generations of hybrid automation stacks that oversee porting, conversion, synchronization and other facets of the hybrid infrastructure.
It also means you can finally say goodbye to the static silo-based architectures that have hampered IT flexibility for a generation, says Ness in a related post. That means organizations will be able to shift their IT budgets away from vendor-driven hardware and software environments and more toward service- and capability-oriented architectures. In this fashion, enterprises will be able to tailor their environments more to their specific needs while at the same time more closely aligning infrastructure to fluctuating data requirements. As many cloud purveyors are fond of saying: Own the base, rent the spike.
A key question in all of this is how open the hybrid cloud needs to be. Ideally, a hybrid cloud environment should enable broad interoperability on the physical, virtual and even application layers, which is the driving goal behind OpenStack, CloudStack, Eucalyptus and other open initiatives. But the very fact that there are competing solutions for the open cloud indicates that universal access is highly unlikely, so it’s not as if hybrid infrastructure will be able to extend everywhere. And as 451 Research’s Jay Lyman points out, open infrastructure should encompass not only the cloud, but software, standards, APIs and data as well. With development on all these fronts still evolving, integration and interoperability testing will remain a fact of life in the hybrid cloud.
Nonetheless, the hybrid cloud has made a lot of progress in just a few short years. And given the enterprise’s desire to leverage existing infrastructure as much as possible, the hybrid model provides strong motivation to continue investing in legacy systems while at the same time embracing the cost benefits and flexibility that external cloud architecture provides.
It’s not a perfect win-win situation, but it’s about as close as we can get in the increasingly complex IT universe.