The cloud has proven itself to be an effective, efficient means to scale resources as the enterprise tries to cope with rising data loads and increasingly complex infrastructure challenges. But is it ready for prime time? Are we at a tipping point for the widespread migration of mission-critical applications to public cloud services?
This is more than just an academic question, given that many organizations have spent decades building rock-solid safety and availability into traditional infrastructure in order to keep core business activities afloat. Turning those responsibilities over to the cloud is not just another step in the evolution of data environments but a giant leap of faith, one that places crucial aspects of the business on still largely unproven infrastructure.
Vendor-driven surveys should always be taken with a grain of salt, but if the latest report from Virtustream is even half-right, it seems many top data executives are ready to make that leap. The company reports that nearly 70 percent of respondents to a recent survey say they are planning to move mission-critical apps to the cloud within the next year. Although security, risk and loss of control still rank among the top concerns, the low cost of the cloud compared to traditional infrastructure is causing many organizations to put their fears aside. ERP applications have emerged as the leading candidates for groups looking to expand beyond mere cloud-based storage and backup.
Of course, not everyone views mission-critical applications in the same light. According to Rich Quick, founder of professional training startup headroom.io, some apps require nothing less than 100 percent availability, while others can withstand a few hours of downtime per year without the sky falling. No one gets perfect uptime, not even on legacy infrastructure, but the fact remains that the cloud is nowhere near the five nines (99.999 percent) of availability considered the gold standard for mission-critical apps. Two nines (99 percent) is more likely.
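The gap between those two tiers is easy to underestimate until you turn the percentages into downtime. A short calculation, using a 365-day year, makes it concrete:

```python
# Annual downtime implied by common availability tiers ("nines").
# Figures use a 365-day year, so they are approximate.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_percent: float) -> float:
    """Minutes of downtime per year at a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for label, pct in [("two nines", 99.0),
                   ("three nines", 99.9),
                   ("five nines", 99.999)]:
    print(f"{label} ({pct}%): {downtime_minutes_per_year(pct):,.1f} min/yr")
```

Five nines works out to roughly 5.3 minutes of downtime per year; two nines allows about 5,256 minutes, or more than three and a half days. That difference is the whole argument over whether the cloud is ready for mission-critical work.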
It is also important to note that simply putting apps on the cloud is only the first step. Far trickier is integrating them with legacy applications and infrastructure, as well as with resources that exist on other clouds. As PandoDaily’s Fritz Nelson notes, new REST APIs have made this task easier, but many organizations still maintain in-house staff to oversee the process or risk recreating the very silo-style architectures that cloud and virtualization were supposed to eliminate. New systems like SnapLogic, however, are making headway by providing non-programmers with simple drag-and-drop tools to integrate applications across disparate infrastructure.
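At its simplest, that REST-based integration amounts to pulling records from one system's API, reshaping them, and pushing them to another. The sketch below illustrates the pattern; the endpoints and field names are hypothetical, and the transport is injected so it can be stubbed out without a network:

```python
# A minimal sketch of REST-style integration between a legacy system and a
# cloud service. The URLs and record fields are hypothetical; the point is
# the pattern: fetch records from a source API, reshape, post to a target.
import json
from typing import Callable

def sync_records(fetch: Callable[[str], bytes],
                 post: Callable[[str, bytes], int],
                 source_url: str, target_url: str) -> int:
    """Copy records from a source REST endpoint to a target one.

    `fetch` and `post` are injected so the transport (urllib, requests,
    a vendor SDK) can be swapped or stubbed out for testing.
    Returns the number of records the target accepted (HTTP 201).
    """
    records = json.loads(fetch(source_url))
    sent = 0
    for rec in records:
        payload = json.dumps({"id": rec["id"], "name": rec["name"]}).encode()
        if post(target_url, payload) == 201:
            sent += 1
    return sent

# Example run with stubbed transport (no network required):
fake_fetch = lambda url: json.dumps([{"id": 1, "name": "invoice-1"}]).encode()
fake_post = lambda url, body: 201
print(sync_records(fake_fetch, fake_post,
                   "https://legacy.example/api", "https://cloud.example/api"))
# prints 1
```

Even this toy version hints at why in-house oversight persists: someone has to own the field mappings, error handling, and retries, which is exactly the gap drag-and-drop tools like SnapLogic aim to close.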
In fact, bridging the divide between cloud-native and traditional enterprise applications is emerging as a growing trend in the automation and orchestration field. Citrix’s new CloudPlatform 4.2, for example, offers a single Apache CloudStack-based control plane that allows organizations to gradually increase their reliance on cloud services as needs dictate. Citrix execs say this shifts the focus to the application environment rather than the infrastructure, allowing organizations to provision resources according to actual user requirements rather than the traditional practice of provisioning for peak utilization.
Even if the enterprise starts to view the cloud as a mission-critical platform, there is no reason to expect a mad rush to put everything on non-traditional infrastructure. The steady rise of private clouds gives every reason to believe that mission-critical apps will, for the most part, stay within the confines of the data center, with cloud-bursting capability at the ready should activity spike.
And it is almost certain that every organization will have its own crown jewels that are simply too valuable to be trusted to anything but the most reliable infrastructure. But that’s a call that all CIOs will have to make on their own.