Trusting the Cloud for Mission-Critical Workloads

    The cloud is a common facet of virtually every enterprise on the planet these days, but the overriding perception is that it should be kept away from mission-critical functions.

    So it came as a surprise late last year when Verizon issued a report on the state of the cloud market indicating that 87 percent of enterprises are running mission-critical apps in the cloud, up from 60 percent two years ago. More than half of this group uses up to four cloud providers to support these functions, while a quarter spread them across 10 or more. And the trend is particularly pronounced among start-ups, many of which are eschewing internal infrastructure for an all-cloud approach that delivers substantial flexibility and, in some cases, transforms the business model outright.

    But is this wise? Does the cloud, even at this stage of its development, really have the chops to support critical workloads and applications? Or are early adopters merely setting themselves up for failure when their plans fall victim to poor reliability, availability and security?

    Perhaps not, according to a recent report from Gigaom. While concern over the cloud’s trustworthiness remains high, many organizations now have enough experience under their belts to clearly assess both its strengths and weaknesses, and to formulate strategies to shore up the latter. Security, of course, remains high on the list of concerns, but tools like access management and virtual private networks go a long way toward mitigating the risks. Meanwhile, SaaS, DBaaS and other service providers have made security and reliability top priorities and are being rewarded with increased flows of mission-critical workloads.

    It also helps when companies that are primarily responsible for legacy, in-house applications forge ties to cloud infrastructure. A case in point is Microsoft, which already dominated apps like email in the pre-cloud era and is now linking not just Exchange but the entire Office suite to its Azure cloud. At the moment, the mission-critical cloud market is small and fractured, with Microsoft, Google, Amazon and others holding single-digit shares while 90 percent of the workload still sits on legacy systems. But it stands to reason that most organizations will want to build cloud capabilities into legacy platforms rather than build entirely new ones in the cloud, at least at first. Still, the link between Exchange and Azure could prove a double-edged sword for Microsoft as revenue from high-margin, high-stability licensing models gives way to the more free-wheeling nature of the cloud.

    And it might not be much longer before the open source community starts to push mission-critical services as a means to propel the enterprise into broadly distributed hybrid cloud infrastructure. A Dutch services company called Schuberg Philis recently launched its own Mission Critical Cloud project by forking the CloudStack platform initially launched by Citrix. Forking an open source project is never a popular move, but the company says it had no choice, given the slow pace of development and the community’s disjointed response to external threats. Company execs say they hope their fork won’t deviate too much from CloudStack, but in the end they require stronger support for critical workloads, and in their view this can only come by building on a platform that provides a more stable, if less flexible, core framework than rival projects like OpenStack.

    The idea of specialized cloud infrastructure for mission-critical workloads is another example of the transformation from “the cloud” to “many clouds.” Now that many enterprises know what the cloud can do in terms of cost, flexibility, ease of use and management simplicity, organizations are starting to look for optimized solutions that give their apps and processes an edge. These can take the form of clouds built around industry verticals and compliance requirements, app-specific support for, say, ecommerce or backup, or increased reliability for the workloads that matter most.

    This will put more pressure on the enterprise to ensure that the right workload gets to the right cloud, even as business managers and other users take it upon themselves to spin up their own resources. But if it all goes well, the result will be a strong, vibrant data infrastructure that spans internal and external resources, with broad portability across the physical, virtual and software-defined layers of the stack.

    It also underscores the fact that even in fully automated, cloud-facing environments, IT will still have plenty to do.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
