The Developing Art of Cloud Bursting

Arthur Cole

Of all the wonders that the cloud brings to the enterprise data environment, none is more intriguing than cloud bursting.

For decades, organizations that wanted to ensure they had the resources to handle spikes in traffic had to build that capacity themselves and then watch it sit idle most of the time, or at best shift loads among existing infrastructure to extend resource lifecycles. And woe to the CIO who devised an offload infrastructure that proved inadequate to the task.

In the cloud, however, you have a ready-made set of resources available at a moment’s notice, and you have to pay only for what you use. But the devil, as they say, is in the details, and even at this stage of the cloud transformation, it seems that providing instant, or even near-instant, burst capabilities is not as easy as it sounds.
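The economics are easy to illustrate with a back-of-the-envelope comparison. All of the figures below are hypothetical, chosen purely to show the shape of the trade-off between provisioning for peak in-house and renting burst capacity on demand:

```python
# Back-of-the-envelope comparison: owning peak capacity vs. bursting to the cloud.
# All prices and workload figures are hypothetical, for illustration only.

HOURS_PER_MONTH = 730

def owned_cost(peak_servers, cost_per_server_hour=0.50):
    """Owned capacity is paid for around the clock, whether used or not."""
    return peak_servers * cost_per_server_hour * HOURS_PER_MONTH

def burst_cost(baseline_servers, burst_servers, burst_hours,
               cost_per_server_hour=0.50, cloud_per_server_hour=0.75):
    """Baseline load runs in-house; burst capacity is rented only when needed."""
    base = baseline_servers * cost_per_server_hour * HOURS_PER_MONTH
    burst = burst_servers * cloud_per_server_hour * burst_hours
    return base + burst

# Say steady load needs 10 servers, but peaks demand 40 more for ~20 hours a month:
always_on = owned_cost(peak_servers=50)
hybrid = burst_cost(baseline_servers=10, burst_servers=40, burst_hours=20)
print(f"Provision for peak: ${always_on:,.2f}/mo")
print(f"Burst to cloud:     ${hybrid:,.2f}/mo")
```

Even with a premium per-hour rate for rented capacity, paying for the burst only during the hours it is actually needed dominates paying for idle peak capacity all month.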

For one thing, cloud infrastructure is different from most legacy data center infrastructure, and that means applications that run smoothly at home do not necessarily port directly over to the cloud. As well, there are networking issues to work out, particularly when it comes to replicating LAN-like service on the WAN.


This is why many cloud platforms are starting to focus more directly on burst capabilities. A key example is French developer Siaras, which recently unveiled the OpenStack-based cloudScape system designed to provide seamless federation between local and cloud infrastructure. The idea is to allow carrier network providers to establish WAN-as-a-Service (WANaaS) connectivity across multi-cloud environments. The system features a mix of on-demand virtual WAN solutions, as well as automated orchestration, Traffic-Engineering-as-a-Service (TEaaS) and virtual points of presence that can be established at carrier or third-party cloud facilities, allowing virtual data centers to be created on-demand.

As well, a company called Bracket Computing has developed a new approach to standard IaaS service by implementing a new layer of abstraction – a metavisor – between the OS and underlying IaaS hardware. This is used to wrap applications and data within a “computing cell” that can provide performance guarantees, data encryption and other services across multiple clouds. In this way, hybrid cloud applications can be burst across internal and external resources while maintaining full isolation from underlying physical infrastructure.


Cloud bursting isn’t just the province of the start-up community, however. NetApp, for one, is working on a number of techniques designed to bridge the gap between block-level legacy storage environments and the file-level solutions found in public clouds like AWS. As Enterprise Tech explained recently, the next phase of the company’s ONTAP solution is to port the legacy storage OS to the cloud, essentially recreating the home solution on third-party infrastructure. For the moment, the system is aimed at application dev/test functions, mirroring local FAS arrays using SnapMirror and the OnCommand management suite. However, it can ultimately serve as a full replacement for the NetApp Private Storage service that is currently running in AWS, Microsoft Azure and IBM SoftLayer.

It is important to keep in mind that bursting is about more than just finding a home for excess data loads, says Rackware CEO Sash Sunkara. In fact, it plays a vital role in data protection and availability as well, not to mention improved performance across the entire data ecosystem. When applications and processes fight over dwindling resources, the result is increased latency and diminished performance and productivity, for critical and non-critical workloads alike. A robust bursting environment, even one that is not on-demand, is crucial to ensuring that workloads have the resources they need to accomplish their tasks. This ultimately allows the enterprise to scale operations to the needs of the business, rather than the limits of its infrastructure.
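The scheduling logic behind a burst is conceptually simple, even if the plumbing underneath is not. A minimal sketch of the placement decision – keep critical workloads on local infrastructure and overflow the rest to the cloud once utilization crosses a threshold – might look like this (all names and thresholds are hypothetical, not any vendor's actual API):

```python
# Hypothetical sketch of a burst placement policy: fill local capacity up to a
# utilization threshold, critical workloads first, and overflow the rest to cloud.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    cpu_demand: float   # cores requested
    critical: bool      # critical workloads get first claim on local resources

def place_workloads(workloads, local_capacity, burst_threshold=0.85):
    """Assign each workload to 'local' until utilization would cross the
    burst threshold, then send the remainder to 'cloud'."""
    placements = {}
    used = 0.0
    limit = local_capacity * burst_threshold
    # Critical workloads first; within each class, larger demands first.
    for wl in sorted(workloads, key=lambda w: (not w.critical, -w.cpu_demand)):
        if used + wl.cpu_demand <= limit:
            placements[wl.name] = "local"
            used += wl.cpu_demand
        else:
            placements[wl.name] = "cloud"  # burst: rent capacity on demand
    return placements

jobs = [Workload("billing", 8, True), Workload("etl", 12, False),
        Workload("web", 6, True), Workload("batch-report", 10, False)]
print(place_workloads(jobs, local_capacity=32))
```

The point of the headroom threshold is exactly the contention Sunkara describes: by bursting before local resources are fully exhausted, critical and non-critical workloads alike avoid the latency of fighting over the last few cores.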

As I mentioned above, however, bursting requires extensive coordination across multiple layers of the data stack, and this can be best accomplished by the broad transition of legacy infrastructure and architecture to cloud-like functionality. This will undoubtedly take time, but those who can accomplish it quickly will find themselves at a distinct advantage when it comes to leveraging the cloud for more than simple cost reduction.

Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.


