Streamlining Long-Distance Cloud Infrastructure

Arthur Cole

Of all the advantages the cloud brings to modern enterprise environments, the ability to disperse data and data infrastructure over great distances is probably the most vital.

This is especially true for backup and recovery purposes, as particularly devastating disruptions tend to affect large areas, often putting nearby co-location services out of commission as well. Carl Weinschenk, my colleague here at ITBE, correctly pointed out recently that despite some of the more notable cloud outages of late, the sheer size of the cloud remains the best guarantee against service disruptions on the market right now. If service is down in Minneapolis, loads can be shifted to Chicago, Los Angeles or Mumbai.
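That load-shifting idea can be sketched in a few lines. The region names and the `is_healthy()` check below are illustrative assumptions, not any particular provider's API; real failover would rest on health probes and traffic routing.

```python
# Minimal failover sketch: route work to the first healthy region.
# Region names and is_healthy() are illustrative stand-ins, not a
# real cloud provider's API.

REGIONS = ["minneapolis", "chicago", "los-angeles", "mumbai"]

def is_healthy(region, outages):
    """Stand-in for a real health check (ping, provider status API, etc.)."""
    return region not in outages

def pick_region(preferred, outages):
    """Prefer the local region; fail over down the list if it is out."""
    ordered = [preferred] + [r for r in REGIONS if r != preferred]
    for region in ordered:
        if is_healthy(region, outages):
            return region
    raise RuntimeError("no healthy region available")
```

The point is simply that geographic spread turns an outage into a routing decision rather than a hard stop.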

Of course, spreading infrastructure across continents is one thing; uniting it under a seamless architecture is quite another. That's why the development of multi-cloud coordination software has taken on such urgency among leading infrastructure firms. HP, for example, recently unveiled three new tools designed to break down geographic barriers between clouds. The Ethernet Virtual Interconnect (EVI) provides a network overlay between disparate data centers, while the Multitenant Device Context (MDC) system creates a common security footprint for applications traveling between clouds. Meanwhile, the StoreVirtual appliance creates pooled storage architectures for VMware and Hyper-V environments across a range of x86 platforms.

For many enterprises, one of the top priorities following a major outage is getting databases back in operation. Spreading database management across disparate regions, therefore, should help, as TransLattice is looking to show with its TransLattice Elastic Database. The company says it can deliver high availability, broad scalability and better overall performance, at lower cost, through a fault-tolerant, multi-node fabric that can be administered from anywhere on the network. Among the key benefits is the ability to place mission-critical data on the edge, where it can be more easily accessed by workers, customers and partners. Nodes can also be deployed within virtual machines or cloud instances under streamlined policies that reduce the need for data federation and systems management.
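One generic way such a multi-node fabric can stay available when a node fails is majority quorums: writes succeed once most nodes acknowledge, and reads consult a majority, so the two sets always overlap on at least one node. The toy class below is a sketch of that general technique, not TransLattice's actual implementation.

```python
# Quorum-replication sketch: with 3 nodes and a majority quorum of 2,
# any single node can be down during a write or a read and the data
# is still available. Illustrative only.

class QuorumStore:
    def __init__(self, n_nodes=3):
        self.nodes = [dict() for _ in range(n_nodes)]
        self.quorum = n_nodes // 2 + 1  # majority

    def write(self, key, value, down=()):
        """Write to every reachable node; succeed only with a majority."""
        acks = 0
        for i, node in enumerate(self.nodes):
            if i in down:
                continue  # node unreachable
            node[key] = value
            acks += 1
        if acks < self.quorum:
            raise RuntimeError("write failed: no quorum")

    def read(self, key, down=()):
        """Read from a majority; overlapping quorums guarantee a hit."""
        responders = [n for i, n in enumerate(self.nodes) if i not in down]
        if len(responders) < self.quorum:
            raise RuntimeError("read failed: no quorum")
        for node in responders:
            if key in node:
                return node[key]
        raise KeyError(key)
```

Because every write majority intersects every read majority, losing one node in a three-node fabric costs nothing but a little latency.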

Geographic distribution is also a good way to handle the increasingly large data sets confronting modern enterprises. While it is certainly possible to provision the resources needed to handle Big Data, the problem remains preserving access and the seamless connectivity necessary for a working environment. That's where companies like Cleversafe come in. The firm has devised the 3000 series object storage appliance, which is said to support up to 10 exabytes of distributed storage with throughput of 1 TBps. The system's management software disperses objects across multiple storage nodes, ensuring availability in the event of a node failure through algorithms that can reconstruct data from the surviving partial objects.
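The reconstruction trick can be shown with the simplest possible erasure scheme: split an object into data chunks plus one XOR parity chunk, so any single lost chunk can be rebuilt from the others. Production systems like Cleversafe's use far more general codes that tolerate multiple failures; this sketch just illustrates the principle.

```python
# Toy dispersed-storage sketch: n data chunks plus one XOR parity
# chunk. Losing any one chunk (node) is recoverable by XOR-ing the
# survivors. Illustrative only, not a real erasure code library.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def disperse(data, n_chunks=3):
    """Pad and split data into n_chunks equal pieces, plus one parity piece."""
    size = -(-len(data) // n_chunks)           # ceiling division
    data = data.ljust(size * n_chunks, b"\0")  # pad to equal-size chunks
    chunks = [data[i * size:(i + 1) * size] for i in range(n_chunks)]
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_bytes(parity, c)
    return chunks + [parity]

def reconstruct(pieces, lost_index):
    """Rebuild the piece at lost_index by XOR-ing all the survivors."""
    survivors = [p for i, p in enumerate(pieces) if i != lost_index]
    rebuilt = survivors[0]
    for p in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, p)
    return rebuilt
```

Spread those pieces across nodes in different regions and a node failure, or even a regional outage, becomes a background repair job rather than data loss.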

By nature, the cloud involves the dispersal of data across distances great and small. In fact, many enterprises may already have part of their infrastructure in distant lands and not even know it. Most cloud operators use other clouds as their own backup, and these may or may not be in the general vicinity of the enterprise customer.

Ultimately, then, it will be the cloud operator who has to worry about things like the distance data needs to travel in order to be useful, while customers need only concern themselves with performance. But it never hurts to be aware of what is happening with your data, and the make-up of the environment you've entrusted it to should be open to careful and continual evaluation.
