The stated goal of nearly every enterprise these days is to build a broadly scalable, highly dynamic data environment using cloud-based resources on both internal and external infrastructure. This hybrid cloud is intended to be more responsive to user needs, foster greater productivity and, to boot, cost less to build and maintain.
Of the many challenges in this quest for digital nirvana, one of the biggest is how to overcome the latency issues that arise with such a broadly distributed architecture.
According to author April Reeve, latency in the cloud is the product of three factors: the speed of the broadband network, the distance between users and data, and the extra security hops required by external architectures. When we’re talking about a few extra milliseconds to pull a handful of files from cloud storage, the impact is barely noticeable. But once we venture into broad data integration, with multiple users pulling volumes of Big Data and rich media from perhaps dozens of disparate resources into a single collaborative instance, the delay starts to add up.
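Those three factors can be put on the back of an envelope. The sketch below models one-way delay as propagation (distance), transmission (payload over bandwidth) and per-hop security overhead; all the figures are illustrative assumptions, not measurements from any particular cloud.

```python
# Back-of-the-envelope model of the three latency factors above:
# network speed, physical distance, and extra security hops.
# All numbers are illustrative assumptions, not measurements.

SPEED_IN_FIBER_KM_S = 200_000  # light in fiber travels at roughly 2/3 c

def estimate_latency_ms(distance_km: float, payload_mb: float,
                        bandwidth_mbps: float, security_hops: int,
                        per_hop_ms: float = 5.0) -> float:
    """Return a rough one-way transfer time in milliseconds."""
    propagation = distance_km / SPEED_IN_FIBER_KM_S * 1000  # distance delay
    transmission = payload_mb * 8 / bandwidth_mbps * 1000   # bandwidth delay
    security = security_hops * per_hop_ms                   # proxy/firewall hops
    return propagation + transmission + security

# A small file from a nearby region through one gateway: barely noticeable.
print(estimate_latency_ms(500, 1, 100, 1))      # 87.5 (ms)

# Rich media pulled across an ocean through several gateways: very noticeable.
print(estimate_latency_ms(8000, 500, 100, 4))   # 40060.0 (ms)
```

The point of the toy model is that transmission and hop costs, not raw distance, dominate once payloads grow, which is why bulk data integration feels latency far sooner than small file fetches do.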
Keeping track of data in the cloud is also a problem. With virtual resources constantly coming and going, it is easy for files to get lost in the shuffle, and the longer it takes to find them, the greater the drag on productivity. Salesforce, for one, has responded with a new file-sharing feature called Salesforce Files, which creates a full directory of information stored in popular services like Box, Dropbox and SharePoint. In essence, it is an upgrade to the company’s Chatterbox system, but geared to work with any user device without passing files through Salesforce itself, so as not to complicate security protocols. The system is available only to select beta customers at the moment, but is expected to reach general release by February.
Of course, enterprise-facing clouds should provide a higher degree of service, including safeguards against latency. GoGrid, for example, has launched the new Cloud Bridge service, which provides a dedicated connection between customers and the GoGrid cloud hosted at Equinix’s International Business Exchange (IBX) data centers in Europe and the U.S. Not only does it provide a high-speed pathway between internal and external infrastructure, but it eliminates the need for a VPN or a firewall, improving latency and availability and addressing a host of other issues that surround hosted infrastructure. The system is available with dual-port redundancy, which allows GoGrid to offer a 100 percent uptime guarantee.
Ultimately, reducing latency in the cloud will require the enterprise to sacrifice one of three properties, according to tech consultant Peter Reid: partition tolerance, availability or consistency — the familiar CAP trade-off. In other words, you either stop partitioning active data around the globe, or you accept that the data you need will sometimes be unavailable, or wrong by the time you get it. Oddly, he argues the third option is the most palatable, considering we routinely encounter misleading or incomplete data on the Web anyway.
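The trade-off is easy to see in miniature. The toy replica below (all names invented for illustration) can be configured to favor availability or consistency; once a network partition cuts it off, it must either answer with possibly stale data or refuse to answer at all.

```python
# Toy sketch of the trade-off described above: during a network partition,
# a replica can answer with possibly stale data (availability) or refuse
# to answer (consistency). Class and mode names are invented for illustration.

class Replica:
    def __init__(self, mode: str):
        self.mode = mode          # "AP" favors availability, "CP" consistency
        self.data = {}            # locally replicated values
        self.partitioned = False  # True once cut off from the primary

    def write(self, key, value):
        self.data[key] = value

    def read(self, key):
        if self.partitioned and self.mode == "CP":
            raise RuntimeError("unavailable: refusing a possibly stale read")
        return self.data.get(key)  # may be stale if partitioned in AP mode

ap, cp = Replica("AP"), Replica("CP")
for r in (ap, cp):
    r.write("price", 100)   # replicated before the partition
    r.partitioned = True    # the global link drops; the primary moves on

print(ap.read("price"))     # 100 -- available, but possibly "wrong" now
try:
    cp.read("price")
except RuntimeError as e:
    print(e)                # consistent, but unavailable
```

Reid's preferred third option corresponds to the AP replica: keep answering, and live with the occasional stale result, much as we already do on the Web.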
In a perfect world, there would be no such thing as latency, but then again there would be no war, disease, famine or lame late-night TV monologues either. Knowledge workers these days are under constant pressure to be always on, always available and always productive, and latency simply does not fit into the equation. But even light can only travel so fast. The question we need to ask ourselves is whether the scale of a fully global networking environment is worth a few extra moments between send and receive.