Taking Aim at Cloud Storage Latency

Arthur Cole

All of the recent trends hitting the data center -- from virtualization and consolidation to online services and the cloud -- have one thing in common. They all separate users and applications from their data.


That is not necessarily a bad thing, provided there is a robust, high-speed infrastructure in place that reduces latency to acceptable levels. Of course, nothing is as fast or responsive as having data on local storage, but few organizations are willing to give up the cost benefits, scalability and flexibility of centralized, virtual infrastructure just to keep that instantaneous access.


So the race is on not only to build the necessary network infrastructure to enable high-speed data (and increasingly, desktop and operating environment) transmission, but to engineer a new generation of storage infrastructure to go with it.


Cloud providers are probably more eager than most to boost storage performance, if only to be able to claim service equal to or even better than traditional network storage approaches. Rackspace Hosting, for example, recently teamed up with newcomer Nasuni to offer its Filer system as a way to simplify many of the management issues that arise with cloud storage. The system provides automatic provisioning and a high-performance cache for improved response, and it accommodates multiple cloud providers, so users can shift loads between services and avoid vendor lock-in.
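
To see why a local cache matters here, consider a minimal read-through cache sitting in front of a cloud object store. This is a generic sketch in Python, not Nasuni's implementation; the backend object and its get_object() call are hypothetical stand-ins for whatever API a filer would use to reach the provider.

    import time

    class ReadThroughCache:
        """Keep recently read objects in local storage so repeat reads
        skip the WAN round trip entirely (illustrative sketch only)."""

        def __init__(self, backend, max_items=1024):
            self.backend = backend      # any object with a get_object(key) method
            self.max_items = max_items
            self.cache = {}             # key -> (data, last_access_time)

        def read(self, key):
            if key in self.cache:
                data, _ = self.cache[key]
                self.cache[key] = (data, time.time())   # refresh recency
                return data                             # local hit: no WAN latency
            data = self.backend.get_object(key)         # slow path: fetch from the cloud
            if len(self.cache) >= self.max_items:
                # evict the least recently used entry to make room
                oldest = min(self.cache, key=lambda k: self.cache[k][1])
                del self.cache[oldest]
            self.cache[key] = (data, time.time())
            return data

Writes in a real filer are more involved -- write-back buffering, snapshots, consistency with the cloud copy -- but the read path above is where most of the perceived latency improvement comes from.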


Also addressing this field is startup Panzura, which combines application logic with advanced storage management in its Application Network Storage (ANS) system. Debuting on the Panzura Application Cloud Controller, the system aims to be faster and cheaper than traditional tier 1 storage, complete with advanced features like deep packet inspection and deduplication. It offers direct support for leading applications such as SharePoint, along with the CIFS and NFS file protocols, and an on-board SSD module keeps services available even when the cloud link is unreachable.
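
Deduplication itself is conceptually simple: split incoming data into chunks, hash each chunk, and store or transmit a chunk only the first time its hash is seen. The fixed-size-chunk version below is a generic Python illustration of the idea, not Panzura's actual pipeline, and the 64 KB chunk size is an arbitrary assumption (production systems often use variable-size, content-defined chunking).

    import hashlib

    CHUNK_SIZE = 64 * 1024  # assumed fixed chunk size for this sketch

    def dedup_store(stream, chunk_store):
        """Store a file as a list of chunk hashes. chunk_store is any dict-like
        object mapping hash -> chunk bytes (hypothetical interface)."""
        recipe = []
        while True:
            chunk = stream.read(CHUNK_SIZE)
            if not chunk:
                break
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in chunk_store:    # only new chunks consume space or bandwidth
                chunk_store[digest] = chunk
            recipe.append(digest)
        return recipe                        # enough to reconstruct the file later

    def dedup_restore(recipe, chunk_store):
        """Reassemble the original bytes from a stored recipe."""
        return b"".join(chunk_store[digest] for digest in recipe)

The payoff for cloud storage is twofold: duplicate chunks never have to cross the WAN, and they never have to be stored twice at the provider.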


For some cloud users, the ability to access large amounts of structured or unstructured data is one of the chief benefits, but that ability needs to hold up even as data gets shifted to repositories that could sit half a world away. NetApp hopes to fill this need with its new StorageGRID solution, built on the object-based storage technology it acquired last year from Bycast Inc. The company says StorageGRID can provide quick search and locate across petabyte-scale global cloud infrastructures, maintaining always-on availability for such heavy data users as health care, digital media and cloud organizations.
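
The object-storage model underneath this is worth spelling out: each object gets a location-independent identifier plus searchable metadata, and a grid-wide index maps that identifier to whichever sites currently hold copies, so lookups stay fast no matter where the data physically sits. A toy Python version of such an index, not NetApp's implementation, might look like this:

    import uuid

    class ObjectIndex:
        """Toy grid index: object ID -> metadata and the sites holding replicas.
        In a real grid the index itself would be distributed and replicated."""

        def __init__(self):
            self.entries = {}

        def put(self, metadata, sites):
            object_id = str(uuid.uuid4())    # location-independent identifier
            self.entries[object_id] = {"metadata": metadata, "sites": list(sites)}
            return object_id

        def locate(self, object_id):
            """Return candidate sites for an object without touching the data."""
            return self.entries[object_id]["sites"]

        def search(self, **criteria):
            """Find object IDs whose metadata matches all given key/value pairs."""
            return [oid for oid, entry in self.entries.items()
                    if all(entry["metadata"].get(k) == v for k, v in criteria.items())]

    # Example: register a medical image replicated to two sites, then find it by type.
    index = ObjectIndex()
    oid = index.put({"study": "12345", "type": "x-ray"}, sites=["dc-east", "dc-apac"])
    print(index.locate(oid))           # ['dc-east', 'dc-apac']
    print(index.search(type="x-ray"))  # [oid]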


But even if you're content with a public cloud service like Google Storage, there are new ways to improve responsiveness. Gladinet recently unveiled new developer support for Google Storage in its Cloud Desktop and Cloud AFS platforms, bringing high-bandwidth, REST-based connectivity to desktops and file servers. The move lets users map Google Storage as just another network drive or add it directly to the file server, among other benefits.
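
REST access is what makes that kind of mapping possible: a file read becomes an ordinary HTTPS GET against the storage endpoint. As a rough Python illustration -- the bucket and object names are made up, and the sketch assumes a world-readable object, since a private one would also need an OAuth token or signed request -- a direct GET against Google Storage's REST endpoint looks like this:

    import urllib.request

    # Hypothetical names for illustration only.
    bucket = "example-bucket"
    obj = "reports/latency.txt"

    url = f"https://storage.googleapis.com/{bucket}/{obj}"

    # Assumes the object is publicly readable; authenticated requests add an
    # Authorization header or use a signed URL instead.
    with urllib.request.urlopen(url) as response:
        data = response.read()

    print(len(data), "bytes fetched with a plain HTTPS GET")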


It seems then that both virtual and cloud developments are handling data center needs in sequence. First up was scalability, which put an end to the perennial hunt for additional resources. Now comes throughput, with the focus on building faster response times and improved availability. At some point, we should be close to providing a seamless environment in which users have near-instant access to any and all data, regardless of where it is housed.


