An Object(ive) Look at Cloud Storage

Arthur Cole

Cloud-based storage may not be a convenient option for most enterprises in the very near future. It may be a necessity.


That conclusion is rapidly coming to the forefront across the industry as both data loads and the needs of virtual infrastructure push requirements beyond the abilities of all but the largest storage systems.


Take this little factoid highlighted by StorMagic's Mike Stolz on virtual-strategy.com: for every $1 spent on virtualization software, an additional $2.81 is spent on storage. The primary driver behind these numbers is the need to convert from DAS to shared storage to gain the flexibility to shift data loads between virtual machines. Many organizations are already turning to virtual storage appliances to accommodate these needs, but even then they face a finite amount of internal capacity to play with.



To handle really large data sets, many organizations are turning to object-based storage virtualization, which gives you the ability to pool multiple storage resources across disparate geographic locations on the cloud. NetApp made the most recent move in this direction with its purchase of Bycast, a Canadian company that came out with the StorageGRID system a few years ago. The idea is to add a virtual layer on top of commodity hardware that uses common protocols like CIFS, NFS and HTTP in such a way that both data and metadata are stored under a common address. In that way, the entire data set can be recalled from multiple sources even if it stretches into the petabyte range.
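The core idea -- data and metadata stored together under a common address, so an object can be served from any node that holds a copy -- can be sketched in a few lines. This is a toy, in-memory stand-in for the concept, not the actual API of StorageGRID or any other product; all names here are illustrative:

```python
import hashlib

class ObjectStore:
    """Toy object store: data and metadata live under one common address."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, metadata: dict) -> str:
        # The address is derived from the content itself, so any node
        # holding the object can serve it -- physical location is irrelevant.
        address = hashlib.sha256(data).hexdigest()
        self._objects[address] = {"data": data, "metadata": metadata}
        return address

    def get(self, address: str) -> dict:
        # Data and metadata come back together, from whichever
        # replica answers the request.
        return self._objects[address]

store = ObjectStore()
addr = store.put(b"scan-0001", {"site": "toronto", "protocol": "HTTP"})
obj = store.get(addr)
```

Because the address is content-derived rather than tied to a volume or LUN, the pool can stretch across geographic locations without the client ever knowing which box answered.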


The concept isn't entirely new. Companies like EMC and IBM already have similar capabilities in the channel. But it does help NetApp step up to the global cloud storage plate at a time when the need for such services is expected to skyrocket.


Another company looking to get in on the action is Dell. Its recently unveiled Intelligent Data Management portfolio features the DX Object Storage Solution designed to provide management for billions of cloud-based files and content. The system uses what Dell calls a "peer-scaling architecture" that does away with LUNs and RAID groups in favor of an integrated hardware/software combination that provides for automated retention and deletion and write-once/read-many functionality.
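Write-once/read-many with automated retention boils down to a policy check on every write and delete. A minimal sketch of the semantics, under the assumption that retention is enforced at the store layer (the class and method names are hypothetical, not Dell's API):

```python
import time

class WormStore:
    """Toy WORM store: objects are immutable, and deletable only
    after their retention period has expired."""

    def __init__(self):
        self._objects = {}  # key -> (data, retention expiry timestamp)

    def write(self, key: str, data: bytes, retain_seconds: float):
        # Write-once: an existing object can never be overwritten.
        if key in self._objects:
            raise PermissionError("write-once: object already exists")
        self._objects[key] = (data, time.time() + retain_seconds)

    def read(self, key: str) -> bytes:
        return self._objects[key][0]

    def delete(self, key: str):
        # Automated retention: deletion is refused until expiry.
        _, expiry = self._objects[key]
        if time.time() < expiry:
            raise PermissionError("retention period not yet expired")
        del self._objects[key]
```

In a real system the retention clock and the deletion sweep run inside the storage layer itself, which is what lets the architecture dispense with manual LUN and RAID management.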


Not all petabyte-level cloud solutions rely on object-based storage, however. Appistry is close to releasing a REST-based virtual file system as part of its CloudIQ system that the company says overcomes the bottlenecks found in traditional storage architectures. The system processes applications and data together, distributing workloads across multiple units and offering a level of scalability that native file systems, such as HDFS, can't match.


Despite the steadily decreasing cost of storage, equipping and provisioning traditional storage infrastructure is rapidly becoming too time-consuming and too expensive for even medium-sized organizations' data needs. The cloud offers a way out, with the ability to provision and scale up rapidly -- and to scale back down should the load diminish for one reason or another.


SAN and NAS architectures will most definitely be around for a while, but as data and applications become increasingly uncoupled from underlying hardware, it might not be long before they become the luxury and the cloud takes over as the standard for storage.


