New Approaches to Storage Sprawl

Arthur Cole

Talk about the problem of virtual sprawl, and attention quickly centers on the plethora of unused VMs quietly humming along, consuming valuable server resources.


But the data center being the organic creature it is, the rising number of VMs plus exponentially increased data demands are having a real-world impact on the storage farm, as well. And while many storage management systems are turning to techniques like compression and deduplication to reduce storage requirements, the fact is that redundant data is still rife on most SANs, causing IT managers to continually provision storage unnecessarily.
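
As a rough illustration of why deduplication helps (and why redundant data piles up without it), the sketch below hashes fixed-size blocks and stores only the ones it hasn't seen before. It is a simplified conceptual example in Python, not any particular vendor's implementation; the 4 KB block size and the in-memory dictionary standing in for a block store are assumptions made purely for the sake of the sketch.

    # Simplified illustration of fixed-block deduplication: hash each block
    # and keep only blocks that have not been stored before.
    import hashlib

    BLOCK_SIZE = 4096  # assumed 4 KB blocks; real systems vary

    def dedupe(path, store):
        """Add a file's blocks to 'store' (a dict of hash -> block), returning
        how many blocks were new versus duplicates of existing data."""
        new, dupes = 0, 0
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                digest = hashlib.sha256(block).hexdigest()
                if digest in store:
                    dupes += 1          # redundant block: reference it, don't store it again
                else:
                    store[digest] = block
                    new += 1
        return new, dupes

The more duplicate blocks show up across files and VMs, the higher the dupes count climbs without consuming any additional capacity in the store.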


Now, a number of firms are starting to look at storage infrastructure in new ways. And it is becoming increasingly clear that traditional storage, which may have been optimal for the physical infrastructures of the past, could use a serious revamp for the virtual/cloud environments of the future.


One such company is startup Gridstore, which recently unveiled the NASg storage system, designed to more closely match the storage provisioning process with the needs of virtual servers and networking. The package works through a NASg Storage Block, a storage node consisting of a pair of Atom processors tied to an Ethernet port and a 1 TB or 2 TB SATA drive. Controlled through the Microsoft Management Console, a node looks and acts like a standard Windows drive, but nodes can be scaled out and pooled together to boost bandwidth. File sharing is through CIFS, with iSCSI and NFS support expected next year.
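
Gridstore hasn't detailed its internals, so the following is only a conceptual sketch of the general idea behind pooling nodes for bandwidth: stripe each write across the pool so every node handles a fraction of the I/O, which means adding a node raises aggregate capacity and throughput together. The NodePool class and the stripe size are illustrative assumptions, not Gridstore's design.

    # Conceptual sketch: round-robin striping of writes across a pool of
    # storage nodes so each node carries only part of the I/O load.
    from typing import List

    class NodePool:
        def __init__(self, nodes: List[dict]):
            self.nodes = nodes          # each dict stands in for one storage node

        def write(self, data: bytes, stripe_size: int = 65536):
            """Split data into stripes and distribute them round-robin across nodes."""
            for i in range(0, len(data), stripe_size):
                node = self.nodes[(i // stripe_size) % len(self.nodes)]
                node.setdefault("stripes", []).append(data[i:i + stripe_size])

    # Adding a node to the pool grows capacity and bandwidth at the same time.
    pool = NodePool([{"name": "node-1"}, {"name": "node-2"}, {"name": "node-3"}])
    pool.write(b"x" * 200_000)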


Another startup looking at this problem is Virsto, which is scheduled to launch its flagship management system in a few months. The company is on the hunt for beta testers but hasn't yet revealed exactly what it's cooking up. However, CEO Mark Davis has been critical of standard storage I/O and of snapshot/dedupe approaches to combating the impact that multiple VMs have on storage infrastructure.


Over at Isilon, the focus is on drive density to ensure that capacity can be added quickly and with a relatively low footprint. The company's IQ 7200X and 72NL scale-out NAS devices offer up to 10 PB in a single volume using Hitachi's 2 TB Ultrastar A7K2000 SATA drives. That nearly doubles the density of existing NAS systems while cutting power and cooling costs in half.
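
For a sense of scale, a back-of-the-envelope calculation (using decimal units and ignoring protection overhead, which reduces usable capacity) shows how many drives a 10 PB volume implies:

    # Rough raw drive count for 10 PB of capacity on 2 TB drives
    # (decimal units; protection overhead would push the real number higher).
    capacity_pb = 10
    drive_tb = 2
    drives = capacity_pb * 1000 / drive_tb   # 1 PB = 1,000 TB (decimal)
    print(drives)                            # 5000.0 drives of raw capacity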


For traditional storage management firms, approaches like thin provisioning are offering new ways to ensure efficient use of available resources. Symantec, for example, has added a Thin Reclamation API to its Veritas Storage Foundation platform, which sets up automated data and tier management routines to consolidate data across multiple storage devices. Tech writer Heather Clancy reports that a 200 TB storage consolidation can shave upwards of $50,000 off the energy bill.
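
The Thin Reclamation API itself isn't shown here, but the general idea behind thin provisioning and reclamation can be sketched roughly as follows: allocate physical blocks only when they are actually written, and hand blocks the file system no longer uses back to the shared pool. The ThinVolume class below is a hypothetical Python illustration, not the Veritas interface.

    # Rough sketch of thin provisioning (not the Veritas API): physical blocks
    # are consumed only on first write, and freed blocks can be reclaimed.
    class ThinVolume:
        def __init__(self, virtual_blocks: int):
            self.virtual_blocks = virtual_blocks   # size promised to the host
            self.mapping = {}                      # virtual block -> data actually written

        def write(self, block: int, data: bytes):
            self.mapping[block] = data             # physical space consumed only now

        def reclaim(self, freed_blocks):
            """Return blocks the file system reports as unused to the pool."""
            for block in freed_blocks:
                self.mapping.pop(block, None)

        @property
        def allocated(self) -> int:
            return len(self.mapping)               # blocks in use, not the provisioned size

    vol = ThinVolume(virtual_blocks=1_000_000)     # appears as a large volume to the host
    vol.write(0, b"data")
    print(vol.allocated, "of", vol.virtual_blocks) # only written blocks consume real capacity
    vol.reclaim([0])                               # reclaimed space returns to the pool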


Storage costs may be at historic lows at the moment, but that should be little comfort to those already overseeing tight capital budgets for the coming year. With the focus of new deployments rapidly shifting from raw power and capacity to efficiency, new approaches that limit storage requirements in virtual environments are certainly welcome.


