Overcoming NAS's Cloud Limitations

Arthur Cole

It may be true that NAS architectures are more suited to the cloud due to their broad scale-out capabilities, but let's face it: They still could use some improvement.


Particularly when it comes to critical applications and data, which are, after all, the holy grail of enterprise services, NAS suffers from significant cost, performance and continuity limitations, at least if you intend to take full advantage of the scalability and flexibility the cloud promises.


It seems lately, however, that many of these issues are fading away, as each new generation of NAS technology becomes increasingly optimized for the cloud rather than traditional enterprise infrastructure.


DataCore Software, for example, recently unveiled SANsymphony-V, the latest version of its storage virtualization software, which targets NAS's single-point-of-failure handicap by adding a mirroring function beneath the NAS layer that keeps redundant copies of data on separate devices. This allows any storage device to be tapped for additional capacity, ensuring that both virtual and cloud applications can reach the storage they need at any time. The package is optimized for the Microsoft Windows Server 2008 R2 platform, particularly deployments that store vSphere, Hyper-V or XenDesktop virtual machines as shared Network File System (NFS) or Common Internet File System (CIFS) files.
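To see the basic idea of mirroring beneath the file layer, here is a minimal Python sketch; the class, device and block names are hypothetical illustrations, not DataCore's API.

```python
# Minimal sketch of synchronous mirroring beneath a file-serving layer.
# Illustrative only: names are hypothetical, not DataCore's API.

class MirroredStore:
    """Write every block to two independent backing devices, and read from
    whichever copy is available, so the file layer above sees no single
    point of failure."""

    def __init__(self, primary: dict, mirror: dict):
        self.primary = primary   # stand-ins for two physical storage devices
        self.mirror = mirror

    def write(self, block_id: str, data: bytes) -> None:
        # Synchronous mirroring: acknowledge only after both copies land.
        self.primary[block_id] = data
        self.mirror[block_id] = data

    def read(self, block_id: str) -> bytes:
        # Fail over transparently if the primary copy is gone.
        if block_id in self.primary:
            return self.primary[block_id]
        return self.mirror[block_id]


if __name__ == "__main__":
    store = MirroredStore(primary={}, mirror={})
    store.write("vm-disk-0001", b"guest filesystem data")
    del store.primary["vm-disk-0001"]   # simulate loss of one device
    assert store.read("vm-disk-0001") == b"guest filesystem data"
```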


Enterprises are also finding out that expanding the cloud onto clustered NAS infrastructure does not guarantee them full control of their own data. As cloud resources continually reconfigure themselves, vital data could wind up copied, stored and preserved without your knowledge. Nasuni has targeted this little problem with the latest Nasuni Filer, which captures continual snapshots of the entire file system that, when combined with a proprietary cache system, enable enterprises to track where their data has gone. The snapshots can be used to recover lost data or to ensure that data is permanently deleted from the cloud once it is no longer useful. The snapshots are deduplicated and compressed to streamline storage needs.
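The underlying technique, snapshots built from content-addressed chunks that are deduplicated and compressed, can be sketched roughly as follows. This is an illustration of the general approach only, not Nasuni's implementation; the chunk size and data structures are arbitrary choices.

```python
# Rough sketch of deduplicated, compressed snapshots of a file set.
# Illustrative only; chunk size and structures are arbitrary.

import hashlib
import zlib

chunk_store = {}   # content hash -> compressed chunk (stored once)
snapshots = []     # each snapshot is a manifest: path -> list of chunk hashes

CHUNK = 64 * 1024  # fixed-size chunking, for simplicity


def take_snapshot(files):
    """files: dict of path -> bytes representing the current file system state."""
    manifest = {}
    for path, data in files.items():
        hashes = []
        for i in range(0, len(data), CHUNK):
            piece = data[i:i + CHUNK]
            digest = hashlib.sha256(piece).hexdigest()
            if digest not in chunk_store:   # dedupe: identical chunks stored once
                chunk_store[digest] = zlib.compress(piece)
            hashes.append(digest)
        manifest[path] = hashes
    snapshots.append(manifest)


def restore(snapshot_index):
    """Rebuild every file exactly as it looked when the snapshot was taken."""
    manifest = snapshots[snapshot_index]
    return {path: b"".join(zlib.decompress(chunk_store[h]) for h in hashes)
            for path, hashes in manifest.items()}


if __name__ == "__main__":
    take_snapshot({"/share/report.doc": b"q1 numbers " * 1000})
    take_snapshot({"/share/report.doc": b"q1 numbers " * 1000})  # unchanged file adds no new chunks
    assert restore(0) == restore(1) and len(chunk_store) == 1
```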


Under the right architecture, this kind of snapshot capability can do more than just ensure data integrity and compliance, says tech consultant Howard Marks -- it can replace many of the expensive backup and storage systems currently draining IT budgets. Since most gateways provide an unlimited number of snapshots in the cloud while reserving several TB of redundant cache for active data, they not only eliminate the performance penalties snapshots usually incur but also provide access to critical data much faster than a traditional backup architecture would.
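The gateway pattern Marks describes, a bounded local cache for active data in front of an effectively unlimited snapshot store in the cloud, can be illustrated with a simple read-through cache. The capacity figure and store names below are made up for the example.

```python
# Simplified read-through cache in front of a cloud snapshot store.
# Capacity and names are illustrative only.

from collections import OrderedDict


class GatewayCache:
    def __init__(self, cloud_store, capacity_bytes):
        self.cloud = cloud_store     # stand-in for the cloud snapshot/object store
        self.capacity = capacity_bytes
        self.cache = OrderedDict()   # path -> bytes, kept in LRU order

    def read(self, path):
        if path in self.cache:       # active data: served at local speed
            self.cache.move_to_end(path)
            return self.cache[path]
        data = self.cloud[path]      # cold data: pulled from the cloud on demand
        self._admit(path, data)
        return data

    def _admit(self, path, data):
        self.cache[path] = data
        while sum(len(v) for v in self.cache.values()) > self.capacity:
            self.cache.popitem(last=False)   # evict the least recently used entry


if __name__ == "__main__":
    cloud = {"/finance/q1.xls": b"x" * 1024, "/archive/2009.zip": b"y" * 4096}
    gw = GatewayCache(cloud, capacity_bytes=2048)
    gw.read("/finance/q1.xls")   # miss: fetched from the cloud, then cached
    gw.read("/finance/q1.xls")   # hit: served from the local cache
```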


All true, but suppose you're stuck with a legacy infrastructure that consists of numerous single-vendor islands of NAS? How can they be unified to form the kind of integrated storage architecture the cloud requires? Avere Systems says it has an answer in the form of a new global namespace (GNS) system as part of its FXT appliance. The system provides a single GNS across all NAS storage servers, letting you build and manage logical storage sets regardless of their physical location. The company says the benefits are twofold: NFS and CIFS clients gain a simplified, transparent data access point, and downtime is eliminated when data is moved across storage servers.
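A toy version of a global namespace makes the concept concrete: clients resolve logical paths through a mapping table, so data can move between filers without the client-visible path ever changing. The server and export names below are hypothetical, and this is only a sketch of the idea, not Avere's software.

```python
# Toy global namespace: one logical tree, with a mapping table deciding
# which physical NAS export backs each branch. Names are hypothetical.

class GlobalNamespace:
    def __init__(self):
        self.mapping = {}    # logical prefix -> physical server export

    def mount(self, logical_prefix, physical_export):
        self.mapping[logical_prefix] = physical_export

    def resolve(self, logical_path):
        # Longest-prefix match so nested mounts behave sensibly.
        prefix = max((p for p in self.mapping if logical_path.startswith(p)), key=len)
        return logical_path.replace(prefix, self.mapping[prefix], 1)

    def migrate(self, logical_prefix, new_physical_export):
        # Data moves between filers; clients keep using the same logical path.
        self.mapping[logical_prefix] = new_physical_export


if __name__ == "__main__":
    ns = GlobalNamespace()
    ns.mount("/corp/eng", "filer-a:/export/eng")
    print(ns.resolve("/corp/eng/build/image.iso"))   # filer-a:/export/eng/build/image.iso
    ns.migrate("/corp/eng", "filer-b:/export/eng")
    print(ns.resolve("/corp/eng/build/image.iso"))   # filer-b:/export/eng/build/image.iso
```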


As a file-based storage platform, NAS will always be the more robust solution for Internet-based cloud services. It can't push entire blocks of data across the wire the way a SAN does, but its flexibility and scalability advantages more than make up for that.


And if the expansion into NAS-based cloud storage is accompanied by storage unification within the data center, there should be no reason why future enterprises can't provide access to whichever storage environment is required at any given time.



Mar 30, 2011 8:11 AM Ben Golub says:

Great article.

At Gluster (open source storage for public and private clouds), we think it is clear that Scale-Out NAS can be the future of both full data center virtualization and public cloud.

We are starting to see a number of interesting trends that can help address the perceived limitations of public cloud storage that you discuss above.

First, we are starting to see a growing number of enterprises running their primary operations in the cloud (e.g., deploying their applications across EC2 instances) rather than using the cloud simply for backup or disaster recovery. This obviously addresses one of the key concerns about public cloud (Internet latency).

Second, we have recently found that by deploying Gluster as an Amazon Machine Image, RightScale template, etc., we can help address other concerns. For one, by providing a POSIX-compliant, scale-out platform, we can avoid the application rewrites traditionally associated with (object-based) cloud storage.

Furthermore, by deploying a global namespace that distributes the workload across both compute (e.g., EC2) and storage (e.g., EBS) resources, we have seen that it is possible to effectively scale out capacity, performance and availability. We can both aggregate the collective throughput and capacity of large numbers of shared storage resources and reduce the impact of performance variability. It also becomes trivial to replicate between multiple public cloud data centers (e.g., AWS availability zones).
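To make the namespace-distribution idea a bit more concrete, here is a rough Python sketch of hashing each file name to a replicated pair of bricks spread across two availability zones. It is purely illustrative, not Gluster's actual implementation, and the brick names are made up.

```python
# Rough illustration of hash-based file placement across replicated brick pairs.
# Not Gluster's implementation; brick names are made up.

import hashlib

# Each replica pair spans two availability zones for availability.
BRICK_PAIRS = [
    ("us-east-1a:/brick1", "us-east-1b:/brick1"),
    ("us-east-1a:/brick2", "us-east-1b:/brick2"),
    ("us-east-1a:/brick3", "us-east-1b:/brick3"),
]


def place(filename):
    """Deterministically pick the replica pair that stores this file."""
    digest = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return BRICK_PAIRS[digest % len(BRICK_PAIRS)]


if __name__ == "__main__":
    for name in ("invoices/2011-03.csv", "images/logo.png"):
        print(name, "->", place(name))
```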

We are now at a place where it is possible to provision a scale-out NAS solution in the public cloud that can deliver hundreds of TBs of capacity, high availability, and hundreds of MB/s of throughput -- all with the flexibility and economics of the public cloud.

As you suggest, the next big step is to unify NAS in the cloud with NAS in the data center.
