It may be true that NAS architectures are more suited to the cloud due to their broad scale-out capabilities, but let's face it: they could still use some improvement.
NAS suffers from significant cost, performance and continuity limitations, particularly for the critical applications and data that are, after all, the holy grail of enterprise services -- at least if you intend to utilize all the scalability and flexibility advantages the cloud promises.
It seems lately, however, that many of these issues are fading away, as each new generation of NAS technology becomes increasingly optimized for the cloud rather than traditional enterprise infrastructure.
DataCore Software, for example, recently unveiled SANsymphony-V, the latest version of its storage virtualization software suite, which targets NAS's single-point-of-failure handicap by adding a mirroring function that places copies of data under the NAS layer. This allows any storage device to be accessed for additional capacity, ensuring that both virtual and cloud applications will be able to access needed storage at any time. The package runs on the Microsoft Windows Server 2008 R2 platform and is aimed particularly at environments storing vSphere, Hyper-V or XenDesktop VMs as shared Network File System (NFS) or Common Internet File System (CIFS) files.
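To see why mirroring beneath the file-serving layer removes the single point of failure, consider this minimal sketch. It is a hypothetical illustration of synchronous mirroring in general, not DataCore's actual implementation: every write is committed to two independent backing stores before it is acknowledged, so the loss of either store leaves a complete copy behind.

```python
import threading

class MirroredVolume:
    """Toy model of a volume mirrored across two backing stores.
    (Hypothetical sketch -- names and structure are illustrative.)"""

    def __init__(self):
        # Two independent backing stores, standing in for two devices.
        self.primary = {}
        self.mirror = {}
        self.lock = threading.Lock()

    def write(self, block_id, data):
        # A write is complete only once both copies hold the data.
        with self.lock:
            self.primary[block_id] = data
            self.mirror[block_id] = data

    def read(self, block_id):
        # Either copy can satisfy a read; fall back if one side is gone.
        if block_id in self.primary:
            return self.primary[block_id]
        return self.mirror.get(block_id)

    def fail_primary(self):
        # Simulate losing one device: the surviving mirror keeps serving.
        self.primary = {}
```

In this toy model, applications keep reading their data even after `fail_primary()` wipes one side -- the property the mirroring layer is meant to guarantee for virtual and cloud workloads.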
Enterprises are also finding out that expanding the cloud onto clustered NAS infrastructure does not guarantee them full control of their own data. As cloud resources continually reconfigure themselves, vital data could wind up copied, stored and preserved without your knowledge. Nasuni has targeted this little problem with the latest Nasuni Filer, which captures continual snapshots of the entire file system that, when combined with a proprietary cache system, enable enterprises to track where their data has gone. The snapshots can be used to recover lost data or to ensure that data is permanently deleted from the cloud once it is no longer useful. The snapshots are deduplicated and compressed to streamline storage needs.
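The mechanics of deduplicated, compressed snapshots can be sketched in a few lines. This is a hypothetical illustration of the general technique, not Nasuni's actual design: each snapshot records only a mapping from file paths to content hashes, identical content is stored once, and every stored chunk is compressed.

```python
import hashlib
import zlib

class SnapshotStore:
    """Minimal sketch of deduplicated, compressed file-system snapshots.
    (Illustrative only; class and method names are hypothetical.)"""

    def __init__(self):
        self.chunks = {}     # sha256 digest -> compressed content
        self.snapshots = []  # each snapshot: {path: digest}

    def take_snapshot(self, files):
        """Record the state of {path: bytes}, storing new content once."""
        view = {}
        for path, data in files.items():
            digest = hashlib.sha256(data).hexdigest()
            # Dedup: identical content across files/snapshots is kept once.
            if digest not in self.chunks:
                self.chunks[digest] = zlib.compress(data)
            view[path] = digest
        self.snapshots.append(view)
        return len(self.snapshots) - 1  # snapshot id

    def restore(self, snap_id, path):
        """Recover a file exactly as it existed in a past snapshot."""
        digest = self.snapshots[snap_id][path]
        return zlib.decompress(self.chunks[digest])
```

Because a snapshot is just a table of hashes, keeping many of them is cheap, and restoring an older version -- the "recover lost data" case above -- is a simple lookup and decompress.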
Under the right architecture, this kind of snapshot capability can do more than just ensure data integrity and compliance, says tech consultant Howard Marks -- it can replace many of the expensive backup and storage systems currently draining IT budgets. Since most gateways provide an unlimited number of snapshots in the cloud while reserving several TB of redundant cache for active data, they not only eliminate the performance penalties snapshots incur but also provide access to critical data much faster than a traditional backup architecture.
All true, but suppose you're stuck with a legacy infrastructure that consists of numerous single-vendor islands of NAS? How can they be unified to form the kind of integrated storage architecture required by the cloud? Avere Systems says it has an answer in the form of a new global namespace (GNS) system as part of its FXT appliance. The system provides a single GNS across all NAS storage servers that lets you build and manage logical storage sets regardless of their physical location. The company says the benefits are two-fold. First, NFS and CIFS clients gain a simplified and transparent data access point, and second, downtime is eliminated when data is moved across various storage servers.
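The core idea behind a global namespace can be captured in a short sketch. This is a generic, hypothetical illustration (not Avere's implementation): clients address one logical tree, a mapping table routes each logical path to whichever NAS server currently holds the data, and migrating data only updates the table, so the client-visible path never changes.

```python
class GlobalNamespace:
    """Toy global namespace: one logical tree over many NAS filers.
    (Hypothetical sketch; names are illustrative.)"""

    def __init__(self):
        # logical path -> (server, physical path on that server)
        self.routes = {}

    def publish(self, logical, server, physical):
        """Expose a filer's physical location under a logical path."""
        self.routes[logical] = (server, physical)

    def resolve(self, logical):
        """Where an NFS/CIFS request for this path is actually sent."""
        return self.routes[logical]

    def migrate(self, logical, new_server, new_physical):
        # Data moves between filers, but only the route changes --
        # the logical name clients use is untouched, hence no downtime.
        self.routes[logical] = (new_server, new_physical)
```

The two claimed benefits fall out directly: clients see a single, stable access point (`resolve` hides the physical layout), and a migration is just a route update rather than a client-visible move.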
As a file-based storage platform, NAS will always be the more robust solution when it comes to Internet-based cloud services. It can't move raw blocks of data directly the way SAN does, but its flexibility and scalability advantages more than make up for that.
And if the expansion into NAS-based cloud storage is accompanied by storage unification within the data center, there should be no reason why future enterprises can't provide access to whichever storage environment is required at any given time.