There's More Than One Way to Unify Storage

Arthur Cole

For a concept that is generally met with a high degree of approval, unified storage still manages to generate a fair share of controversy.

On the one hand, it's hard to argue against a single storage platform that handles both block and file storage, and then throws in support for Fibre Channel, iSCSI and NAS as well. But while the goals are clear, the methods are not, or, as the old saying goes: It's not what you do, but how you do it.

Hitachi made headlines this week with a new mid-level system, the HUS 100, aimed at the relatively small volumes that were previously the purview of the company's AMS line. The system relies on a single management infrastructure built around the SiliconFS file system, which Hitachi gained through its BlueArc acquisition last year. Most importantly, it uses object-based management for advanced metadata-reliant features like retention and availability, which should help enterprises fold mission-critical applications and data into the system.

Unified storage is likely to emerge as a crucial tool for small and medium-sized businesses in their drive to tackle exploding data loads, according to Storage Switzerland's George Crump. Like their larger brethren, SMBs have Big Data worries of their own, but they don't have the means or manpower to maintain both SAN and NAS infrastructure. A unified system, like IceWEB's 7000 platform and IceSTORM operating system, offers a low-cost means to enable multi-protocol operations and eliminate expensive file servers and other hardware.

The key to evaluating claims of unified storage is a careful look under the hood, according to tech consultant Chris Evans. If two separate systems, in Hitachi's case the AMS2 array and the BlueArc NAS gateway, are rolled into a single system, is that really unified? If not, then what about some of the other top systems out there, say, EMC's new VNX platform that is essentially a combined CLARiiON/Celerra system? NetApp seems to have a singular unified system but it lacks the punch to accommodate heavy workloads.

It's also true that what generally makes or breaks a unified system is the quality of its software. And since open source solutions pride themselves on breaking down barriers between incompatible data environments, it's no surprise that firms like Red Hat are working toward greater storage unification. The upcoming Red Hat Storage 2.0, while not what some would call fully unified, is aimed at treating Hadoop Distributed File System (HDFS) files as objects so they can be exported to other environments. The company says this goes one better than traditional unified storage approaches because it enables multi-vendor compatibility on low-cost commodity hardware.

Unification, then, is largely in the eye of the beholder. In the drive to streamline data center architectures, however, terminology is less important than results. Nearly every enterprise has a unique storage infrastructure to contend with, so the road to unification will be varied as well.

Apr 30, 2012 1:09 PM Dimitris Krekoukias says:

Hello all, Dimitris from NetApp here.

I take exception to this comment, taken out of context from Chris Evans' article:

"NetApp seems to have a singular unified system but it lacks the punch to accommodate heavy workloads."

Some of the world's biggest workloads are running on NetApp's unified storage.

CERN has a cool writeup:


In general, you're looking at some of the largest databases (including the largest data warehouses), largest cloud storage installations and largest service providers running on NetApp storage.

Possibly one needs to define what a "heavy workload" is.

Apr 30, 2012 1:24 PM Dimitris Krekoukias says: in response to Dimitris Krekoukias

Since there's a character limit, here is the continuation...


Shows a record-setting result for a "heavy" benchmark.

In general, NetApp systems are optimized for real-world applications: Databases, virtualization, email, file services.

They are designed to provide high performance under complex concurrent access.

Most arrays, especially ones without a lot of features, can perform really well if 1-2 servers are hitting the array.

Next time analysts or customers conduct a performance test, try parallelizing the test to simulate a real-world scenario.

Don't have one server hitting the storage. Have 20 servers hitting the storage.

The results might surprise you.
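The concurrency point above can be sketched with a small, purely illustrative Python harness (not any vendor's benchmarking tool) that spins up several "client" threads, each streaming writes to its own file, and reports aggregate throughput; real tests would use a dedicated tool such as fio, but the shape of the idea is the same:

```python
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

def worker(path, block=1 << 20, blocks=8):
    # Each "client" streams 1 MiB blocks to its own file,
    # standing in for an independent server hitting the array.
    data = os.urandom(block)
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(data)
    return blocks * block  # bytes written by this client

def run_load(clients=8):
    # Launch all clients concurrently and time the whole run;
    # aggregate throughput under concurrency is what single-client
    # tests fail to measure.
    tmp = tempfile.mkdtemp()
    paths = [os.path.join(tmp, f"client{i}.bin") for i in range(clients)]
    start = time.time()
    with ThreadPoolExecutor(max_workers=clients) as pool:
        total = sum(pool.map(worker, paths))
    return total, time.time() - start

if __name__ == "__main__":
    total, elapsed = run_load(clients=8)
    print(f"{total / 1e6:.0f} MB written in {elapsed:.2f}s by 8 concurrent clients")
```

Raising `clients` from 1 to 20 is the difference Dimitris describes: a system tuned only for sequential single-stream access can look fast at 1 and fall over at 20.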
