For a concept that is generally met with a high degree of approval, unified storage still manages to generate a fair share of controversy.
On the one hand, it's hard to argue against a single storage platform that handles both block and file storage, and then throws in support for Fibre Channel, iSCSI and NAS as well. But while the goals are clear, the methods are not, or, as the old saying goes: It's not what you do, but how you do it.
Hitachi made headlines this week with a new midrange system, the HUS 100, aimed at relatively small volumes that were previously the purview of the company's AMS line. The system relies on a single management infrastructure built around the SiliconFS file system, which Hitachi gained through its acquisition of BlueArc last year. Most importantly, it uses object-based management for advanced metadata-reliant features like retention and availability, which should help enterprises fold mission-critical applications and data into the system.
Unified storage is likely to emerge as a crucial tool for small and medium-sized businesses in their drive to tackle exploding data loads, according to Storage Switzerland's George Crump. Like their larger brethren, SMBs have Big Data worries of their own, but they don't have the means or manpower to maintain separate SAN and NAS infrastructures. A unified system, like IceWEB's 7000 platform and IceSTORM operating system, offers a low-cost way to enable multi-protocol operations and eliminate expensive file servers and other hardware.
The key to evaluating claims of unified storage is a careful look under the hood, according to tech consultant Chris Evans. If two separate systems, in Hitachi's case the AMS2 array and the BlueArc NAS gateway, are rolled into a single system, is that really unified? If not, then what about some of the other top systems out there, say, EMC's new VNX platform, which is essentially a combined CLARiiON/Celerra system? NetApp appears to offer a genuinely unified system, but it lacks the punch to accommodate heavy workloads.
It's also true that what generally makes or breaks a unified system is the quality of its software. And since open source solutions pride themselves on breaking down barriers between incompatible data environments, it's no surprise that firms like Red Hat are working toward greater storage unification. The upcoming Red Hat Storage 2.0, while not what some would call fully unified, is aimed at treating Hadoop Distributed File System (HDFS) files as objects so they can be exported to other environments. The company says this goes one better than traditional unified storage approaches because it enables multi-vendor compatibility on low-cost commodity hardware.