New Storage Emphasizes Quality, Not Quantity


"In business," goes the Lee Iacocca quote, "you have to lead, follow or get out of the way." Actually, that's a paraphrase of an earlier statement from Thomas Paine, but regardless, it's an apt piece of advice for the storage industry today.

If there were any lingering doubts that the days of massive storage acquisitions are over, the latest sales figures should put them to rest. IDC came out with its first-quarter sales figures for disk storage this week, and the news ain't good. The industry saw an 18.2 percent drop in worldwide sales compared to the year-ago numbers, from $6.8 billion to $5.6 billion, with the top three vendors -- IBM, HP and Dell -- recording double-digit drops.

It used to be that if capacity was approaching its limit, enterprises would simply provision more, always staying one step ahead of the data load. But even though storage is available at historically low prices, weak revenues and plain old common sense are now forcing enterprises to stretch every last byte out of existing hardware, adding capacity only as a last resort.

This is why storage-management technologies, particularly those that stretch capacity, such as deduplication and thin provisioning, are in such hot demand these days. Nowhere is this more obvious than in the dust-up that has arisen around Data Domain. NetApp's original offer of $1.5 billion for the company is now $1.9 billion following a counteroffer from EMC of $1.8 billion. EMC, of course, is a leading vendor of the kinds of massive storage systems that are becoming harder and harder for enterprises to swallow, so it's crucial that it tap into some serious management capability sooner rather than later if it hopes to salvage any of its business as the world climbs out of recession.

Some argue that EMC has enough dedupe technology on its own and is merely trying to make NetApp pay a premium for Data Domain, but I doubt it. Data Domain has a robust platform and a sizable customer base that would make a nice fit with either company. Besides, you just don't screw around with $1.8 billion.

The changing environment is why we're also seeing new management capabilities across the storage board. Hitachi Data Systems recently launched a new management stack for the Universal Storage Platform V called High Availability Manager. It is essentially an open clustering system designed to maximize availability across both internal and external heterogeneous infrastructures through tools such as advanced failover, transparent pooling and array-based replication.

Dell is also upping its management game with a new backup appliance powered by software from longtime partner CommVault. The DL2000 disk-to-disk system will feature CommVault's Simpana 8, which offers a number of independent modules governing things like backup and recovery, archiving, replication, e-discovery and search. It also features a deduplication engine that dedupes data at its source, allowing users to back up to multiple devices without having to re-expand the file each time.
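The core idea behind dedupe engines like the one described above is simple: fingerprint each chunk of data, and store (or transmit) a chunk only the first time its fingerprint is seen. Here's a minimal sketch of that idea; the class and method names are hypothetical illustrations, not CommVault's actual API:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical chunks are kept only once."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}          # fingerprint -> chunk bytes
        self.logical_bytes = 0    # bytes the client "backed up"

    def backup(self, data: bytes) -> list:
        """Split data into fixed-size chunks and store each unique chunk once.

        Returns the recipe (list of fingerprints) needed to rebuild the data.
        """
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            # Only new chunks consume storage; duplicates become references.
            self.chunks.setdefault(fp, chunk)
            recipe.append(fp)
        self.logical_bytes += len(data)
        return recipe

    def restore(self, recipe: list) -> bytes:
        """Reassemble the original data from stored chunks."""
        return b"".join(self.chunks[fp] for fp in recipe)

    @property
    def stored_bytes(self) -> int:
        return sum(len(c) for c in self.chunks.values())

store = DedupStore()
payload = b"A" * 8192 + b"B" * 4096   # contains two identical "A" chunks
recipe = store.backup(payload)
assert store.restore(recipe) == payload
store.backup(payload)                 # second backup adds zero new chunks
```

In this toy run, 24 KB of logical backups consume only 8 KB of actual storage, which is the whole appeal: real products refine this with variable-size chunking and source-side fingerprint lookups so duplicate chunks never even cross the network.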

Even as new forms of storage hit the enterprise, the need for management isn't being ignored. IBM has added a number of tools to its SSD platforms, such as the Data Facility Storage Management Subsystem (DFSMS) for the zSeries and DS8000 platforms, and the SSD Data Balancer for the Power Systems lines.

A cynic would argue that all of these moves are mere lip service to the cause of improved data management -- that it's in the vendors' vested interest to maintain high demand for storage by keeping it as inefficient as possible. That may have been the strategy at one time, but this is a new era we're entering. And this time, the emphasis is on value and efficiency.

I would hope that the HPs and the EMCs out there realize that if they don't start making their systems more efficient, someone else will.