The big news in enterprise storage continues to revolve around the use of solid-state disks (SSDs) in architectures that were once the exclusive domain of hard disks.
And yet, despite all the headlines of new technologies and faster performance, actual deployments are still relatively rare, outside of a few showcase data centers. So far, the reasons given for this lack of enthusiasm center around price and reliability, but could there be another factor at work here?
The Taneja Group's Jeff Boles thinks there is. As he sees it, putting SSDs on standard hard-disk form factors -- namely the 2.5-inch and 3.5-inch drives that are commonplace these days -- implies that they are intended to replace the hard drives in modern disk arrays. What nobody bothers to mention is that since these arrays were originally designed for spinning platters, they lack the capability to process data at SSD speeds. So what you end up with is a very expensive drive that, in a practical sense, is no faster than a cheaper hard drive once the data hits the controller -- and the problem is only magnified as the number of SSDs increases. There's also the fact that most arrays can't share SSDs across data sets and have trouble migrating volumes in and out of SSDs.
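The controller-bottleneck argument is easy to see with a back-of-the-envelope model. The figures below are illustrative assumptions for drives and controllers of this era, not measurements from any real array:

```python
HDD_IOPS = 200             # assumed: a typical 15K RPM hard disk
SSD_IOPS = 30_000          # assumed: a typical enterprise SSD
CONTROLLER_IOPS = 100_000  # assumed ceiling of a hard-disk-era array controller

def array_iops(drives: int, per_drive_iops: int, controller_iops: int) -> int:
    """Aggregate throughput: the sum of the drives, capped by the controller."""
    return min(drives * per_drive_iops, controller_iops)

# Sixteen hard disks don't come close to the controller's ceiling...
print(array_iops(16, HDD_IOPS, CONTROLLER_IOPS))   # 3200
# ...but four SSDs already saturate it, and adding a dozen more buys nothing.
print(array_iops(4, SSD_IOPS, CONTROLLER_IOPS))    # 100000
print(array_iops(16, SSD_IOPS, CONTROLLER_IOPS))   # 100000
```

Past the saturation point, every additional SSD is pure cost with no performance return -- which is exactly the complaint.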
Some of the performance characteristics of SSDs are being blown out of proportion as well, according to Steve Sicola, CTO of Xiotech. He recently told an audience in San Diego that performance can vary widely from disk to disk, a problem that can be compounded if operating systems and applications are not geared toward SSDs. You should also remember that, given the technology's poor showing in both reliability and data handling, you will likely need to double the recommended capacity to achieve the desired results.
One solution, of course, is to deploy a dedicated SSD array. Texas Memory Systems recently took the wrappings off the RamSan-6200, which can deliver 100 TB and 5 million IOPS at 60 GBps in a 40U configuration. The downside, of course, is the price: a single 2U RamSan-620 unit runs about $220,000, which puts the full array at close to $4.5 million.
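The quoted figures hang together if you do the arithmetic, assuming the 40U array is built entirely from identical 2U RamSan-620 units (an inference from the totals above, not a vendor-confirmed configuration):

```python
# Back-of-the-envelope check of the RamSan figures quoted above.
RACK_UNITS = 40            # full configuration, per the quoted specs
UNIT_HEIGHT_U = 2          # assumed height of one RamSan-620
UNIT_PRICE = 220_000       # approximate price per unit, USD
TOTAL_CAPACITY_TB = 100    # full-array capacity, per the quoted specs

units = RACK_UNITS // UNIT_HEIGHT_U            # 20 units fill the rack
total_price = units * UNIT_PRICE               # $4.4M -- "close to $4.5 million"
capacity_per_unit = TOTAL_CAPACITY_TB / units  # 5 TB per unit

print(units, total_price, capacity_per_unit)   # 20 4400000 5.0
```

That works out to roughly $44,000 per terabyte -- an order of magnitude or more above hard-disk pricing of the day, which is the whole objection.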
Some storage experts say the way to go is to install SSDs in existing arrays as tier 0 storage, reserved for the most time-critical data. That's probably the only way to go at the moment, but given that the rest of the array architecture remains the same, logic dictates that overall performance will still be constrained.
As with most technology issues, however, this one seems easily solvable. But don't look for the fix in the next generation of drives. What's needed is an optimized array -- one that can handle the vagaries of both hard disk and solid state.