SSDs in the Storage Array: Square Pegs in Round Holes?

Arthur Cole

The big news in enterprise storage continues to revolve around the use of solid-state disks (SSDs) in architectures that were once the exclusive domain of hard disks.


And yet, despite all the headlines of new technologies and faster performance, actual deployments are still relatively rare, outside of a few showcase data centers. So far, the reasons given for this lack of enthusiasm center around price and reliability, but could there be another factor at work here?

The Taneja Group's Jeff Boles thinks there is. As he sees it, packaging SSDs in standard hard-disk form factors -- namely the 2.5-inch and 3.5-inch sizes that are commonplace these days -- implies that they are intended to replace the hard drives in modern disk arrays. What nobody bothers to mention is that because these arrays were originally designed for spinning platters, they lack the ability to process data at SSD speeds. So what you end up with is a very expensive drive that, in practice, is no faster than a cheaper hard drive once the data hits the controller -- and the problem only grows as the number of SSDs increases. There's also the fact that most arrays can't share SSDs across data sets and have trouble migrating volumes in and out of SSDs.
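
To put rough numbers on that bottleneck, here is a back-of-the-envelope sketch; the IOPS figures below are illustrative assumptions, not measurements of any particular drive or array:

```python
# Back-of-the-envelope sketch of the controller bottleneck described above.
# All figures are illustrative assumptions, not vendor specifications.

HDD_IOPS = 180             # assumed random IOPS for a single fast hard drive
SSD_IOPS = 30_000          # assumed random IOPS for a single enterprise SSD
CONTROLLER_IOPS = 120_000  # assumed ceiling of a controller designed for spinning disks

def effective_iops(drive_iops: int, drive_count: int, controller_limit: int) -> int:
    """Aggregate drive IOPS, capped by what the array controller can process."""
    return min(drive_iops * drive_count, controller_limit)

for drives in (1, 4, 8, 16):
    hdd = effective_iops(HDD_IOPS, drives, CONTROLLER_IOPS)
    ssd = effective_iops(SSD_IOPS, drives, CONTROLLER_IOPS)
    print(f"{drives:>2} drives: HDD array {hdd:>7,} IOPS | SSD array {ssd:>7,} IOPS")

# With these assumptions the HDD array never gets near the controller's ceiling,
# while the SSD array hits it at just four drives -- every SSD added after that
# delivers no extra performance, which is the compounding problem described above.
```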

Some of the performance characteristics of SSDs are being blown out of proportion as well, according to Steve Sicola, CTO of Xiotech. He recently told an audience in San Diego that performance can vary widely from drive to drive, a problem that is compounded when operating systems and applications are not geared toward SSDs. He also cautioned that, given the technology's weaker showing in reliability and data handling, you will likely need to provision roughly double the recommended capacity to achieve the desired results.

One solution, of course, is to deploy a dedicated SSD array. Texas Memory Systems recently took the wrappings off the RamSan-6200, which can deliver 100 TB and 5 million IOPS at 60 GBps in a 40U configuration. The downside, of course, is the price: a single 2U RamSan-620 unit runs about $220,000, which puts the full array at close to $4.5 million.
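
For what it's worth, the arithmetic behind that price tag works out roughly as follows, assuming the 40U configuration is built from twenty 2U RamSan-620 units (an inference from the quoted totals, not a published parts list):

```python
# Rough cost math for the RamSan-6200 figures cited above.
unit_price = 220_000      # approximate price of one RamSan-620, per the article
units_in_array = 20       # assumed: twenty 2U units filling the 40U configuration
capacity_gb = 100 * 1000  # 100 TB expressed in GB

array_price = unit_price * units_in_array
print(f"Full array: ${array_price:,} (~${array_price / capacity_gb:.0f} per GB)")
# -> Full array: $4,400,000 (~$44 per GB)
```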

Some storage experts say the way to go is to install SSDs in existing arrays as tier 0 storage, reserved for the most time-critical data. That's probably the only practical option at the moment, but since the rest of the array architecture remains the same, logic dictates that the controller will still hold back overall performance.

As with most technology issues, however, this one seems easily solvable. But don't look for the answer in the next generation of drives. What's needed is an optimized array -- one that can handle the vagaries of both hard disk and solid state.

Comments
Aug 25, 2009 8:11 AM kostadis roussos says:


I could not agree more with you. The phenomenon you are describing is the mismatch between the IOPS density of application data and the IOPS density of SSDs. Most applications have only a small subset of data that requires a lot of IOPS, because applications have architected themselves around assumptions about the memory hierarchy.

EMC's FAST is, in my mind, an admission that the SSD form factor is dead without some kind of automatic data movement between tiers.
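
For readers unfamiliar with automated tiering, here is a toy sketch of the idea -- promote the handful of hot extents to a small SSD tier and leave the cold bulk on HDD. It illustrates the concept only; it is not EMC FAST's actual algorithm, and the extent names and I/O counts are made up:

```python
# Toy illustration of automated tiering: keep the hottest extents on SSD.
# This is NOT EMC FAST's actual algorithm -- just the general idea.

def place_extents(io_counts: dict[str, int], ssd_slots: int) -> dict[str, str]:
    """Map each extent to 'ssd' or 'hdd' based on its recent I/O count."""
    ranked = sorted(io_counts, key=io_counts.get, reverse=True)
    hot = set(ranked[:ssd_slots])
    return {ext: ("ssd" if ext in hot else "hdd") for ext in ranked}

# A skewed workload: a few extents take nearly all the I/O (the "small subset" above).
io_counts = {"ext-a": 9_500, "ext-b": 7_200, "ext-c": 310, "ext-d": 42, "ext-e": 8}
print(place_extents(io_counts, ssd_slots=2))
# {'ext-a': 'ssd', 'ext-b': 'ssd', 'ext-c': 'hdd', 'ext-d': 'hdd', 'ext-e': 'hdd'}
```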

I'll contend that SSDs may exist inside of storage arrays, but they will be a boutique and ultimately uninteresting technology.

I write about the SSD mistake in my series on the DAS disruption in my blog.



Sep 13, 2009 11:26 AM Tony says: in response to kostadis roussos


Are you serious when you say ...

"I'll contend that SSD's may exist inside of storage arrays, but they will be a boutique and ultimatley uninteresting technology."

Technology for the sake of technology may be interesting to a few, but it is of little business value if not utilized properly.

Maybe that is the comment you need to make about EMC's FAST. It sounds like a kludge to get around a problem caused by differing disk technologies needing to work together cohesively.

Maybe your negativity towards SSD in the enterprise has more to do with the fact that NetApp doesn't have a solution to sell here. I am sure if they did, your stance would be totally different.

The truth is that both EMC and NetApp are unable to make full and proper use of SSD disks with the current state of array technology, thus failing to deliver business value. EMC have decided to kludge their way through it and NetApp have decided to pour cold water over it.

Why doesn't either company innovate? Maybe it is because you have lost the spirit of innovation ...

There are solutions available from other vendors that can get business value from this "boutique and uninteresting technology" ...

I am sure that when Gottlieb Daimler and Karl Benz started to build cars, many people regarded them as uninteresting and boutique in comparison to the horse and cart ...

Good luck selling the Horse and Cart ....


Sep 15, 2009 4:05 AM Anthony says: in response to Tony


1) In two process generations, NAND Flash latency will be worse than spinning disk.

2) The dramatic, further price declines in NAND Flash chips that most people think are coming...are not coming.

3) Even with today's SSD performance, real-world application workloads produce only about 10% of the SSD manufacturers' claimed performance.

To make it worse, Flash performance is getting worse, not better, as lithography shrinks and bits-per-cell goes up. The astronomical costs of the new silicon foundries that can handle these new fab processes mean that Flash cost/GB is not going to come down fast enough, and Flash, after gaining ground for years, will now lose ground against spinning disk.

So...Flash performance is getting worse and costs to produce it are going up. There's a formula for success!

Now, the problem for Flash SSD in enterprise apps (even before the "lithography death-march", as Sun's CTO called it) is ROI.

There isn't any.

There are really good reasons we have never seen an SSD-based system on the TPC-C benchmark. TPC-C is the most popular and respected audited benchmark on the planet for transaction-oriented applications -- especially because it states the bottom line so clearly, in terms of "dollars per tpmC": the total cost of the system divided by the number of transactions per minute it can do. Typically, the cost of a TPC database system is 85% storage -- usually 300-400 spinning disks per server! If SSDs really could cost-effectively replace hundreds (or even dozens) of spinning disks, they'd deliver one helluva ROI, right?
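
To make the "dollars per tpmC" arithmetic concrete, here is a minimal sketch; the system price, storage share, and throughput are hypothetical placeholders, not figures from any published TPC-C result:

```python
# The TPC-C bottom line ("price/tpmC") is total system price divided by throughput.
# All figures below are hypothetical, purely to show the arithmetic.

system_price = 2_000_000   # hypothetical total price of servers, storage, software
storage_share = 0.85       # the ~85% storage share mentioned above
tpmc = 1_000_000           # hypothetical transactions per minute (tpmC)

print(f"price/tpmC: ${system_price / tpmc:.2f}")                     # $2.00
print(f"storage cost alone: ${system_price * storage_share:,.0f}")   # $1,700,000
```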

And if SSD were really able to deliver a strong ROI in cost/performance terms, we'd have seen these benchmarks by now...but we haven't.

To get a clue why not, check out the SPC-1 benchmark that IBM ran on the STEC SSD.

IOPS on SSD are supposed to be (vastly) cheaper than HDD, right? That's the WHOLE premise of the SSD value proposition -- that's the reason people are supposed to be willing to pay more for capacity -- because those "solid-state IOPS" are SO CHEAP, right?

Unfortunately, when you run a REAL workload, using a third-party audited and certified benchmark like SPC-1, you find out that SSD IOPS are NOT 100x cheaper than HDD, nor 10x cheaper, nor even 1.5x cheaper. SSD IOPS are no cheaper at all (actually, slightly more expensive).

That's the difference between "real-world" application performance and the SSD vendor spec-sheets.

Meanwhile, SSD capacity in this same benchmark was 135 times (that's 13,500%) more expensive than HDD capacity.

In REAL WORLD enterprise applications, SSD technology delivers IOPS that are no cheaper than spinning disk, and delivers capacity that is astronomically MORE costly.
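
The shape of that result can be sketched as follows; the prices, IOPS and capacities below are made-up placeholders chosen to mirror the ratios described above, not the actual IBM/STEC SPC-1 numbers:

```python
# Sketch of the $/IOPS and $/GB comparison described above. The figures are
# placeholders that mirror the ratios in the comment, not real SPC-1 results.

hdd_array = {"price": 100_000.0, "iops": 25_000.0, "gb": 50_000.0}
ssd_array = {"price": 100_000.0, "iops": 24_000.0, "gb": 370.0}

for name, cfg in (("HDD", hdd_array), ("SSD", ssd_array)):
    per_iops = cfg["price"] / cfg["iops"]
    per_gb = cfg["price"] / cfg["gb"]
    print(f"{name}: ${per_iops:.2f}/IOPS, ${per_gb:.2f}/GB")
# HDD: $4.00/IOPS, $2.00/GB
# SSD: $4.17/IOPS, $270.27/GB  -> IOPS slightly MORE expensive, capacity ~135x more
```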

That's why SSDs are "a boutique and ultimately uninteresting technology," or, as I've heard it said, "there's a turd in the punchbowl at the SSD hype party."

