SSDs and the Need for Better Data Management

Arthur Cole

So much attention has been given to the speed and performance advantages of SSDs over hard disks that it's easy to forget one central fact: an individual disk is merely one component in a larger, interdependent network environment.

That means if you deploy SSDs expecting to see an instant boost in I/O, as they say in Brooklyn, fuhgeddaboudit.

Jeff Boles of The Taneja Group does a good job of spelling out the dilemma many new enterprise SSD users find themselves in. SSD performance is often so high that standard storage controllers and caching software can't keep up. Many arrays also lack the ability to share SSDs across multiple data sets, or can't deliver advanced features like thin provisioning and snapshotting, making it difficult to place the most appropriate data on the SSD tier.

For these and other reasons, it's impossible to see the full benefits of SSDs without overhauling the entire storage architecture, which is both expensive and mind-bogglingly complicated, says Enterprise Strategy Group's Steve Duplessie. What's needed, he says, is a dynamic, automated means of optimizing the connection between user and data through some sort of centralized, intelligent control over availability, routing, application priority and other elements. VMware is working on the problem, but it will have to step on a lot of sensitive, proprietary toes to pull it off.
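No such control plane exists yet, but the idea is easy to sketch. The toy Python below routes I/O by application priority and tier health; every name in it (the Tier class, the priority table, the routing rule) is a hypothetical illustration, not VMware's or any vendor's actual design:

```python
# Toy sketch of a centralized, priority-aware I/O control layer.
# All names and numbers here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_ms: float   # typical service latency for this tier
    available: bool     # health/availability signal

# Per-application priorities: lower number = more latency-sensitive.
APP_PRIORITY = {"oltp-db": 0, "web-app": 1, "batch-etl": 2}

def route_request(app: str, tiers: list[Tier]) -> Tier:
    """Send high-priority apps to the fastest available tier,
    lower-priority apps to whatever is left."""
    healthy = sorted((t for t in tiers if t.available),
                     key=lambda t: t.latency_ms)
    if not healthy:
        raise RuntimeError("no storage tier available")
    rank = APP_PRIORITY.get(app, len(APP_PRIORITY))
    # Clamp so low-priority apps land on the slowest healthy tier.
    return healthy[min(rank, len(healthy) - 1)]

tiers = [Tier("ssd", 0.1, True), Tier("fc-disk", 5.0, True),
         Tier("sata", 12.0, True)]
print(route_request("oltp-db", tiers).name)    # ssd
print(route_request("batch-etl", tiers).name)  # sata
```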

In the meantime, we'll have to be content with some of the new SSD management systems coming out. IBM recently launched its Smart Data Placement tool to coincide with the STEC SSDs it's adding to the Power6 iSeries server. The company says that by diverting performance-hungry data, such as the indices and hot tables found in relational databases and Web applications, directly to SSDs, it can reduce physical storage footprints by 80 percent and improve response times eight-fold. The key benefit is that it reduces the need to short-stroke magnetic media, freeing up more space for general storage needs.
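IBM hasn't published the internals of Smart Data Placement, but the general technique, tracking access heat and pinning the hottest objects to flash, fits in a few lines. The class name, capacity model and sample objects below are made up for illustration:

```python
# Minimal sketch of heat-based data placement, the general technique
# behind tiering tools of this kind. Names and numbers are hypothetical.

from collections import Counter

class TieringMonitor:
    def __init__(self, ssd_capacity: int):
        self.access_counts = Counter()    # object name -> access count
        self.ssd_capacity = ssd_capacity  # how many objects fit on flash

    def record_access(self, obj: str):
        self.access_counts[obj] += 1

    def placement(self) -> dict:
        """Put the hottest objects (e.g., indices, hot tables) on SSD;
        everything else stays on magnetic media."""
        hot = {name for name, _ in
               self.access_counts.most_common(self.ssd_capacity)}
        return {name: ("ssd" if name in hot else "hdd")
                for name in self.access_counts}

mon = TieringMonitor(ssd_capacity=2)
for obj in ["orders_idx"] * 50 + ["customers"] * 40 + ["audit_log"] * 3:
    mon.record_access(obj)
print(mon.placement())
# {'orders_idx': 'ssd', 'customers': 'ssd', 'audit_log': 'hdd'}
```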

Sun Microsystems has taken a similar tack by adding the SSD management stack found in its Amber Road storage line to the ZFS file system in the new OpenSolaris platform. The system identifies workloads and assigns them to the most appropriate storage; flash is reserved for high-performance workloads, which ZFS manages as pools without the need for individual caches on the controller. Sun is also breaking with tradition by offering the system for free.
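Sun hasn't spelled out its classification rules, but the workload-to-pool idea can be sketched with a made-up heuristic: small random I/O goes to flash, large sequential streams stay on disk. Everything below is an assumed illustration, not Sun's actual logic:

```python
# Sketch of workload-to-tier assignment in the spirit of hybrid pools.
# The thresholds and heuristic are invented for illustration.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    avg_io_kb: float        # average I/O size
    random_fraction: float  # 0.0 = fully sequential, 1.0 = fully random

def assign_pool(w: Workload) -> str:
    """Flash rewards small random I/O; big sequential scans stream
    fine off rotating disks, so save flash for what benefits most."""
    if w.random_fraction > 0.7 and w.avg_io_kb <= 16:
        return "flash-pool"
    return "disk-pool"

for w in [Workload("oltp", 8, 0.9), Workload("backup-stream", 512, 0.05)]:
    print(w.name, "->", assign_pool(w))
# oltp -> flash-pool
# backup-stream -> disk-pool
```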

On one level, you can't fault SSD backers for touting the technology's superiority when it comes to speed and power consumption. But it's going a little overboard to say that these devices will remake enterprise storage as we know it.

Pound for pound, SSDs have a lot of advantages over traditional media, but a whole new storage environment will require, well, a whole new storage environment.

Jun 19, 2009 5:52 AM George Ludgate says:

I want it to be used as a cache by database vendors. After running for a while, most databases would be entirely in the cache. Since typical database use involves many more reads than writes, the speedup would be enormous. I don't believe it would be hard to do, as DBMS vendors already have in-memory caches, and these disks could be used similarly. Go for it, Oracle.
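The two-level cache George describes can be sketched in miniature: an in-memory LRU in front of an SSD-resident cache, with disk as the backing store. The class and the backing_store callable are hypothetical stand-ins, not any DBMS vendor's API:

```python
# Minimal sketch of a RAM -> SSD -> disk read cache. Names are invented.

from collections import OrderedDict

class TwoLevelCache:
    def __init__(self, ram_slots: int, ssd_slots: int, backing_store):
        self.ram = OrderedDict()   # hot pages in memory
        self.ssd = OrderedDict()   # warm pages on flash
        self.ram_slots, self.ssd_slots = ram_slots, ssd_slots
        self.backing_store = backing_store  # callable: key -> value (disk read)

    def get(self, key):
        if key in self.ram:
            self.ram.move_to_end(key)        # refresh LRU position
            return self.ram[key]
        if key in self.ssd:
            value = self.ssd.pop(key)        # promote SSD hit to RAM
        else:
            value = self.backing_store(key)  # miss: slow disk read
        self._put_ram(key, value)
        return value

    def _put_ram(self, key, value):
        self.ram[key] = value
        if len(self.ram) > self.ram_slots:   # evict coldest RAM page to SSD
            old_key, old_val = self.ram.popitem(last=False)
            self.ssd[old_key] = old_val
            if len(self.ssd) > self.ssd_slots:
                self.ssd.popitem(last=False) # fall off flash entirely

cache = TwoLevelCache(ram_slots=2, ssd_slots=4,
                      backing_store=lambda k: f"row-{k}")
for k in [1, 2, 3, 1, 4]:
    cache.get(k)
print(list(cache.ram), list(cache.ssd))  # hottest keys in RAM, rest on SSD
```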


