New Options for Solid-State Cache

Arthur Cole

The movement toward solid-state cache is in full swing, giving enterprises a much-needed boost in application performance and critical data throughput. But while it may seem like adding SSD cache is a no-brainer, the fact is that multiple options are already starting to emerge, and the choices you make may have a significant impact on overall performance.

According to TheInfoPro, SSD/flash cache could see growth of 36 percent this year, rising from a bare 2 percent in 2011. Of the deployments so far, nearly half involve adding solid-state drives to existing storage arrays, with another 11 percent or so coming in the form of server-side Flash. That means there is still plenty of growth in a market that is trying to satisfy the conflicting demands of greater throughput and performance for the cloud and lower costs and energy consumption for the green IT movement.

In a typical deployment, a solid-state drive or card is added directly to the storage array or server where it can be used to house high-priority or highly active data. Users gain the advantage of the high read/write capabilities of SSDs compared to traditional spinning media, coupled with advanced tiering software that prioritizes data to the appropriate medium.
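The tiering logic described above can be sketched roughly as follows. This is a minimal illustration, not any vendor's actual implementation; the block-ID scheme, access-count threshold and tier names are assumptions made for the example.

```python
from collections import Counter

class TieringCache:
    """Illustrative hot-data tiering: frequently accessed blocks are
    promoted to the SSD tier; everything else stays on spinning disk.
    Threshold and capacity values are arbitrary for demonstration."""

    def __init__(self, ssd_capacity, hot_threshold=3):
        self.ssd_capacity = ssd_capacity    # number of blocks the SSD tier holds
        self.hot_threshold = hot_threshold  # accesses before a block counts as "hot"
        self.access_counts = Counter()
        self.ssd_tier = set()               # block IDs currently on SSD

    def read(self, block_id):
        """Record an access and report which medium serves the block."""
        self.access_counts[block_id] += 1
        if (self.access_counts[block_id] >= self.hot_threshold
                and len(self.ssd_tier) < self.ssd_capacity):
            self.ssd_tier.add(block_id)     # promote hot block to SSD
        return "ssd" if block_id in self.ssd_tier else "hdd"
```

Real tiering engines use far more sophisticated heuristics (recency windows, write coalescing, demotion of cooled data), but the core idea is the same: count accesses and keep the hottest blocks on the fastest medium.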

One of the main drawbacks of server-side solutions is the inability to pool cache resources, which forces each server to rely solely on its local cache. QLogic says it has come up with a fix in its Mt. Rainier system by moving the cache behind the Fibre Channel HBA. In this way, one server can access the cache on other machines just as it would any other networked storage resource. And since all of the HBAs share a common communications protocol, they can constantly ping one another to determine how much cache each is currently maintaining and what its I/O levels are like. The approach also cuts down on the number of drivers needed for HBAs, adapters, filtering and related software.
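The peer-status exchange that makes such pooling possible can be sketched generically as below. This is not QLogic's protocol, which is proprietary; the field names and the least-loaded-peer selection rule are illustrative assumptions.

```python
class CachingHBA:
    """Rough sketch of pooled server-side cache: each adapter tracks its
    own cache occupancy and I/O load, and peers query one another to
    decide where spare cache is available. (Generic illustration only.)"""

    def __init__(self, name, cache_used_gb, iops):
        self.name = name
        self.cache_used_gb = cache_used_gb  # cache currently in use
        self.iops = iops                    # current I/O load
        self.peers = []                     # other adapters on the fabric

    def ping(self):
        # The status a peer learns from one exchange.
        return {"name": self.name,
                "cache_used_gb": self.cache_used_gb,
                "iops": self.iops}

    def least_loaded_peer(self):
        # Ask every peer for its status and pick the lowest I/O load.
        reports = [peer.ping() for peer in self.peers]
        return min(reports, key=lambda r: r["iops"])["name"] if reports else None
```

A server whose local cache is full could then redirect overflow to whichever peer reports the most headroom, just as it would address any other networked storage resource.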

Flash storage is a relative newcomer to the enterprise, and organizations should expect to pay top dollar for the ability to handle high-speed, high-density environments. However, Marvell is close to releasing a cache accelerator capable of boosting the performance of low-cost consumer technology to enterprise levels. The DragonFly NVRAM and DragonFly NVCache systems will come in the form of PCIe adapter cards designed to bolster the read/write performance of up to four consumer SSDs per card, providing a max capacity of 1.5 TB. The cards are compatible with SAS or SATA 2.5-inch drives and offer advanced features like peer-to-peer mirroring and automated backup to non-volatile memory in case of power interruption.

Software tools are also poised to bring new capabilities to Flash-based storage. SanDisk Corp., for example, has updated its FlashSoft system to double or even triple both application performance and virtual machine density in virtual environments. The software embeds itself in the resident virtualization platform, providing native OS support for both single servers and cluster configurations and bolstering I/O to individual servers or virtual disks by shifting hot data to server-side Flash tiers. The system supports all standard virtualization features, such as high availability, VM migration, and snapshots and linked clones, with no changes required for VMs, applications or storage infrastructure.

As I've noted in the past, adding solid-state storage directly to the server opens up a range of possibilities besides simple caching. As capacities grow, it's reasonable to begin to question the need for networked storage and all the complexities that go with it.

A fully localized, all-SSD storage architecture may have many advantages, but the immediate application as a high-speed cache will probably have to run its course before enterprises feel comfortable taking the next step.

As the old saying goes: "You have to walk before you can run."



Sep 26, 2012 2:26 PM StorageOlogist says:

Nice article. I do think that one point needs to be clarified: caching of fast media in front of higher-capacity, lower-cost media will always be around, even in an all-solid-state future. Today, the best price:performance combination in an array for the majority of applications comes from flash/SSD at the front end and HDD at the back end. All solid state is not a panacea; it is simply a change of media. You still have fast-and-expensive alongside high-capacity-and-lower-cost. Here is a link to a blog I wrote, "A little bit of flash goes a long way": http://blog.starboardstorage.com/blog/bid/209395/A-little-bit-of-flash-goes-a-long-way-How-to-make-economical-use-of-SSD
