Solid-state storage is making significant inroads into enterprise infrastructure, primarily as a server-side cache to speed up data I/O for time-sensitive, mission-critical applications.
Less common, but just as valuable to overall data operations, is the rise of hybrid storage architectures, which apply essentially the same practice as server-side caching but boost throughput at the storage array instead. In fact, hybrid storage architectures will likely benefit the enterprise to an even greater degree, since it is the storage side of the house, not the server, where most of the bottlenecks lie.
This is not the first time designers have attempted to boost storage performance with advanced caching techniques. As Enterprise Storage Forum’s Henry Newman notes, however, previous designs were less than adequate for most applications because the cache was either too small or too far from regular storage to make much of a difference. Today’s systems not only feature high-speed flash and/or DDR memory but also encompass advanced techniques to guide volume and thread allocation, file management and other crucial factors. Nevertheless, optimal performance is never a function of infrastructure alone, so enterprise executives need to synchronize caching at the storage-system and application levels to maximize functionality.
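To make the basic caching idea concrete, here is a minimal sketch of a least-recently-used (LRU) read cache, the kind of policy a flash tier in front of slower disk typically approximates. This is an illustrative toy, not any vendor's actual implementation; the class and names are hypothetical.

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache: recently read blocks stay in the fast tier,
    the least recently used block is evicted when capacity is exceeded."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity       # number of blocks the fast tier holds
        self.backing = backing_store   # dict standing in for the slow disk tier
        self.cache = OrderedDict()     # block_id -> data, kept in LRU order

    def read(self, block_id):
        if block_id in self.cache:
            # Cache hit: refresh recency and serve from the fast tier.
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        # Cache miss: fetch from the backing store and cache the block.
        data = self.backing[block_id]
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used block
        return data
```

The point of the sketch is the interplay the paragraph describes: the policy only pays off if the blocks an application re-reads actually fit in the cache, which is why cache sizing and application-level access patterns have to be considered together.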
For this reason, nearly all the new hybrid systems consist of integrated hardware/software platforms designed to coordinate traffic across the various storage tiers. Starboard Storage Systems, for instance, has upgraded its mid-level AS2000, 4000 and 4500 appliances with a new version of the Starboard OS. The package includes the CacheControl module, which provides an eight-fold improvement in read efficiency over traditional caching approaches through a mix of acceleration, compression and dynamic pooling techniques. The top-end 4500 device provides four write caches and up to 12.8 TB of read cache that functions across Fibre Channel, iSCSI, CIFS and NFS workloads.
Meanwhile, Avere Systems has a new hybrid appliance, the FXT3800 edge filer, that combines DRAM, NVRAM and SSD storage with high-speed networking to enable broad data tiering across legacy RAM, SSD, SAS and SATA environments. The company is pitching the system as an effective way for cloud storage architectures to make the leap from backup and archiving applications to higher-order functions by moving active data out of core storage and onto the enterprise edge. And the company is already claiming better SPEC performance than both EMC and NetApp.
Oracle is also quickly adding solid-state modules to its storage lines as a means to supplement the performance of the new SPARC T5 and M5 platforms. The strategy encompasses everything from the entry-level Sun ZFS 7120 to the top-end 7420 that maxes out at 2.6 petabytes. The systems mix standard 7200 rpm disk drives with on-board DRAM, supported by automated tiering software to manage hot and cold data. At the same time, the systems are backed by 40 Gbps InfiniBand and up to 14.5 TB of total flash cache.
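The automated hot/cold tiering mentioned above can be sketched in rough outline: rank blocks by access frequency and promote the hottest ones into the limited flash tier. This is a hypothetical simplification for illustration, not Oracle's actual tiering algorithm; the function name and threshold scheme are assumptions.

```python
from collections import Counter

def tier_blocks(access_log, flash_slots):
    """Split blocks into hot and cold sets given an access log
    and a flash-tier budget of flash_slots blocks."""
    counts = Counter(access_log)
    # Promote the most frequently accessed blocks, up to flash capacity.
    hot = {blk for blk, _ in counts.most_common(flash_slots)}
    # Everything else stays on spinning disk.
    cold = set(counts) - hot
    return hot, cold
```

Real tiering software would also weigh recency, I/O size and write patterns, and would migrate data incrementally rather than recomputing the split wholesale, but the frequency-ranking idea is the core of it.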
There are still many who argue that any spinning media in the storage array is a waste. All-flash arrays are capable of providing performance boosts across the board, not just for critical loads, and the cost of solid state is rapidly approaching parity with hard disk. Others beg to differ, however, and the argument is likely to go on for some time.
But the fact remains that enterprise executives are naturally hesitant to entrust an important function like storage to a new technology all at once. Flash-based cache, then, is the perfect opportunity to get your feet wet, if only to accurately gauge the technology’s strengths and, yes, weaknesses (cough, durability).
Diversity among data loads will only become more acute as the cloud and mobility become commonplace in the enterprise. It’s only fitting that diversity of storage media should become commonplace as well.