Adding solid-state disks to your storage infrastructure does wonders to alleviate the bottlenecks impairing application performance. But faster throughput alone will not solve all the problems brought on by the relentless increase in data loads.
In fact, adding a tier of ultra-fast storage can complicate your efforts to manage data flow, yielding diminishing returns as network managers struggle to ensure data is always served from the appropriate source.
Small wonder, then, that storage-automation companies are racing to max out the SSD capabilities on their latest platform generations. 3PAR, for one, is matching its Adaptive Optimization software found in the InServ F- and T-Class storage servers with STEC's enterprise-class MACH8IOPS SSD, a move that delivers what the companies call "autonomic storage tiering" that dynamically assigns data to the appropriate storage medium at the sub-volume level.
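3PAR has not published the internals of Adaptive Optimization, but the general idea behind heat-based sub-volume tiering can be sketched as follows. This is a hypothetical illustration, not 3PAR's actual implementation: track I/O activity per sub-volume chunk, then periodically promote hot chunks to the SSD tier and demote cold ones. The class, constants, and thresholds are all illustrative assumptions.

```python
from collections import defaultdict

# Illustrative sketch of heat-based sub-volume tiering.
# All names and thresholds here are hypothetical, not 3PAR's API.

PROMOTE_THRESHOLD = 1000  # I/Os per interval above which a chunk moves to SSD
DEMOTE_THRESHOLD = 50     # I/Os per interval below which it falls back to HDD

class SubVolumeTierer:
    def __init__(self):
        self.io_counts = defaultdict(int)        # chunk id -> I/Os this interval
        self.tier = defaultdict(lambda: "hdd")   # chunk id -> current tier

    def record_io(self, chunk_id, n=1):
        """Called on every read/write; tracks per-chunk 'heat'."""
        self.io_counts[chunk_id] += n

    def rebalance(self):
        """Run periodically: promote hot chunks to SSD, demote cold ones."""
        moves = []
        for chunk, count in self.io_counts.items():
            if count >= PROMOTE_THRESHOLD and self.tier[chunk] != "ssd":
                self.tier[chunk] = "ssd"
                moves.append((chunk, "ssd"))
            elif count <= DEMOTE_THRESHOLD and self.tier[chunk] != "hdd":
                self.tier[chunk] = "hdd"
                moves.append((chunk, "hdd"))
        self.io_counts.clear()  # start a fresh measurement interval
        return moves
```

The key point the sketch captures is granularity: decisions are made per chunk, not per volume, so a mostly cold volume with one hot region consumes only a sliver of flash.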
This kind of automation is also finding its way into highly targeted applications. The recent combination of FalconStor's NSS SAN accelerator and Violin's 1010 Flash memory appliance is designed to add an SSD tier to virtual SAN environments, where it can be used as a high-I/O cache for critical application data. The setup delivers low-latency reads and writes even as larger volumes head to longer-term SAN storage.
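The caching arrangement described above can be sketched in miniature. This is not FalconStor's or Violin's actual interface; it is a hypothetical write-through LRU cache sitting in front of a slower backing store, with all class and method names assumed for illustration.

```python
from collections import OrderedDict

# Hypothetical sketch of an SSD tier acting as a write-through cache
# in front of slower SAN storage; names are illustrative only.

class SSDCache:
    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store      # dict-like slow SAN tier
        self.capacity = capacity_blocks
        self.cache = OrderedDict()        # LRU order: block id -> data

    def read(self, block):
        if block in self.cache:           # low-latency hit on flash
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backing[block]        # miss: fetch from the SAN tier
        self._insert(block, data)
        return data

    def write(self, block, data):
        self.backing[block] = data        # write-through keeps the SAN current
        self._insert(block, data)         # subsequent reads hit flash

    def _insert(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least-recently used block
```

Write-through is the conservative choice for the scenario in the article: the SAN copy is always current, so losing the flash tier costs performance, not data.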
The need to automate solid-state storage is prompting some storage experts to reappraise how SSD performance is measured. Adam Day of systems integrator SYSDBA, which represents 3PAR among others, cautions against comparing SSDs on capacity or throughput alone. A more telling metric, he argues, is how well they lend themselves to automated environments, which says more about whether a particular system will help achieve business objectives.
But don't make the mistake of overdoing an automated infrastructure either, according to Storage Switzerland's George Crump. Many organizations simply don't produce enough I/O-intensive data to justify a fully automated platform and would probably do better with targeted PCIe-based SSDs on select servers. If you do pursue an aggressive automation strategy, make sure your targets are optimized for the type of storage you require -- nearline, backup, archiving, etc. -- and be wary of systems that scatter data across various tiers unless you plan to implement file virtualization as well.
In general, automation goes a long way toward streamlining data center operations and leveraging hardware to its full capacity. But it can also lead to overly complicated data environments if not maintained and recalibrated on a regular basis.
Still, as data platforms continue to multiply thanks to virtualization, the cloud and other advances, automation will cease to be a luxury for the well-heeled and will become a necessity for the masses.