A Road Map to a Converged Infrastructure
Establish a common modular infrastructure by grouping servers, storage and I/O resources into shared pools of computing resources.
By now it's a given that SSDs provide faster throughput, lower operational costs and high reliability. So it should be a no-brainer that they should make up at least part of the enterprise storage infrastructure.
However, whenever new technology is integrated into legacy environments, care must be taken to ensure that problems the new technology is intended to address are indeed rectified and not merely shunted elsewhere.
For SSDs, that problem would be bottlenecks. What, asks Tintri's Ed Lee, is to prevent latency that currently builds up at the drive level from simply shifting to another component of the data architecture? Bottlenecks have a way of attracting a lot of time and money to key points in the data path. Move the bottleneck to a new point and not only has that investment diminished in value, but overall system performance is hampered because there are no tools to address issues at the new choke point. Most storage systems, he notes, were designed more than 20 years ago and are still geared toward ever larger arrays of hard disks, not Flash storage.
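The shifting-bottleneck argument can be made concrete with a simple profiling sketch. The snippet below, a minimal illustration rather than any vendor's tooling, times each stage of a hypothetical data path; the stage names and sleep-based latencies are invented purely to show how a choke point can move off the drive once Flash is introduced.

```python
import time

def profile_stages(stages):
    """Time each stage of a data path and return per-stage latencies.

    `stages` maps a stage name to a zero-argument callable. The names
    used below ("drive", "controller", "network") are illustrative,
    not drawn from any specific product or architecture.
    """
    latencies = {}
    for name, fn in stages.items():
        start = time.perf_counter()
        fn()
        latencies[name] = time.perf_counter() - start
    return latencies

def bottleneck(latencies):
    """Return the stage with the highest measured latency."""
    return max(latencies, key=latencies.get)

# Simulated data path after a drive upgrade: the "drive" stage is now
# fast (SSD), so the choke point merely shifts to the legacy controller.
stages = {
    "drive": lambda: time.sleep(0.001),       # SSD: now sub-millisecond
    "controller": lambda: time.sleep(0.010),  # legacy controller dominates
    "network": lambda: time.sleep(0.002),
}
lat = profile_stages(stages)
print(bottleneck(lat))
```

In this toy run the slowest stage is no longer the drive, which is exactly Lee's point: without instrumentation at every layer, the investment in fixing one choke point simply relocates the problem to a component you aren't measuring.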
Still, a number of vendors have come out with all-Flash arrays that could be used as the building blocks for high-speed storage architectures. HP's additions to the LeftHand P4000 line, for example, feature new management capabilities designed to gauge system performance and potential trouble spots. At the same time, smaller firms like Nimbus Data Systems are devising enterprise-class arrays with ever-increasing capacities and support for common storage protocols like iSCSI, Fibre Channel and NFS. Still, it must be noted that these systems will have to integrate into legacy storage environments as well, unless enterprises are willing to silo their Flash infrastructure.
Of course, that would fly in the face of the storage/network consolidation movement that has guided system deployment for the past five years or so. But consolidation itself has proven to be a bigger challenge than many realized, considering the plethora of choices hitting the channel. As Storage Switzerland's George Crump notes, choice is a good thing, but it can lead to many false promises and dead ends. And the simple fact is that legacy infrastructure is one of the chief factors guiding future deployment options: if a solution does not work well with what you have, then it's no solution.
This brings us back to square one. Technologies like solid-state storage live up to their promises for the most part. But simply increasing throughput on the drive level is not the end of the matter. Once you recognize the data infrastructure as an organic entity in which changes in one area can vastly affect the performance of another, you can begin to recognize where current and future weaknesses lie and how best to resolve them.