Diversity of Data Requires Diversity of Storage

Arthur Cole

It’s been clear for some time that the traditional storage area network (SAN) has been under siege in the data center. With server infrastructure becoming increasingly distributed, both on premises and in the cloud, a centralized array supported by advanced storage-optimized networking is now widely seen as a hindrance to data productivity.

But if storage is to be distributed along with processing, how do you overcome the obvious difficulties of aggregating resources and establishing effective tiering capabilities? And how can you effectively scale storage independently of increasingly virtualized server and networking infrastructure in order to satisfy the diverse requirements of emerging data loads?

One solution is the server SAN, says TechRepublic’s Keith Townsend. By leveraging server and storage convergence, systems like EMC’s ScaleIO and Nutanix can run traditional workloads on virtualized cloud architectures while still providing the SAN functionality that the enterprise has come to rely on. Indeed, performance of more than 1 million IOPS is already being reported across clusters of several dozen to several hundred nodes, and free or community-based distributions are reducing start-up costs to near zero.

Somewhat newer to the market but showing exceptional promise is Flash-as-Memory-Extension (FaME), says SiliconANGLE’s Bert Latamore. By replacing the relatively slow SAN architecture with an all-PCIe bus and switch configuration, enterprises gain the speed advantages of solid state with none of the design complexity and operational hassles of current storage networking architectures. The approach also outclasses many emerging all-Flash and hybrid storage solutions, such as in-memory and distributed-node configurations, by closely aligning compute and storage resources and by using data reduction and snapshots to lower costs and improve resilience.

Despite these advances, there is not likely to be a single storage solution for all use cases, says PC Connection’s Kurt Hildebrand. The only thing we can say for certain at this point is that storage performance will be measured less by capacity and warehousing capabilities and more by flexibility and intelligent management to ensure that the appropriate resources are made available to varying workloads. To that end, expect to see a mix of all-Flash and hybrid arrays and automated storage tiering that can assign workloads more quickly and accurately to the optimal medium. And the caveat to this, of course, is that the lowest-cost solution will not always be the best choice, especially if it provides diminished performance for increasingly complex data sets.
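The automated tiering Hildebrand describes boils down to a policy engine that matches each workload to the cheapest medium that still meets its performance needs. The sketch below is purely illustrative: the tier names, IOPS and latency figures, and per-gigabyte prices are hypothetical assumptions, not vendor specifications.

```python
# Illustrative tiering policy: assign each workload to the lowest-cost tier
# that satisfies both its IOPS and latency requirements.
# All tier specs below are hypothetical assumptions, not vendor figures.

TIERS = [  # ordered cheapest-first: (name, max_iops, latency_ms, $/GB/month)
    ("capacity-disk", 500,     10.0, 0.02),
    ("hybrid",        20_000,   2.0, 0.08),
    ("all-flash",     200_000,  0.2, 0.25),
]

def assign_tier(required_iops, max_latency_ms):
    """Return the lowest-cost tier meeting both requirements."""
    for name, iops, latency, _cost in TIERS:
        if iops >= required_iops and latency <= max_latency_ms:
            return name
    return "all-flash"  # fall back to the fastest tier

print(assign_tier(300, 15))      # archival workload -> capacity-disk
print(assign_tier(50_000, 1.0))  # OLTP workload -> all-flash
```

Note that the caveat in the paragraph above is baked into the loop order: the policy deliberately prefers the cheapest qualifying tier, which is only the right choice if the performance thresholds accurately describe the workload.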


This applies to Flash storage just as much as hard disk. As computing.com’s John Leonard noted recently, simply swapping out old disk drives for newer solid-state devices does not guarantee optimal storage performance. With each deployment, the enterprise needs to assess compatibility with legacy infrastructure, as well as changes to tiering, caching and other management functions. And with Flash still commanding a premium over disk, there is always the danger of storage overkill if data loads migrate onto high-performance tiers even though they perform perfectly well on slower, less expensive infrastructure.
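The cost of that "storage overkill" is easy to estimate as a back-of-the-envelope exercise: it is simply the per-gigabyte premium of flash over disk, multiplied by the volume of cold data parked on the faster tier. The prices below are illustrative assumptions, not quotes.

```python
# Back-of-the-envelope sketch of the "storage overkill" cost described above:
# the monthly premium paid when data that would be fine on disk lives on flash.
# Both $/GB/month figures are hypothetical assumptions.

FLASH_PER_GB = 0.25
DISK_PER_GB = 0.02

def overkill_cost(cold_data_gb):
    """Monthly premium for keeping cold data on flash instead of disk."""
    return cold_data_gb * (FLASH_PER_GB - DISK_PER_GB)

# 50 TB of rarely accessed data parked on an all-flash array:
print(f"${overkill_cost(50_000):,.0f}/month")  # $11,500/month
```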

Although it is tempting to think that advanced data architectures will take all the guesswork and design optimization out of the storage deployment process, the fact is that a plug-and-play, set-it-and-forget-it storage solution is not, and probably never will be, in the cards.

The plethora of storage options hitting the channel will make it easier to optimize storage for key workloads, but delivering the highest level of service will still require deep knowledge of both the technology under consideration and the data needs of today’s data-driven enterprise.

Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.

May 22, 2015, 7:28 AM, Jim Bahn says:
As a company that helps IT shops test storage performance, Load DynamiX can back up your assertions. We've been brought in to help customers select a single storage solution via a bake-off, because there ARE benefits to using fewer different products. We find that different arrays often perform quite differently for different workloads. Customers often wind up selecting more than one array/vendor. The lesson is: don't rely too much on benchmarks, and test with YOUR workloads if you can.
