Is Flash Ready to Step Up to the Primary Storage Plate?

Arthur Cole

These days, hardly a data center on the planet has not deployed Flash storage in one fashion or another. The vast majority are using it as a high-speed cache to support their most time-critical applications while keeping the bulk of primary storage applications safely aboard spinning disks.

The Flash industry, of course, has no intention of playing a vital, but still subordinate, role in the data center. It has long touted the technology's credentials as a primary storage solution. But is the enterprise ready for Tier 1 Flash solutions? And if not, is there a way for primary Flash solutions to take the lead in advanced storage architectures without the enterprise's help?

At the moment, IBM estimates that Flash accounts for about 10 percent of overall enterprise storage spending, but this is enough to justify the company's $1 billion stake in the technology. The company is betting big that Flash will account for about 20 percent of the primary storage market alone within the next few years. As company execs pointed out to IT Jungle recently, few enterprises will become all-Flash shops, but primary storage will likely be a growth area considering that is where Flash can have the greatest impact on data operations. To that end, the company is adding a slew of advanced enterprise features, such as compression, RAID protection and scale-up/scale-out capability, to products like the V840 system.

Indeed, an all-Flash array on the top tier can dramatically improve performance, from the 4,000 or so IOPS of a hard-disk system to more than a million, says storage consultant Jim O'Reilly. Combine this with auto-tiering, deduplication, cloning and a host of other advanced features, and you not only gain the ability to boost critical workloads but can also lower the overall cost of storage by provisioning only what is needed. The biggest problem when integrating Flash into legacy environments, in fact, is that it often overwhelms related devices like storage controllers and network switches and routers. Still, the number of use cases for Flash in primary settings is increasing, from general workload support to specialty applications like virtual desktops and video editing.
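To put those IOPS figures in perspective, here is a minimal back-of-envelope sketch. The 4,000 and one-million IOPS numbers come from the article; the 10-million-request burst is a purely hypothetical workload chosen for illustration, not a benchmark.

```python
# Illustrative only: IOPS figures are taken from the article above,
# and the burst size is a hypothetical workload, not measured data.
HDD_IOPS = 4_000        # hard-disk tier, per the article
FLASH_IOPS = 1_000_000  # all-Flash array, per the article

def seconds_to_serve(requests: int, iops: int) -> float:
    """Time to service a burst of random I/O requests at a given IOPS rate."""
    return requests / iops

burst = 10_000_000  # hypothetical burst of 10 million random reads

hdd_time = seconds_to_serve(burst, HDD_IOPS)      # 2500.0 s (~42 minutes)
flash_time = seconds_to_serve(burst, FLASH_IOPS)  # 10.0 s

print(f"HDD: {hdd_time:.0f} s, Flash: {flash_time:.0f} s "
      f"({hdd_time / flash_time:.0f}x faster)")
```

The roughly 250x gap in service time is why latency-sensitive workloads move to Flash first, even though it remains a small slice of total storage spending.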

Flash is also making inroads in new hyperscale and hyperconverged platforms, leading to the very real possibility that it will emerge as the top dog in cloud and colocation settings even if the enterprise itself chooses to leverage its traditional media a while longer. Nutanix, for one, recently added hyperconvergence and long-haul clustering to its Virtual Computing Platform in the form of the NX-9000 appliance. The device offers linear, node-by-node scalability and localized storage that reduces network complexity and latency, while at the same time offering varied I/O sizing to adjust to multiple workload requirements. And the Metro Availability feature allows a single data store to be distributed as far as 400 km, providing synchronous mirroring and other tools to enhance failover, disaster recovery and other functions.

Meanwhile, Pure Storage is leveraging hyperconvergence as a means to get beyond mere replacement of disk drives in the enterprise to provide primary storage for the next-generation data ecosystem, according to Extreme Tech's Timothy Prickett Morgan. The company has always maintained that it is not interested in becoming the "Flash tier" of a mixed storage environment, but instead has been working diligently to achieve parity with magnetic media in key areas like compression, snapshot capability and, most importantly, cost. By shifting the focus from replacing legacy disk arrays to crafting next-generation cloud and webscale environments, Pure Storage is setting itself up to take on commodity server hardware and might even have something to say about how database and file system software is designed.

If the strategies of Pure Storage, Nutanix and other Flash vendors pan out, it will have repercussions far beyond mere storage. It would be a sign that the traditional enterprise is no longer calling the shots when it comes to developing and deploying next-generation data architectures.

This is not necessarily a bad thing for the enterprise, mind you, because it would allow it to shift resources away from data infrastructure and toward the business activities that actually make money. But it does mean that enterprise IT will no longer be in control of its own destiny, having to take the resources and services that are offered rather than crafting them from the ground up to suit its own ends.

This is a significant shift in the enterprise-infrastructure relationship, and one that should be considered carefully by those whose job it is to maintain data services for the modern workforce.

Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.
