From the moment the first enterprise-class Flash device hit the channel, it was obvious the technology would have a dramatic impact on professional storage environments. But while so far the game has mostly been swapping out short-stroked hard disk drives for more nimble SSDs, the market has taken a distinct turn toward all-Flash arrays of late.
Research and Markets, for one, says that the enterprise Flash storage sector has nowhere to go but up. The group expects today's $500 million market to top $1.6 billion as early as 2016, an annual growth rate of roughly 60 percent. The driver, of course, is the massive amount of data being generated by countless devices, plus the need to move large volumes quickly and easily across increasingly diverse data infrastructures. At the same time, enterprise-class features like replication and backup remain top priorities – hence the need for new classes of Flash storage devices and integrated systems.
In fact, if you look at recent market activity, you'll note that it is trending away from component-level solutions and toward all-Flash arrays, says CRN's Joseph Kovar. Violin Memory, for example, recently shed its PCIe Flash storage business to Korean chipmaker SK Hynix, a move designed to let Violin focus on all-Flash systems. At the same time, Seagate is buying the PCIe and SSD controller units of the former LSI Corp. from its new owner, Singapore's Avago Technologies, which should allow Seagate to develop high-speed array infrastructure. While this may seem incongruous – one company shedding PCIe assets while another acquires them in order to develop all-Flash arrays – Kovar says both moves are calculated to play to each company's respective strengths as the market for discrete Flash components gives way to more integrated platforms.
Meanwhile, Flash developers have not lost sight of the need for massive scale. Kaminario recently launched the latest version of the K2 array aimed at primary storage applications. The system is designed to provide scale-up and scale-out capabilities that make it suitable for use with both Big Data sets and large-volume, Web-facing workloads. At the same time, it enables key enterprise functions like inline deduplication and compression, snapshot-based replication and variable block-size algorithm support.
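To make the inline deduplication idea concrete, here is a toy sketch of block-level dedupe: data is split into fixed-size blocks, each block is hashed, and only one copy of each unique block is stored. The 4 KB block size and SHA-256 hashing are illustrative assumptions; production arrays such as Kaminario's use variable block sizes and far more sophisticated metadata handling.

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and keep one copy of each
    unique block, recording the layout as an ordered list of hashes."""
    store = {}    # content hash -> unique block payload
    layout = []   # ordered hashes needed to reconstruct the data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # duplicates add no new storage
        layout.append(digest)
    return store, layout

def rehydrate(store: dict, layout: list) -> bytes:
    """Reassemble the original byte stream from the dedup store."""
    return b"".join(store[digest] for digest in layout)
```

A stream of three identical 4 KB blocks plus one distinct block would be stored as just two unique blocks, which is the storage saving (and write reduction) that inline dedupe delivers.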
And supporters of the all-Flash data center say development has only just begun. Pure Storage’s Vaughn Stewart, for one, points to new software that will greatly alleviate Flash’s main weakness: too many writes wearing out the medium. With tools like real-time data reduction, Flash’s reliability and durability will only increase. And by extension, costs will decrease, because Flash drives and the arrays that house them will have longer lifespans. Consider, too, that most applications will no longer need to be coded to accommodate the latency of hard disks, which will improve the end-user experience dramatically.
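The endurance argument lends itself to a back-of-the-envelope calculation: if data reduction shrinks the writes that actually land on the NAND, drive life extends proportionally. The function below is a simplified sketch; all figures (endurance in terabytes written, daily write volume, reduction ratio) are hypothetical inputs, and real-world endurance also depends on write amplification inside the drive.

```python
def flash_lifetime_years(endurance_tbw: float,
                         daily_logical_writes_tb: float,
                         reduction_ratio: float = 1.0) -> float:
    """Rough endurance estimate: inline data reduction (dedupe plus
    compression) divides the logical write volume by the reduction
    ratio before it reaches the NAND, stretching drive life."""
    physical_writes_per_day = daily_logical_writes_tb / reduction_ratio
    return endurance_tbw / physical_writes_per_day / 365
```

Under these assumptions, a drive rated for 3,650 TB written that absorbs 10 TB of logical writes per day lasts about a year with no reduction, and about two years with a 2:1 reduction ratio.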
Despite these gains, however, I still don’t see the end of disk storage, or even tape for that matter, any time soon. Any software upgrades to improve data flow and handling in the Flash array will also benefit the hard drive, and the simple fact is that not every application under the sun requires lightning speed or unlimited scale. Putting the entire enterprise load on Flash would be akin to allowing the entire federal workforce to commute aboard Air Force One.
With advanced tiering software now readily available and the ability to tailor virtual server and networking resources to suit application or even individual user needs, the benefits of a mixed storage environment are clear. The enterprise has an unprecedented opportunity to deliver the right storage to the right workload at the right time and, most importantly, at the right cost.
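The tiering logic described above can be sketched as a simple placement policy: frequently accessed data goes to Flash, colder data to disk. The access-rate metric and threshold here are hypothetical stand-ins; commercial tiering software tracks detailed I/O heat maps and migrates data automatically.

```python
def tier_placement(objects: dict, hot_threshold: int = 100) -> dict:
    """Toy tiering policy: map each data set (name -> accesses per hour,
    an illustrative metric) to a storage tier. Hot data lands on Flash;
    everything else stays on cheaper disk."""
    return {
        name: "flash" if access_rate >= hot_threshold else "disk"
        for name, access_rate in objects.items()
    }
```

For example, a heavily hit OLTP database would be placed on Flash while a rarely touched archive stays on disk, delivering the right storage at the right cost.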