The all-Flash data center—it used to be considered something of a pipe dream. While solid-state storage has its uses, both costs and the complexity of modern data environments seem to demand mixed storage architectures for the time being. But as costs come down, more storage experts are looking at all-Flash, or perhaps Flash-dominant, storage environments.
Storage has always been the laggard in the data-handling relay race, but recently the disparity has become stark. As virtual and cloud environments shift the burden away from processing power and even storage capacity, speed has become the determining factor in high-performance environments. According to Kaminario, more than 90 percent of the performance issues afflicting leading applications these days can be traced to storage. Whether it is web-facing OLTP or Big Data OLAP batches, the I/O culprit is almost always poor random read/write performance in legacy HDD arrays. The results were largely the same across Oracle, SQL, DB2, MySQL and even unstructured data sets.
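The random-versus-sequential gap described above is easy to demonstrate. The sketch below (file size, block size, and method are illustrative assumptions, not drawn from any vendor benchmark) times 4 KB reads in sequential and shuffled order; on a spinning disk the shuffled pass is dramatically slower because every out-of-order read pays seek latency, while on Flash the two passes largely converge. Note that the OS page cache can mask the effect on a recently written file, so treat this as a demonstration of access patterns rather than a rigorous benchmark.

```python
import os
import random
import tempfile
import time

BLOCK = 4096          # 4 KB read size, typical of OLTP-style random I/O
BLOCKS = 2048         # 8 MB test file

# Create a throwaway test file filled with random bytes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * BLOCKS))
    path = f.name

def timed_reads(offsets):
    """Read one block at each offset, returning elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as fh:
        for off in offsets:
            fh.seek(off)
            fh.read(BLOCK)
    return time.perf_counter() - start

sequential = [i * BLOCK for i in range(BLOCKS)]
shuffled = sequential[:]
random.shuffle(shuffled)  # same blocks, random order

t_seq = timed_reads(sequential)
t_rand = timed_reads(shuffled)
print(f"sequential: {t_seq:.4f}s  random: {t_rand:.4f}s")

os.remove(path)
```

On legacy HDD arrays the ratio between the two timings can reach two orders of magnitude, which is exactly the gap that all-Flash arrays are positioned to close.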
Still, solid-state storage costs more per GB than hard disks and therefore will have a long life supporting non-time-sensitive workloads, right? Well, even that assumption is starting to fall apart when you take the entire storage environment into consideration. As Whiptail’s Darren Williams points out, Flash represents the next great advancement in energy reduction in the data center now that server environments are largely virtualized. As the industry transitions from smaller, enterprise-based data infrastructure to regional, hyperscale cloud facilities, volume Flash deployments may finally bring the price below that of hard disks in both CAPEX and OPEX. And if infrastructure in these behemoths becomes as modular as expected, much of the complex networking surrounding current storage arrays may fall by the wayside as well.
It is also likely that Flash storage is about to get even faster. A company called Diablo Technologies released its Memory Channel Storage (MCS) architecture, which allows Flash modules to access the CPU via existing memory channels. The company claims MCS provides an 85 percent latency reduction compared with traditional PCIe approaches, pointing toward terabit-level, on-server memory. The design is also highly resistant to I/O bottlenecks because most memory channel architectures are designed for parallel DIMM access to begin with. Plus, CPU performance is enhanced because there is no need for bus management.
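The claimed 85 percent figure is easy to put in perspective with back-of-the-envelope arithmetic. The PCIe baseline below is an assumed round number for illustration only, not a published specification from Diablo or any vendor:

```python
# Assumed baseline for a PCIe Flash access, in microseconds (illustrative only).
pcie_latency_us = 50.0

# Diablo's claimed reduction for MCS relative to PCIe.
reduction = 0.85

# Effective MCS latency under these assumptions.
mcs_latency_us = pcie_latency_us * (1 - reduction)
print(mcs_latency_us)  # 7.5 microseconds under the assumed baseline
```

Whatever the true baseline, cutting latency by that proportion moves Flash access much closer to memory-channel timescales than to disk ones.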
Given the recent developments in Flash, the question for most enterprises is not whether to deploy Flash, but how. As Storage Switzerland’s George Crump points out, the gamut runs from standard SSDs in traditional disk storage arrays, to new generations of all-Flash arrays, to advanced on-server and in-line caching and memory approaches. Each solution presents its own pluses and minuses, and both can be magnified by the data sets a solution is expected to handle. The rapid pace of acquisitions and partnership agreements between and among the leading industry players doesn’t make deployment decisions any easier, either.
It may take a while for today’s crop of facilities to get there, but as new generations of cloud providers and data-intensive enterprises build capacity from scratch, it will be hard for companies to ignore Flash as an option for much longer.
After all, if both servers and networks are running at multiple Gbps levels, does it make sense to saddle them with slow-poke storage?