    Implications of the All-Flash Data Center

    From a performance perspective, the all-Flash data center certainly makes a lot of sense. In an age when the movement of data from place to place is more important than the amount of data that can be stored or processed in any given location, high I/O in the storage array should be a top priority.

    But while no one disputes Flash’s speed advantage over disk and tape, the question remains: Does the all-Flash data center make sense for the enterprise? And if so, what impact will it have on other systems and architectures up and down the stack?

    HP recently pushed the envelope on the all-Flash data center a little further with a new lineup of arrays and services for the 3PAR StoreServ portfolio. The setup is said to improve performance, shrink the physical footprint of storage and reduce cost to about $1.50 per usable GB, roughly 25 percent less than current equivalent solutions. The company is already reporting workload performance of 3.2 million IOPS with sub-millisecond latency on its Flash drives, and the 3PAR family’s Thin Express ASIC provides a high degree of data resiliency between the StoreServ array and the ProLiant server to reduce transmission errors.
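
    As a quick sanity check on that pricing claim, here is a back-of-the-envelope sketch in Python; the only inputs are the two figures quoted above.

        # If $1.50 per usable GB is "about 25 percent less" than current
        # equivalent solutions, the implied going rate works out as follows.
        flash_cost = 1.50   # USD per usable GB (quoted above)
        discount = 0.25     # "about 25 percent less" (quoted above)
        baseline = flash_cost / (1 - discount)
        print(f"Implied current rate: ${baseline:.2f} per usable GB")  # ~$2.00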

    The emerging dominance of Flash across enterprise production environments is critical to relieving the gating factor that most arrays place on increasingly fast-paced data operations, says Wikibon’s Bert Latamore. But the impact doesn’t stop there. Many applications are written with the slowness of traditional storage infrastructure in mind, so to take full advantage of what Flash has to offer, the enterprise will need to upgrade or deploy new applications and, where Flash is simply swapped in for disk, redeploy existing storage controllers to strip out capabilities like memory caching and other traffic-handling functions that were designed to mask disk latency. It also means that Flash arrays will move closer to the server farm to reduce latency even further, and that advanced architectures like Flash as Memory Extension (FaME) will come into play.

    Flash has made tremendous gains in recent years, but as Timothy Prickett Morgan asks on ThePlatform.net, can it maintain the momentum? Specifically, will Moore’s Law allow Flash to scale enough to meet emerging virtual and cloud requirements? Pure Storage is confident it will. Its new platinum FlashArray//m series arrays are aimed at the sweet spot for both capacity and performance, so they can expand linearly with CPU deployments and thus provide consistent support for steadily expanding workloads. Over the past three years, the company has quadrupled the performance of its FlashArray and increased capacity 12-fold, and executives say there is enough headroom in the architecture to double those numbers in each of the next two years, to the point where a single rack will handle several hundred terabytes.
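
    Taken at face value, those multiples compound quickly. As a rough illustration, here is a minimal Python sketch using only the growth figures quoted above, not any vendor roadmap data.

        # Cumulative capacity multiplier implied by the figures above:
        # 12x over the past three years, then a doubling in each of the
        # next two years.
        past_growth = 12        # 12-fold capacity increase, past three years
        future_growth = 2 * 2   # two more annual doublings (projected)
        total = past_growth * future_growth
        print(f"{total}x capacity over the five-year span")  # 48x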

    This is certainly impressive, but could it be that even newer technologies are starting to paint the first rosy hue of the eventual sunset of enterprise-grade Flash? ZDNet reports that a company called Nantero recently came out of a 14-year stealth period with a technology that layers carbon nanotubes on a standard DRAM architecture to deliver a solution the company says is faster, more durable and more energy efficient than current Flash designs. The company is pursuing an ARM-style licensing strategy that would leave actual chip production to existing manufacturers. The device has been shown to work in configurations as small as 5nm – vastly smaller than Flash – and is calculated to remain non-volatile for more than 1,000 years at 85 degrees Celsius. For these and other reasons, the company says it can replace both Flash and DRAM to produce a single memory/storage tier that requires neither wear leveling nor steady refresh cycles.

    Flash is most certainly the go-to storage medium for the advanced data architectures emerging in the data center and in the cloud. It may not be the optimal solution for every application, notably long-term storage and archiving where speed is not a critical factor, but in today’s development and production environments, users want their data quickly and fully intact.

    But those who would argue that the all-Flash data center is the be-all and end-all of enterprise storage infrastructure forget that data requirements are evolving along a wide variety of parallel, and sometimes contradictory, paths. It is therefore highly unlikely that any one solution will provide optimal support for all use cases.

    Fortunately, older storage technologies are not going anywhere anytime soon, so the enterprise still has wide latitude when it comes to crafting physical infrastructure around both existing and emerging data requirements.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.

