Optimizing the Hybrid Storage Environment

Arthur Cole

Ten Recommendations for Simplified, Intelligence-Based Storage Management

Flash storage is common in the data center these days, but aside from a few test cases, it is likely to share responsibilities with magnetic disk for a while longer.

This is clearly a benefit to overall data operations, but it is not without problems. Flash itself is faster and more versatile than disk, but these advantages are diminished without supporting architectures optimized for top-tier performance. So unless the data center is geared toward all-Flash operation, IT will need to mask the differences between Flash and disk with complex, sometimes latency-inducing, software architecture.

Part of this is bound to incorporate the newest darling on the storage interface level: Non-Volatile Memory Express (NVMe). As explained by Enterprise Storage Forum’s Drew Robb, NVMe is a storage protocol that runs over the PCIe bus and is specifically optimized for non-volatile memory. As such, it is showing at least a six-fold improvement in both random and sequential read/write performance compared to SATA, for both hard disk and solid state drives. Still, it is unlikely that anything other than a greenfield deployment will benefit from end-to-end NVMe support, and even organizations that pursue a gradual upgrade strategy will likely pay a premium for devices with NVMe-compatible connectors.

Companies like Pure Storage, of course, dismiss the idea of mixed media environments in favor of all-Flash solutions. But even here, the enterprise can still run into trouble with applications and software that are not optimized for high-speed storage. This is why the company is working with leading developers like Microsoft and Citrix to ensure that those who deploy all-Flash solutions like the FlashStack Converged Infrastructure (CI) gain top performance up and down the stack. Under the FlashStack CI Accreditation program, products like SQL Server and XenDesktop can now be easily integrated into the FlashArray platform, which itself has already seen similar integration with Cisco’s UCS blade servers and Nexus switches, as well as vSphere and the Horizon 6 platform.

Without these kinds of industry agreements, however, the enterprise will likely have to deal with the disconnect that most software platforms experience with high-speed Flash. Multiple generations of software are based on “volatile, expensive RAM and persistent, cheap disk,” says GigaSpaces’ DeWayne Filppi, so even in hybrid environments where Flash imitates disk at the file system interface, there will be either a performance cost or increased architectural complexity to accommodate native Flash APIs. Development of a portable Flash API and a universal object mapping layer, or even a new middle/processing tier, may provide the best of both worlds, but these solutions are still on the drawing board.
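To make the "universal object mapping layer" idea concrete, here is a minimal sketch of what such an abstraction might look like: one put/get API in front of interchangeable flash and disk backends, so application code is never written against either medium directly. All class and method names here are hypothetical illustrations; no such standard API exists yet, and the backends are stand-ins rather than real device drivers.

```python
from abc import ABC, abstractmethod


class StorageBackend(ABC):
    """Common interface that any storage medium must implement."""

    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def read(self, key: str) -> bytes: ...


class FlashBackend(StorageBackend):
    """Stand-in for a native flash API (e.g., a key/value flash translation layer)."""

    def __init__(self):
        self._store = {}

    def write(self, key: str, data: bytes) -> None:
        self._store[key] = data

    def read(self, key: str) -> bytes:
        return self._store[key]


class DiskBackend(StorageBackend):
    """Stand-in for a conventional file-system-backed store."""

    def __init__(self):
        self._store = {}

    def write(self, key: str, data: bytes) -> None:
        self._store[key] = data

    def read(self, key: str) -> bytes:
        return self._store[key]


class ObjectMapper:
    """The single API the application codes against; the backend is swappable."""

    def __init__(self, backend: StorageBackend):
        self.backend = backend

    def put(self, key: str, obj: bytes) -> None:
        self.backend.write(key, obj)

    def get(self, key: str) -> bytes:
        return self.backend.read(key)
```

In this arrangement, swapping `DiskBackend` for `FlashBackend` requires no changes to application code, which is precisely the portability Filppi describes as missing from today's software stacks.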

Unfortunately, the need for storage solutions that are both fast and scalable is upon us right now, says Datanami’s Alex Woodie. With petabyte-scale analytics platforms like Hadoop ready to make sense of Big Data and the Internet of Things, enterprises of all sizes are under the gun to boost their storage capabilities or risk being left behind in the new digital economy. That means either placing the entire data footprint on a Flash footing or employing a hybrid solution and advanced tiering constructs to first analyze the data and then funnel it to the appropriate medium. There are vendors galore who can provide these solutions, but the ultimate decision will fall on the enterprise.
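The "advanced tiering constructs" mentioned above can be illustrated with a small, hypothetical sketch: a policy that counts recent accesses within a sliding time window and places hot objects on flash while cold ones stay on disk. The thresholds, class names, and window size are illustrative assumptions, not taken from any vendor product.

```python
import time
from dataclasses import dataclass, field


@dataclass
class StorageObject:
    name: str
    tier: str = "disk"                     # current placement: "flash" or "disk"
    access_times: list = field(default_factory=list)


class TieringPolicy:
    """Place objects on flash when recent access frequency crosses a threshold."""

    def __init__(self, hot_threshold: int = 10, window_seconds: int = 3600):
        self.hot_threshold = hot_threshold  # accesses per window that make an object "hot"
        self.window = window_seconds        # sliding window length in seconds

    def record_access(self, obj: StorageObject, now: float = None) -> None:
        now = time.time() if now is None else now
        obj.access_times.append(now)
        # Drop accesses that have aged out of the sliding window.
        cutoff = now - self.window
        obj.access_times = [t for t in obj.access_times if t >= cutoff]

    def place(self, obj: StorageObject) -> str:
        obj.tier = "flash" if len(obj.access_times) >= self.hot_threshold else "disk"
        return obj.tier
```

Real tiering engines weigh far more signals (I/O size, sequentiality, SLA tags), but the basic loop is the same: observe the workload first, then funnel each object to the medium that earns its cost.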

The debate over the efficacy of Flash, disk or hybrid is likely to go on for some time. Indeed, with the amount of Big Data expected by the end of the decade, many are saying that tape is the only truly cost-effective solution.


One thing is clear, however: Storage is no longer about capacity alone. Speed, agility and operational flexibility are equally important, which means Flash will continue to play a prominent role in the emerging data ecosystem. The challenge will be to make sure that the rest of the stack can take full advantage of all that Flash has to offer without pushing earlier forms of storage out the door before their value to the full spectrum of data operations is depleted.

Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.



Jun 22, 2015, PerfMan3 says:
The biggest question related to the deployment of flash and eventually NVMe is which production workloads truly justify the use (and price) of such high-performance storage technologies. This is where tools like Load Dynamix come in, as their workload modeling and load generation products are designed to help storage engineers/architects answer such questions. Solid state storage should only be deployed where it is truly justified from a cost/performance perspective.

