Marvell Pushes Server-Side Storage Performance Envelope

Michael Vizard

Everyone knows that when it comes to reading data, a solid-state drive is lightning fast. But when it comes to writing data in, for example, a transaction-processing application, performance drops off considerably. Marvell Technology today announced that it has cracked the code on that particular problem.

Set to be shown publicly for the first time next week at the Consumer Electronics Show, the Marvell DragonFly NDRIVE, unveiled today, combines NVRAM with solid-state drives (SSDs) to create a storage system that plugs into a PCIe slot on a server. According to Shawn Kung, Marvell director of product marketing, what sets the Marvell DragonFly NDRIVE apart is that all write operations are handled by the NVRAM cache memory, delivering sustained performance that exceeds 200,000 4K random IOPS, 3GBps of throughput and sub-10us average latency.
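The general technique at work here is write-back caching: writes are absorbed by a fast, persistent buffer (the NVRAM) and pushed to the slower SSD tier in batches, so write latency is decoupled from the backing media. The following is a minimal conceptual sketch of that idea, not Marvell's actual implementation; all class and variable names are illustrative.

```python
# Conceptual sketch of write-back caching (not Marvell's implementation):
# writes land in a fast in-memory buffer and are flushed to the slower
# backing store in batches, so write latency is decoupled from the
# backing store's latency.

class WriteBackCache:
    def __init__(self, backing_store, flush_threshold=4):
        self.backing_store = backing_store      # dict standing in for SSD blocks
        self.dirty = {}                         # NVRAM-like write buffer
        self.flush_threshold = flush_threshold

    def write(self, block, data):
        """Fast path: absorb the write in the buffer."""
        self.dirty[block] = data
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def read(self, block):
        """Serve from the buffer if the block is dirty, else from the store."""
        if block in self.dirty:
            return self.dirty[block]
        return self.backing_store.get(block)

    def flush(self):
        """Slow path: push buffered writes to the backing store in one batch."""
        self.backing_store.update(self.dirty)
        self.dirty.clear()

ssd = {}  # stands in for the SSD tier
cache = WriteBackCache(ssd, flush_threshold=2)
cache.write("blk0", b"hello")          # absorbed by the buffer, no SSD write yet
data = cache.read("blk0")              # served straight from the buffer
cache.write("blk1", b"world")          # hits the threshold, triggers a batch flush
```

The hard part in a real device, as the article notes, is deciding what lives where and when to flush without stalling foreground I/O; this sketch omits all of that.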

The Marvell DragonFly NDRIVE provides up to 1.5 TB of usable SSD storage space along with access to 8GB of cache memory. Kung says that as a new generation of applications finds its way into the cloud, it has become more than apparent that more application logic needs to run in memory to make up for performance bottlenecks that stem from hard disks and network traffic.

The challenge, says Kung, has been developing the algorithms that manage where operations are performed in a way that doesn't have an adverse impact on performance. For its part, Marvell is probably fairly anxious to prove its research and development mettle after a ruling, which the company plans to appeal, that it infringed patents held by Carnegie Mellon University, with damages of roughly $1.16 billion.

Clearly, one of the simplest things an IT organization struggling with performance issues can do is throw more memory at the problem in the form of PCIe-based storage systems. Of course, it would be better if developers didn't create applications that almost inevitably produce a bottleneck. But then again, the history of enterprise computing can be defined by the number of times bottlenecks have been shifted between servers, storage and networks. So maybe the time has finally come to address the issue on the server once and for all, given that's where the application actually resides.
