
    Marvell Pushes Server-Side Storage Performance Envelope

    Everyone knows that when it comes to reading data, a solid-state drive is lightning fast. But when it comes to writing data, in a transaction-processing application for example, performance drops off considerably. Marvell Technology today announced that it has cracked the code on that particular problem.

    The company today unveiled the Marvell DragonFly NDRIVE, which combines NVRAM with solid-state drives (SSDs) to create a storage system that plugs into a PCIe slot on a server; it will be shown publicly for the first time next week at the Consumer Electronics Show. According to Shawn Kung, Marvell director of product marketing, what sets the Marvell DragonFly NDRIVE apart is that all write operations are handled by the NVRAM cache, delivering sustained performance that exceeds 200,000 4K random IOPS, 3GBps of throughput and sub-10us average latency.

    The Marvell DragonFly NDRIVE provides up to 1.5 TB of usable SSD storage along with access to 8GB of cache memory. Kung says that as a new generation of applications finds its way into the cloud, it’s become more than apparent that more application logic needs to run in memory to make up for performance bottlenecks that stem from hard disks and network traffic.

    The challenge, says Kung, has been developing the algorithms that manage where operations are performed in a way that doesn’t wind up having an adverse impact on performance. For its part, Marvell is probably fairly anxious to prove its research and development mettle after a ruling that found the company had infringed patents held by Carnegie Mellon University, with damages of roughly $1.16 billion. Marvell plans to appeal.
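    Marvell hasn’t published those algorithms, but the general pattern the product describes, absorbing writes in fast NVRAM and destaging them to SSD later, is a classic write-back cache. A minimal sketch, with purely illustrative names and none of Marvell’s actual design:

    ```python
    # Minimal write-back cache sketch: writes land in fast "NVRAM" and are
    # acknowledged immediately; dirty blocks are flushed to the slower "SSD"
    # tier when the cache fills. All names here are hypothetical.

    class WriteBackCache:
        def __init__(self, capacity):
            self.capacity = capacity   # max dirty blocks held in NVRAM
            self.nvram = {}            # block_id -> data (dirty cache)
            self.ssd = {}              # backing store

        def write(self, block_id, data):
            # Acknowledge the write as soon as it hits NVRAM.
            self.nvram[block_id] = data
            if len(self.nvram) > self.capacity:
                self.flush()

        def read(self, block_id):
            # Serve from NVRAM first; fall back to the SSD tier.
            if block_id in self.nvram:
                return self.nvram[block_id]
            return self.ssd.get(block_id)

        def flush(self):
            # Destage all dirty blocks to the SSD tier.
            self.ssd.update(self.nvram)
            self.nvram.clear()
    ```

    The hard part Kung alludes to is exactly what this sketch glosses over: deciding when and what to destage so that flushes don’t stall incoming writes.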

    Clearly, one of the simplest things an IT organization struggling with performance issues can do is throw more memory at the problem in the form of PCIe-based storage systems. Of course, it would be better if developers didn’t create applications that almost inevitably create a bottleneck. But then again, the history of enterprise computing can be defined by the number of times bottlenecks have been shifted between servers, storage and networks. So maybe the time has finally come to address the issue once and for all on the server, given that’s where the application actually resides.

    Mike Vizard
    Michael Vizard is a seasoned IT journalist with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.
