Fusion-io Pushes SSD Density Up to 6.4TB per Server

Mike Vizard

While it’s possible to run most traditional databases in memory, the rise of Big Data applications has created something of a capacity challenge for solid-state drives (SSDs).

Looking to rise to that challenge, Fusion-io today launched a new line of Atomic Series SSDs based on 20-nanometer MLC NAND memory technology that enables Fusion-io to provide access to up to 6.4TB of data per PCIe-compatible server.

Fusion-io CIO Keith Brown says 20-nanometer technology allows Fusion-io to substantially increase SSD density per server. That makes it possible to allocate more SSD storage to each application, which matters as virtual machines run more application workloads simultaneously than ever.

In addition, Brown says that Fusion-io makes use of an Adaptive Flashback approach to managing I/O, which gives IT organizations a much higher degree of fault tolerance in the event of an SSD failure.

Although there is ongoing debate over how much “hot data” to store on SSDs plugged directly into a server, the fact is that the more data that runs in Flash, the more consistent application performance becomes.

Given the amount of data that is increasingly being accessed these days, a lot of IT organizations simply don’t have the time, patience and skills required to optimize traditional magnetic storage. With the rise of Big Data applications, that task is only becoming more complicated. In contrast, relying on SSDs for primary storage means that when it comes to I/O performance, IT organizations can start to “set it and forget it.”

Comments

Jun 5, 2014 9:03 AM bbowerman says:
That is 6.4TB per "card/drive", not per "server".
