SSDs on a PCIe SAN?

Arthur Cole

SSDs are making their way into the enterprise in a big way. At the same time, more and more PCIe solutions are being optimized for flash technology.


With that combination, how long will it be before enterprises start deploying networks of solid-state drives that are completely separate from existing hard-drive-populated SAN and NAS infrastructures?


The move toward networked SSDs is happening first, not surprisingly, at the HPC level. Companies like One Stop Systems are employing PCIe to link up new configurations of server-based GPU/SSD board combinations delivering upwards of 2.5 TB of on-server flash storage with 80 Gbps of connectivity. Using the newest six-core AMD Opterons, the company can deliver upwards of 10 TFLOPS of computing power from a 2U box.
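For a rough sense of scale, here is a minimal back-of-envelope sketch in Python. The per-box figures come from the paragraph above; the 42U rack, and the idea of filling it entirely with these 2U boxes, are illustrative assumptions of mine, not anything One Stop Systems has claimed.

# Rack-level scaling of the One Stop Systems figures quoted above.
# The 42U rack and perfectly linear scaling are assumptions for illustration only.
BOX_HEIGHT_U = 2        # 2U box
RACK_HEIGHT_U = 42      # assumed standard rack
FLASH_TB_PER_BOX = 2.5  # on-server flash per box
LINK_GBPS_PER_BOX = 80  # PCIe connectivity per box
TFLOPS_PER_BOX = 10     # compute per box

boxes = RACK_HEIGHT_U // BOX_HEIGHT_U
print("Boxes per rack:  ", boxes)                              # 21
print("Flash per rack:  ", boxes * FLASH_TB_PER_BOX, "TB")     # 52.5 TB
print("Connectivity:    ", boxes * LINK_GBPS_PER_BOX, "Gbps")  # 1680 Gbps
print("Compute per rack:", boxes * TFLOPS_PER_BOX, "TFLOPS")   # 210 TFLOPS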


Internal connectivity is one thing; creating pooled SSD storage outside of traditional storage networking is another. Even here, though, there are signs of movement.


NextIO, for one, recently signed a deal with Texas Memory Systems that matches the ICA-2800 PCIe expansion chassis with the RamSan Flash drive, delivering a virtual pool of flash storage capable of hitting 15 million IOPS per rack. That package is aimed at application and database acceleration with full error correction and RAID protection, but it's hard to see why the system could not be easily applied to raw storage needs, particularly for Web-facing, transactional environments.
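To put 15 million IOPS per rack in perspective, here is a quick comparison against a hard-drive-based array, sketched in Python. The figure of roughly 180 random IOPS per 15K RPM drive is a common rule of thumb and an assumption on my part, not a number from NextIO or Texas Memory Systems.

# How many 15K RPM hard drives it would take to match 15 million IOPS.
# The ~180 IOPS per drive figure is a rule of thumb, not a vendor number.
RACK_IOPS = 15_000_000
HDD_RANDOM_IOPS = 180

drives_needed = RACK_IOPS / HDD_RANDOM_IOPS
print("Equivalent 15K RPM drives: %.0f" % drives_needed)  # ~83,333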


Then there is Fusion-io, which recently came out with an eight-module version of its ioDrive, the ioDrive Octal, for the federal government. The device, built on a PCIe x16 card, provides 800,000 IOPS and 6 GB/s of bandwidth across up to 5 TB of storage. Using an InfiniBand fabric, Fusion-io says you can connect more than 200 Octals to deliver a whopping 1 TB/s.
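Those Octal numbers scale up quickly, at least on paper. Here is a minimal sanity check in Python using the per-card figures above; treating scaling across the fabric as perfectly linear is my own simplifying assumption, and an optimistic one.

# Aggregate figures for 200 ioDrive Octals, assuming perfectly linear scaling.
CARD_COUNT = 200              # "more than 200 Octals"
IOPS_PER_CARD = 800_000
BANDWIDTH_GBPS_PER_CARD = 6   # GB/s per card
CAPACITY_TB_PER_CARD = 5

print("Aggregate IOPS:     ", CARD_COUNT * IOPS_PER_CARD)                            # 160,000,000
print("Aggregate bandwidth:", CARD_COUNT * BANDWIDTH_GBPS_PER_CARD / 1000, "TB/s")   # 1.2 TB/s
print("Aggregate capacity: ", CARD_COUNT * CAPACITY_TB_PER_CARD, "TB")               # 1,000 TB, or 1 PB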


Will that kind of performance lead people to conclude that SAN/NAS technology is on the way out? Probably not. While there have been limited attempts to pitch PCIe as a full-blown storage networking protocol, it would be quite a stretch to imagine enterprises abandoning their current plans to consolidate storage over Ethernet in order to pursue a PCIe strategy instead.


Nevertheless, there's every reason to believe that as SSDs become more prevalent in the data center, the need to pool that storage will become paramount. That pooling is likely to take place over traditional SAN infrastructure, but how great would it be to have a complementary high-speed network that can be ready at a moment's notice for applications requiring lightning-fast I/O?

Comments
Nov 24, 2009 1:53 AM Pounce says:

Host bus adapters, network interface cards and graphics cards all use PCI Express to interface with a server. So I think it's fair to say that PCI Express is more common than, say, Fibre Channel, which connects via a Fibre Channel HBA over PCI Express to communicate with the host CPUs. Even companies like EMC have PCI Express on their SAN controllers and storage node motherboards.

