Now that solid-state disks are making their way into the enterprise, one of the chief deployment issues is network connectivity. While the obvious answer would be to simply mount the devices in a RAID and link to servers through existing SANs, a number of solutions in the offing bypass that route using the PCI-Express bus, and they are reporting remarkable speed benefits in the bargain.
After all, SSDs' claim to fame is speed, and even though 10 Gb Ethernet or 8 Gb Fibre Channel provides a very wide pipe, PCIe offers up to 32 Gbps without suffering the additional latency of SAS and SATA protocols, a particular advantage under heavy loads. So why not go the direct route instead, where some groups are reporting more than a million IOPS and close to 3 GBps of sustained bandwidth?
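To put those pipes in perspective, here's a quick back-of-the-envelope conversion of the nominal line rates quoted above into gigabytes per second. This is a simplification: it ignores encoding and protocol overhead, so real-world throughput will come in lower, but the ceilings alone make the point.

```python
def gbps_to_gbytes_per_sec(gbps):
    """Convert a nominal line rate in gigabits/s to gigabytes/s (8 bits per byte)."""
    return gbps / 8.0

# Nominal rates from the interfaces discussed above
links = {
    "10 Gb Ethernet": 10,
    "8 Gb Fibre Channel": 8,
    "PCIe (32 Gbps)": 32,
}

for name, rate in links.items():
    print(f"{name}: {gbps_to_gbytes_per_sec(rate):.2f} GB/s ceiling")
```

The roughly 3 GBps of sustained bandwidth being reported fits comfortably under PCIe's 4 GB/s ceiling, but it exceeds what 10 Gb Ethernet (1.25 GB/s) or 8 Gb Fibre Channel (1 GB/s) could carry even in theory.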
Take Dolphin ICS, for example. The company recently released the StorExpress system, a PCIe-based design that can achieve 270,000 read/write IOPS using up to 4KB blocks and close to 2,800 MBps of sustained bandwidth. The system supports up to 4 TB in a single-rack enclosure and can be located up to 300 meters from the server farm using x8 cables.
Even when merging PCIe and protocols like SATA, the benefit is substantial. Over at Marvell Semiconductor, the company is close to releasing the 88SE9480, a quad-channel PCIe-to-SATA controller that delivers 2 Gbps of sustained throughput and 200,000 IOPS when combined with multi-level cell (MLC) NAND Flash. The device features wear-leveling technology and error-correcting code (ECC) hardware.
Marvell is also working with Smart Modular Technologies to develop a full solid-state storage system built around 8x PCIe. The XceedIOPS system is reported to deliver 140,000 random small block IOPS for as little as 25 watts. The system can be configured in two, four or eight storage nodes of 24 or 50 GB each, for a maximum capacity of 400 GB of single-level cell (SLC) Flash.
The leader so far, though, would have to be Fusion-io, which is reporting more than 1 million IOPS using its ioDrive in conjunction with IBM's SAN Volume Controller (SVC) as part of the recent Project QuickSilver. Compare that to Texas Memory Systems' recent RamSan-500 drive, which does offer a link to legacy Fibre Channel infrastructures but tops out at a maximum of 100,000 IOPS or so and about 2 GBps of sustained bandwidth.
As I said, speed is what it's all about when it comes to SSDs. Capacity is nice and network connectivity sure comes in handy. But for transactional databases and Web serving, SSDs' high throughput is tough to beat. The faster that data can get into and out of storage, the better the return on the SSD investment.