SSDs: A New Era in Throughput

Arthur Cole

It was indeed fortuitous that enterprise-class SSDs came along just in time to accommodate IT's need for incredibly high throughput brought on by virtualization and the cloud. But as they say in show business, you ain't seen nothin' yet.

Indications are that the I/O possibilities of solid-state technologies are only just beginning, and an era of lightning-fast data transmission is in the making. Ironic, isn't it, that storage was always seen as the weak link in the data-networking chain, but now seems poised to meet, or dare I say exceed, the capability of many networking devices?

Our first bit of evidence comes from PMC-Sierra this week. That company has introduced a new RAID-adapter system, the MaxRAID BR5225-80, that uses the PCI-Express bus to link up to eight SAS/SATA ports for a combined throughput of 300,000 IOPS. Multiple adapters working in tandem, through a multi-threaded process and a new RAID-on-chip controller, will bump that number to more than a million IOPS, the company said. PMC-Sierra is already working on an integration deal with IBM, most likely putting the system on the new eX5 line.

This comes at a time when PMC-Sierra is on the verge of taking over Adaptec's channel storage unit, which would include that company's RAID systems and SSD caching technology. That would provide access to the new MaxIQ storage controllers and the 64 GB cache performance system, which offers I/O data analysis as well as configuration management and monitoring for rapid scale-up operations.


There is also a lot of action taking place on the storage array itself. Nimbus Data Systems is out with the S-class array that delivers an impressive 500,000 IOPS and 40 Gbps throughput for as little as $25,000. It also has the ability to scale up to 100 TB (or more with inline dedupe and compression) and 1.3 million IOPS. The system is outfitted with Micron eMLC flash units, which the company says offer greater reliability than standard MLC, and packs 24 blades in a 2U form factor connected via 6G SAS, quad 10 GbE or auto-negotiating single GbE ports, plus a dedicated WAN port.

Not all forms of data require the fastest of the fast in terms of networking and storage technology. But as business activity picks up, the need for high-speed environments will likely gain as well. And with SSDs in the mix, storage is not the slowest runner on the track anymore.

May 19, 2010 5:32 PM Dave says:

What about throughput as measured in MB/s?  How do you see these devices comparing with high speed FC/SAS drives?

May 20, 2010 9:04 AM Arthur Cole says: in response to Dave

It's tough to compare IOPS vs. MB/s, considering one is a measurement of I/O operations taking place and the other is a measurement of data being transferred.

In general, however, I think you'll find SSDs will outpace any form of hard disk technology, regardless of protocol.
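To put some numbers on why the two metrics don't map cleanly onto each other: throughput is just IOPS multiplied by the I/O block size, so the same IOPS figure yields very different MB/s depending on workload. A quick sketch, using illustrative block sizes (the vendors don't state them):

```python
# Throughput (MB/s) = IOPS x block size. The block sizes below are
# illustrative assumptions, not figures from any vendor spec sheet.

def iops_to_mb_per_s(iops, block_size_kb):
    """Convert an IOPS figure to MB/s for a given I/O block size."""
    return iops * block_size_kb / 1024

# The same 500,000 IOPS looks very different at different block sizes:
for block_kb in (4, 64):
    mbps = iops_to_mb_per_s(500_000, block_kb)
    print(f"500,000 IOPS @ {block_kb} KB blocks = {mbps:,.0f} MB/s")
```

This is why a small-block transactional workload and a large-block streaming workload can post the same IOPS number yet stress the network very differently.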

Jul 9, 2010 6:25 PM gunslinnger says: in response to Dave

If it can offer 4x 10G interfaces, that is equivalent to roughly 4,000 MB/s of available throughput.

The bigger question is, can it actually push that much throughput? Normally, sequential operations like backups and streaming video are the apps that require this much bandwidth. Most systems would be constrained by what the magnetic media is able to handle. In this case, with all SSD drives aggregated, I would suspect that the bottleneck would be somewhere in either the processor or the front-side bus leading to the 10G Ethernet NICs...how much bandwidth can the bus handle? I don't know.
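The 4,000 MB/s figure falls out of a back-of-the-envelope conversion from line rate, assuming roughly 80 percent of raw bandwidth is usable after protocol overhead (an assumption; real-world efficiency varies):

```python
# Usable aggregate bandwidth of quad 10 GbE, assuming ~80% of line rate
# survives protocol overhead (an assumed efficiency, not a measured one).
links = 4
line_rate_gbps = 10
efficiency = 0.8

raw_mb_per_s = links * line_rate_gbps * 1000 / 8   # bits -> bytes
usable_mb_per_s = raw_mb_per_s * efficiency
print(f"Raw: {raw_mb_per_s:,.0f} MB/s, usable: ~{usable_mb_per_s:,.0f} MB/s")
```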

But I would agree, Arthur. Whatever the limitation of the proc or the bus or the protocol, your application won't be waiting on the disk.

Not to mention that $25,000 for 500,000 IOPS = $0.05 per IOPS. That is a fantastic number in a storage industry where the typical cost is $3 - $6 per IOPS, and even more in large enterprise arrays.
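For anyone checking the arithmetic on the figures quoted above:

```python
# Cost per IOPS for the Nimbus S-class figures quoted in the article.
price_usd = 25_000
iops = 500_000

cost_per_iops = price_usd / iops
print(f"${cost_per_iops:.2f} per IOPS")  # five cents per IOPS
```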

All I can say is, thank God for Gordon Moore.

