In Pursuit of Faster Storage

Arthur Cole

One of the more overlooked aspects of high-speed network fabrics is that even with throughputs of 8 Gbps, 10 Gbps and beyond, overall performance is still limited by the I/O capability of the storage media behind them.


And when it comes to magnetic media in particular, the continued focus on increased capacity overshadows the fact that transfer speeds are not keeping pace with what's happening on the network.
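
To put rough numbers on that gap, here is a back-of-the-envelope sketch in Python. The link and drive figures are assumptions (a typical 15K RPM Fibre Channel disk and an 8 Gbps link after encoding overhead), not measurements of any particular product; the point is simply how many spindles it takes to keep a fast link busy under sequential versus random workloads.

```python
# Back-of-the-envelope comparison of network line rate vs. disk throughput.
# All figures below are assumptions for a typical 15K RPM Fibre Channel
# disk and an 8 Gbps link, not specs for any particular product.

LINK_PAYLOAD_MBPS = 800        # ~usable payload of an 8 Gbps FC link (assumed)
DISK_SEQ_MBPS = 150            # sustained sequential transfer per drive (assumed)
DISK_RANDOM_IOPS = 180         # small-block random IOPS per drive (assumed)
IO_SIZE_KB = 8                 # assumed I/O size for the random workload

# Sequential streaming: a handful of drives saturates the link.
seq_drives = LINK_PAYLOAD_MBPS / DISK_SEQ_MBPS

# Small-block random I/O: each drive moves far less data per second.
random_mbps_per_drive = DISK_RANDOM_IOPS * IO_SIZE_KB / 1024
random_drives = LINK_PAYLOAD_MBPS / random_mbps_per_drive

print(f"Sequential: ~{seq_drives:.0f} drives to fill the link")
print(f"Random {IO_SIZE_KB} KB I/O: ~{random_drives:.0f} drives to fill the link")
```

Under those assumptions, a half-dozen drives can fill the link with sequential traffic, but a random workload would need hundreds of spindles to do the same.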


This is partly the reason for the growing interest in Flash technology, which is rapidly propelling the solid state drive (SSD) from the realm of laptops and PDAs to enterprise-class servers and storage systems.


STEC is quickly emerging as a leader in SSDs for the enterprise, having partnered with EMC to supply Flash drives for the Symmetrix and CLARiiON systems. The company claims its drives deliver 30 times more IOPS than Fibre Channel-based hard drives, and that performance can be boosted even further by deploying its SSDs in the server as a caching tier between local DRAM and the hard disk.
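
That cache-tier arrangement is easy to picture in code. Below is a minimal read-through sketch in Python; the class and the tier directories are hypothetical illustrations of the general idea, not STEC's implementation. Check DRAM first, fall back to the SSD, and only touch the mechanical disk on a full miss, promoting data upward along the way.

```python
from pathlib import Path

class TieredReadCache:
    """Minimal read-through cache sketch: DRAM -> SSD -> hard disk.

    Purely illustrative; the tier directories are hypothetical mount
    points, not the layout of any actual product.
    """

    def __init__(self, ssd_dir, disk_dir, ram_entries=1024):
        self.ram = {}                    # "DRAM" tier: in-process dict of name -> bytes
        self.ram_entries = ram_entries   # cap on the in-memory tier
        self.ssd = Path(ssd_dir)         # e.g. a filesystem on the SSD
        self.disk = Path(disk_dir)       # e.g. a filesystem on the hard disk

    def read(self, name):
        # 1. DRAM hit: fastest path, no device I/O at all.
        if name in self.ram:
            return self.ram[name]

        # 2. SSD hit: far lower latency than a seek on the spinning disk.
        ssd_copy = self.ssd / name
        if ssd_copy.exists():
            data = ssd_copy.read_bytes()
        else:
            # 3. Miss in both caches: read from disk, promote to the SSD tier.
            data = (self.disk / name).read_bytes()
            ssd_copy.write_bytes(data)

        # Promote to DRAM, evicting an arbitrary entry if the tier is full.
        if len(self.ram) >= self.ram_entries:
            self.ram.pop(next(iter(self.ram)))
        self.ram[name] = data
        return data
```

A real controller would also handle writes, coherence and a proper eviction policy; the point is only that the SSD sits as an intermediate tier between memory and the mechanical disk.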


Imation is stepping up to the plate as well, introducing the new PRO 7000 series of solid state drives, which uses a Parallel ATA (PATA) interface, virtually eliminates seek time, and delivers upwards of 83,000 IOPS with sustained read/write speeds of 130/120 MBps. That's a key development, considering that write speeds have traditionally lagged far behind read speeds.
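
IOPS and sustained MBps measure different things, which is why both figures get quoted. The snippet below shows how an IOPS number translates into raw throughput once you assume a transfer size; the block sizes here are my own assumptions, since published IOPS figures rarely specify them.

```python
def iops_to_mbps(iops, block_bytes):
    """Throughput implied by an IOPS figure at a given transfer size."""
    return iops * block_bytes / 1_000_000

# Assumed transfer sizes, for illustration only:
print(iops_to_mbps(83_000, 512))    # ~42 MB/s if the I/Os are 512-byte transfers
print(iops_to_mbps(83_000, 4096))   # ~340 MB/s if they are 4 KB transfers
```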


However, nobody is pushing the throughput envelope quite as much as Texas Memory Systems, which recently unveiled the RamSan-440 that maintains up to 600,000 IOPS across eight 4 Gbps Fibre Channel ports. The device is available with up to 512 GB of DDR2 RAM and provides a sustained read/write bandwidth of 4 GBps.
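
For a sense of scale, here is a quick sketch of the per-port arithmetic. It assumes a 4 Gbps Fibre Channel port carries roughly 400 MBps of payload in each direction after encoding overhead, which is a rule-of-thumb figure rather than anything from the vendor.

```python
PORTS = 8
PAYLOAD_MBPS_PER_PORT = 400      # assumed one-way payload of a 4 Gbps FC port
TOTAL_IOPS = 600_000             # the RamSan-440's quoted IOPS figure

one_way_mbps = PORTS * PAYLOAD_MBPS_PER_PORT    # ~3,200 MBps in one direction
full_duplex_mbps = 2 * one_way_mbps             # ~6,400 MBps reads and writes combined
iops_per_port = TOTAL_IOPS / PORTS              # ~75,000 IOPS per port

print(one_way_mbps, full_duplex_mbps, iops_per_port)
```

Under those assumptions, sustaining 4 GBps of combined read/write traffic means running the eight ports at well over half of their aggregate full-duplex capacity.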


As advanced as these solutions might be, they may soon be outclassed by a new generation of memory appliances that are pushing throughput to unheard-of levels. Violin Memory is garnering rave reviews for its Violin 1010 platform, said to be able to push 1 million IOPS over a single interface, with the possibility of 3 million IOPS already in sight. The system has been demonstrated with all major Linux distributions, as well as 32- and 64-bit Windows and OpenSolaris, using standard PCIe bus technology.


Virtualization is likely to be the main cause of I/O bottlenecks in the coming years, and there's no shortage of network solutions seeking to overcome the problem. But networks are only as fast as their slowest components, which means the speed of your storage is quickly becoming just as important as its capacity.

Comments
Aug 12, 2008 8:25 AM, Matt Simmons says:
While I agree 100% that these new technologies are going to lead to much faster access times, when you're talking about 8G FC or 10G Ethernet, the transfer speed of the network is still the bottleneck when you get the right number of spindles. Nice article, I'm going to have to check out these technologies. Thanks!
