Fast Servers Need Faster Storage


I've spent a lot of time talking about storage speed this week, particularly when it comes to building cloud-based architectures. But the simple fact remains that there is still quite a gap between how fast data can move in and out of storage and how fast it can be manipulated by modern processors.

The good news is that servers that already run blazingly fast are about to get a lot faster, thanks to advanced microprocessor architectures like the new Xeon 5500s. Add to that the virtualization capabilities of VMware, Citrix and Microsoft, and the data processing side of the enterprise is heading for warp speed.

That would be all well and good, if not for the fact that all that expensive new equipment can never live up to its full potential without a dramatic increase in storage throughput. Advances in storage networking, like 10 GbE and 8 Gbps Fibre Channel, along with the speed boost from solid-state disks, can barely keep up with the rapid rise of Nehalem-powered virtual machines, supplemented by a raft of cloud-based applications all clamoring for data.
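Some quick arithmetic shows how fast those links saturate. The link rates and encoding overheads below are public figures; the per-VM demand is purely an illustrative assumption, and the numbers are a rough sketch, not a benchmark.

```python
# Back-of-envelope comparison of storage-link throughput versus what a
# consolidated, virtualized host might ask for. Per-VM demand here is an
# assumed figure for illustration only.

def effective_mb_per_s(line_rate_gbps, encoding_efficiency):
    """Usable payload bandwidth in MB/s after line-encoding overhead."""
    return line_rate_gbps * 1e9 * encoding_efficiency / 8 / 1e6

# 10 GbE uses 64b/66b encoding; 8G Fibre Channel runs at 8.5 Gbaud with 8b/10b.
ten_gbe = effective_mb_per_s(10, 64 / 66)    # roughly 1200 MB/s usable
fc_8g = effective_mb_per_s(8.5, 8 / 10)      # roughly 850 MB/s usable

# Assume, say, 20 virtual machines each wanting a modest 100 MB/s of I/O.
vm_count, per_vm_mb_s = 20, 100
demand = vm_count * per_vm_mb_s

for name, supply in (("10 GbE", ten_gbe), ("8G FC", fc_8g)):
    status = "saturated" if demand > supply else "has headroom"
    print(f"{name}: {supply:.0f} MB/s usable vs {demand} MB/s demanded -> {status}")
```

Even with generous assumptions, a single link comes up well short of what a rack of busy VMs can demand, which is the gap the rest of this piece is about.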

That's not to say that there hasn't been any progress in this area lately. Some approaches involve major re-engineering for enterprise networks, such as Cisco's Unified Computing System. The system couples ample memory and 10 GbE interconnects with storage from the likes of EMC and NetApp, all designed to keep data as readily available as possible. It also offers a heavy dose of Data Center Ethernet and FCoE fabric extenders to help cut down on the number of switches and the management hassles that go with them.

For those not willing to undertake an entirely new architecture, however, there are simpler means of obtaining greater throughput. One of them is a new storage appliance from WhipTail Technologies that provides an instant solid-state storage array for time-critical data, such as online transactions. The company says it can replace two or more racks of Fibre Channel storage and more than 30 separate arrays. It even has a proprietary sequential write system that works around SSDs' poor random-write performance and extends the overall life of the drive.
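The general idea behind sequentializing writes is worth a sketch. What follows is a generic illustration of the technique (buffer scattered writes in RAM, then flush them as one sequential append, log-structured style), not WhipTail's actual, proprietary implementation.

```python
# Generic sketch of write sequentialization: coalesce random block writes
# in memory and commit them as a single sequential append, the way
# log-structured layouts reduce SSD write amplification and wear.
# Illustrative only; not any vendor's real design.

class SequentialWriteBuffer:
    def __init__(self, flush_threshold=4):
        self.pending = {}             # logical block -> data, coalesced in RAM
        self.log = []                 # simulated flash: an append-only log
        self.flush_threshold = flush_threshold

    def write(self, block, data):
        self.pending[block] = data    # rewrites of the same block coalesce
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # One sequential append instead of many scattered block writes.
        self.log.append(sorted(self.pending.items()))
        self.pending.clear()

buf = SequentialWriteBuffer()
for blk in (17, 4, 4, 99, 23):        # random, partly repeated writes
    buf.write(blk, f"data-{blk}")
print(len(buf.log), "sequential flush for 5 logical writes")
```

The payoff is twofold: the flash sees large sequential writes it handles well, and repeated writes to the same block never hit the media at all.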

Other simple solutions can be found on the new generations of servers themselves. Hewlett-Packard, for example, had the foresight to boost the I/O of its new Nehalem-class ProLiant G6 servers by building in support for 6 Gbps SAS drives. This effectively doubles the throughput of current 3 Gbps systems, even as 6 Gbps SATA solutions are still waiting to hit the channel.
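The arithmetic behind that generation bump is simple: both 3G and 6G SAS use 8b/10b encoding, so usable per-lane bandwidth scales directly with line rate. A quick sketch:

```python
# Rough per-lane throughput for SAS generations. Both 3G and 6G SAS use
# 8b/10b encoding (8 data bits carried in 10 line bits).

def sas_payload_mb_s(line_rate_gbps):
    """Approximate usable bandwidth per SAS lane in MB/s."""
    return line_rate_gbps * 1e9 * (8 / 10) / 8 / 1e6

sas_3g = sas_payload_mb_s(3.0)   # about 300 MB/s per lane
sas_6g = sas_payload_mb_s(6.0)   # about 600 MB/s per lane
print(f"3G SAS: {sas_3g:.0f} MB/s, 6G SAS: {sas_6g:.0f} MB/s ({sas_6g / sas_3g:.0f}x)")
```

Real-world numbers will land below these ceilings once protocol overhead and drive limits enter the picture, but the 2x headroom per lane is the point.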

Whenever a major processing advancement like Nehalem comes out, there is a temptation to get the new chips up and running as quickly as possible to feed the data beast. That's a good strategy as far as it goes, but without a similar upgrade to your I/O infrastructure, your new servers simply won't have enough data to truly take advantage of all that power.

It's kind of like dropping big bills on a new sports car and then getting stuck in the mud.