In the drive for greater network performance, most of the attention goes toward boosting the throughput of enterprise infrastructure: 8 Gbps, 10 Gbps, 20 Gbps and beyond are generally seen as the path to a more nimble future.
This is true when it comes to pushing raw data from place to place, but once you get into application latency, there's a whole lot more to consider than just bandwidth.
Storage is one area that introduces latency, particularly as multithreaded applications become more popular, according to analyst George Crump. Even the faster SSDs coming out can still bog things down if you plan to manage them with standard software. Standalone systems like those from Violin Memory and Texas Memory Systems offer some improvement, although they don't fit easily into existing drive bays.
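To see how much latency the storage stack itself contributes, you can time small synchronous writes on your own hardware. The sketch below is a minimal, illustrative microbenchmark; the function name and parameters are our own, not from any vendor tool.

```python
import os
import tempfile
import time

def measure_write_latency(path, size=4096, trials=100):
    """Time small synchronous writes (write + fsync) to `path`,
    returning the average latency in microseconds. fsync forces
    each write through the OS cache so the storage stack is
    actually exercised."""
    buf = b"\0" * size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(trials):
            os.pwrite(fd, buf, 0)   # rewrite the same block each trial
            os.fsync(fd)            # wait for the device to acknowledge
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return elapsed / trials * 1e6

with tempfile.NamedTemporaryFile() as f:
    print(f"avg synchronous write latency: {measure_write_latency(f.name):.1f} us")
```

Run against different volumes (local SSD, SAN LUN, NFS mount), the same loop makes the per-access cost of each storage path directly comparable.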
On the server side, virtualization has led to too many virtual machines vying for too few I/O resources, which is why aggregation should be a key component of any consolidation project, according to Shai Fultheim, CEO of ScaleMP. Aggregation combines all those machines into a single logical system, at least as far as the network is concerned. Reducing the number of times individual processors access the backplane, or cutting the latency of each access, will have a tremendous impact on overall efficiency.
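The arithmetic behind that claim is simple: total I/O wait is accesses multiplied by per-access latency, so halving either factor halves the wait. A back-of-the-envelope sketch, with entirely hypothetical numbers:

```python
def total_io_wait_s(accesses, latency_us):
    """Total time spent waiting on the backplane, in seconds,
    given an access count and a per-access latency in microseconds."""
    return accesses * latency_us / 1e6

# Hypothetical workload: 10 million backplane accesses at 5 us each.
before = total_io_wait_s(10_000_000, 5)  # 50 seconds of cumulative wait
# Aggregation halves the number of accesses at the same per-access cost.
after = total_io_wait_s(5_000_000, 5)    # 25 seconds
print(before, after)
```

Cutting per-access latency from 5 µs to 2.5 µs yields the same saving, which is why either lever, fewer accesses or faster ones, pays off.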
It's also important to keep in mind that different applications have varying latency requirements, says analyst Paul McGuckin on IT World. He recommends establishing three to five tiers of IT performance, then rating applications according to their latency requirements. If you are bent on building a state-of-the-art network that guarantees real-time performance for every application, you're probably wasting money.
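That tiering exercise can be sketched in a few lines. The tier names, latency budgets, and example workloads below are hypothetical placeholders, not McGuckin's actual tiers:

```python
# Hypothetical performance tiers, ordered from strictest latency
# budget (ms) to most relaxed. Numbers are illustrative only.
TIERS = [
    ("tier-1: real-time", 1),      # e.g. trading, VoIP
    ("tier-2: interactive", 20),   # e.g. VDI, OLTP
    ("tier-3: responsive", 100),   # e.g. internal web apps
    ("tier-4: batch", 1000),       # e.g. backups, analytics
]

def assign_tier(required_latency_ms):
    """Place an application in the most relaxed (cheapest) tier whose
    latency budget still satisfies the app's requirement."""
    for name, budget_ms in reversed(TIERS):
        if budget_ms <= required_latency_ms:
            return name
    # Requirement is stricter than any budget: fall back to tier 1.
    return TIERS[0][0]

print(assign_tier(50))    # an app tolerating 50 ms lands in tier-2
print(assign_tier(5000))  # a nightly batch job lands in tier-4
```

The point of the exercise is the fallthrough: only the applications that genuinely need tier-1 budgets pay for tier-1 infrastructure.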
To be sure, broader connectivity throughout the data center will go a long way toward speeding things up, so by all means make the jump to 10 GbE or even 20 Gbps InfiniBand if you can. But recognize that wider pipes are only half the battle.