Network Problems? More Bandwidth Might Not Be the Answer

Arthur Cole

The quest for improved storage networking is usually couched in terms of increased bandwidth. However, most networking specialists will readily admit that bandwidth is only one aspect of an increasingly complex data environment, and simply adding wider pipes throughout the enterprise will not improve overall performance on its own.

This is becoming increasingly obvious in highly virtualized environments. A recent survey by the Enterprise Strategy Group revealed a number of key issues inhibiting broader virtual deployment, with the top hurdle being inadequate integration between server, storage, networking and virtual resources. In other words, the limiting factor is not the inability to get data from one place to another in a timely fashion, but how to handle it once it gets there.

This shouldn't come as a big shock to anyone, according to UK storage specialist Archie Hendryx, considering most SAN fabrics are already over-provisioned by a factor of five or more with just 4 G Fibre Channel, let alone the 8 G and 16 G formats hitting the channel. In fact, higher bandwidth often creates greater storage networking problems, because the ever-shrinking bit period is more susceptible to faults in the underlying physical network. Sure, your data may move faster over the wire, but there will be more, and more severe, traffic jams at every pothole.
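To see how quickly that bit period shrinks, here is a back-of-the-envelope calculation. The line rates are assumptions drawn from the published Fibre Channel signaling specs (4GFC at 4.25 Gbaud, 8GFC at 8.5 Gbaud, 16GFC at 14.025 Gbaud), not figures from this article:

```python
# Rough bit periods at nominal Fibre Channel line rates.
# Rates in baud are assumptions from the FC physical-layer specs.
line_rates = {"4GFC": 4.25e9, "8GFC": 8.5e9, "16GFC": 14.025e9}

for name, baud in line_rates.items():
    period_ps = 1e12 / baud  # one bit period, in picoseconds
    print(f"{name}: {period_ps:.0f} ps per bit")
# 4GFC: 235 ps, 8GFC: 118 ps, 16GFC: 71 ps
```

Each doubling roughly halves the window in which a bit must be cleanly received, which is why marginal optics or cabling that were tolerable at 4 G can start throwing errors at 8 G and 16 G.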

In fact, the hidden truth about data networking is that unless you are in an extremely high-traffic environment or are dealing with bandwidth-heavy applications like streaming media or high-volume backup operations, even relatively low 1 and 2 Gbps infrastructure should suffice, according to network consultant Adam Jones. Small transfers complete just as quickly over a 1 G pipe as a 2 G one because the signaling latency is the same; a wider pipe only increases how much data can be moved at one time, which matters little when there is little data to move.
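A rough sketch makes the point. Here total I/O time is modeled as a fixed per-I/O latency plus serialization time on the wire; the 100 µs fixed latency and 512-byte payload are illustrative assumptions, not measurements from the article:

```python
# Why small transfers see little benefit from a wider pipe:
# total time = fixed per-I/O latency + time to serialize the bits.
def io_time_us(payload_bytes, link_gbps, fixed_latency_us=100.0):
    serialization_us = payload_bytes * 8 / (link_gbps * 1e3)  # bits / Gbps -> µs
    return fixed_latency_us + serialization_us

payload = 512  # a small 512-byte frame
print(f"1 Gbps: {io_time_us(payload, 1):.1f} µs")  # ~104.1 µs
print(f"2 Gbps: {io_time_us(payload, 2):.1f} µs")  # ~102.0 µs
```

Doubling the link halves only the serialization term, so the small I/O finishes barely 2 percent sooner; the fixed latency of the array and the fabric dominates.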

None of this is to suggest that investment in higher bandwidth infrastructure is a waste of time. Just make sure that the network latency you are experiencing is, in fact, due to bandwidth constraints and not CRC errors, problems in the physical infrastructure or some other disconnect.
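As a sketch of that triage, the rule of thumb might be encoded like this. The counter names and the 80 percent utilization threshold are hypothetical choices for illustration, not any vendor's API or recommendation:

```python
# Hypothetical triage: given per-port counters for an interval, decide
# whether latency is plausibly bandwidth-bound or a physical-layer fault.
def likely_bandwidth_bound(stats):
    """stats: dict with 'utilization' (0-1), 'crc_errors', 'link_resets'."""
    physical_faults = stats["crc_errors"] > 0 or stats["link_resets"] > 0
    saturated = stats["utilization"] > 0.8
    return saturated and not physical_faults

# A lightly loaded but error-prone link: upgrading its bandwidth won't help.
port = {"utilization": 0.35, "crc_errors": 12, "link_resets": 1}
print(likely_bandwidth_bound(port))  # False
```

Only when the link is genuinely saturated and clean does a wider pipe become the right fix.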

In the end, you could end up with a much more robust network and quite a bit of cash in your wallet.



Oct 31, 2010 9:01 AM — Congested Storage Network says:

Good article with a valid point that gets lost in the race to upgrade quickly. VMware has also brought with it further abstraction of the SAN layer, leading many to overprovision their SAN. As you also rightly mentioned, I don't see VMware solving this problem, certainly not with SIOC or the many virtualised IO companies popping up, which can't look into the SAN switches.

Also picking up on the article you mentioned from Archie Hendryx, jitter and poor cabling can lead to real problems. As you rightly mentioned, CRC errors, link loss and code violations should be looked at before assuming it's a bandwidth problem. From what I can tell from the web recently, Virtual Instruments (the spin-off from long-time FC analysts Finisar) is the only company capable of seeing such issues in real time and solving them.

Would you not agree that if the storage vendors worked with such a tool, you could upgrade with confidence? You would spread the throughput correctly, allowing lower fan-out ratios and giving VMware, backup and business-critical apps the throughput they need without overprovisioning. In such a situation, more bandwidth would be the answer!

