8G Fibre Channel: Yes, but Then What?


The march toward 8 Gbps Fibre Channel SANs continues, bringing in some much needed bandwidth for enterprises struggling to maintain storage connectivity for increasing numbers of virtual machines. The question remains, though, whether it makes more sense to continue investing in Fibre Channel hardware or simply layer existing infrastructure on top of even wider-bandwidth solutions.

There's no doubt that there will be a plethora of 8G Fibre Channel hardware to choose from in the coming year -- much of it designed to provide an easy upgrade for mission-critical systems. IBM made headlines this week with what it says is the first end-to-end 8G FC infrastructure. The company has bundled new FC daughter cards from QLogic onto its BladeCenter platform, overcoming the bandwidth constraints that would otherwise exist in the server chassis backplane. In aggregate, IBM claims it can hit speeds of 40 GBps.

Earlier this month, Infortrend bumped up its EonStor RAID subsystem with a new 8G FC engine built on a new generation of modular ASICs. The 16-drive module can scale up to 112-drive configurations, offering two 8G ports per controller for a top read/write performance of 2,800/870 MBps. The system was designed in coordination with ATTO Technology, which contributed its 8G Celerity HBAs to the mix.

It's hard to argue against higher-bandwidth Fibre Channel at the moment. Dell'Oro Group says the overall market for FC switches went from $464 million in the fourth quarter of 2007 to $483 million for the same period in 2008, an impressive gain considering general IT spending tanked late last year along with the rest of the economy. Going forward, the group says the only thing that could diminish sales would be continued layoffs depleting the ranks of enterprise users.

Even then, the rise of virtualization creates two problems that 8G Fibre Channel is uniquely suited to solve, according to this article from Storage Switzerland's George Crump. The first is the need for additional HBAs in newly virtualized servers; the second is the ability to share N_Ports through the N_Port ID Virtualization (NPIV) standard. In both cases, Fibre Channel greatly reduces the amount of networking infrastructure needed to maintain adequate data flow to and from virtual resources. As virtualization, and by extension cloud computing, gain momentum, wider pipes will be needed to keep enterprises from drowning in their own productivity.
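To make the NPIV point concrete: on Linux, NPIV virtual ports can be created through the fc_host sysfs interface, which lets a single physical HBA present multiple virtual WWPNs to the fabric, one per virtual machine if desired. A rough sketch follows; the host number and the WWPN:WWNN pair are placeholders, and both the HBA driver and the switch must support NPIV:

```shell
# List physical FC hosts; NPIV-capable drivers expose a
# max_npiv_vports attribute on each fc_host.
ls /sys/class/fc_host/
cat /sys/class/fc_host/host3/max_npiv_vports

# Create a virtual port by writing "WWPN:WWNN" (two 64-bit hex
# values separated by a colon) to vport_create. Values here are
# placeholders -- use addresses assigned for your fabric.
echo "2101001b32a9d5e8:2001001b32a9d5e8" > \
    /sys/class/fc_host/host3/vport_create

# The virtual port shows up as an additional fc_host entry with
# its own WWPN; it can be torn down via vport_delete.
echo "2101001b32a9d5e8:2001001b32a9d5e8" > \
    /sys/class/fc_host/host3/vport_delete
```

Because each virtual port gets its own WWPN, zoning and LUN masking can follow a virtual machine as it migrates between hosts, which is where the infrastructure savings Crump describes come from.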

All this is true in the short term, but I still can't help wondering why enterprises would continue investing in higher-bandwidth Fibre Channel technology when they can get even wider pipes through 10 GbE or 20 Gb InfiniBand technology -- unifying all their networks, including legacy Fibre Channel infrastructure, onto an even broader fabric.

The Fibre Channel community has only just begun talking about a 16G platform. InfiniBand is already well on the way toward 40 Gb, while Ethernet is aiming for 100 G. Fibre Channel's next big move? Finalizing plans to layer itself on top of Ethernet.