End-to-End FCoE: Still Some Work to Be Done?

Arthur Cole

For a long time, it seemed iSCSI was going to have Fibre Channel for lunch as the storage networking protocol of choice for the vast majority of small and medium-sized enterprises out there. The thinking was that iSCSI, as an Ethernet protocol, was cheaper and easier to deploy and provided service that was good enough, save for the most demanding environments.

But take a look at what happened. Fibre Channel is now available on Ethernet as well, and it seems to be giving iSCSI a run for its money as the preferred solution across a range of enterprise types and sizes. Now, we have the release of an end-to-end Fibre Channel over Ethernet (FCoE) solution from Cisco and NetApp, promising an even more streamlined networking infrastructure coupled with the advanced storage management capabilities that Fibre Channel is known for.

But is this really the solution we've been waiting for?

First off, it should be noted that the system, which has already been certified by VMware, goes a long way toward reducing upfront capital costs. Independent consultant David Chernicoff points out that we should see more of these types of turnkey solutions for virtualized environments in the future. And the recent ratification of 40G and 100G Ethernet solutions should provide a platform that is more than capable of handling the increased data loads that those environments should generate.

And there's really not much to complain about in the Cisco/NetApp system. It combines the Cisco Nexus 5000 switch with NetApp's FAS series storage array, both supporting FCoE in the VMware vSphere environment. The package is part of a long-standing collaborative effort among Cisco, NetApp and VMware that brings together technologies, channel partners, system integrators and others to create integrated data center architectures.

The only problem here might be the use of the term "end-to-end," according to The Register's Chris Mellor. He notes that the system does not support FCoE through the core switch, which means the FC packets have to be separated from Ethernet traffic and sent to the array via multiple hops. Ultimately, this will be resolved once Cisco brings FCoE into its FabricPath switching system, which supports the multi-hop TRILL (Transparent Interconnection of Lots of Links) protocol. But even here, it appears that FabricPath will only be available on the Nexus 7000 switch, not the 5000. You'll still be able to drive FCoE from the server CNA to the array, but it won't be as streamlined an architecture as a TRILL-enabled system would be.

In any endeavor, success is rarely total. And the fact that Fibre Channel now has a means to run the length of an Ethernet environment alongside iSCSI goes a long way toward keeping the protocol at the forefront of enterprise networking. True, that is a step down from its former position as a storage network platform in its own right, but in terms of delivering the kind of features and functionality that enterprise users have come to rely on, it's the best game in town.

Subscribe to our Newsletters

Sign up now and get the best business technology insights direct to your inbox.

Aug 4, 2010 8:39 AM David says:

I'm just trying to figure out how you mix FC and Ethernet across the same network, not just server-switch, without making the Ethernet frames sit for a long period while the long FC sequences pass by.   

Aug 5, 2010 2:16 PM Arthur Cole says:

OK, I got this back from Emulex:

At the most fundamental level, Fibre Channel sequences are broken into Fibre Channel frames that become encapsulated in Ethernet packets. These packets are about 2KB in size, so slightly larger than a standard Maximum Transmission Unit (MTU) Ethernet packet, but smaller than a 9000 byte jumbo frame packet. Network controllers and switches interleave packets, so even if there is a large SCSI transaction of say a megabyte, the resultant Fibre Channel traffic is only going on the wire 2KB at a time, and other traffic is interleaved on a packet by packet basis.
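To make the segmentation concrete, here is a minimal sketch (my own illustration, not Emulex code) of how a large SCSI transaction gets carved into Fibre Channel frames of roughly 2KB each before FCoE encapsulation; the 2112-byte figure is the standard maximum FC frame data field, and the helper names are hypothetical:

```python
# Hypothetical sketch: segmenting a SCSI transfer into FC frame payloads.
# 2112 bytes is the maximum Fibre Channel frame data field, which is why
# an FCoE packet ends up slightly larger than a standard 1500-byte MTU
# Ethernet frame but well under a 9000-byte jumbo frame.
FC_FRAME_MAX_PAYLOAD = 2112

def segment_scsi_transfer(total_bytes):
    """Break one large SCSI transaction into per-frame payload sizes."""
    frames = []
    remaining = total_bytes
    while remaining > 0:
        chunk = min(remaining, FC_FRAME_MAX_PAYLOAD)
        frames.append(chunk)
        remaining -= chunk
    return frames

def interleave(fcoe_frames, lan_frames):
    """Alternate the two traffic classes packet by packet, as a switch
    or converged network adapter would on a shared wire."""
    wire = []
    i = j = 0
    while i < len(fcoe_frames) or j < len(lan_frames):
        if i < len(fcoe_frames):
            wire.append(("FCoE", fcoe_frames[i]))
            i += 1
        if j < len(lan_frames):
            wire.append(("LAN", lan_frames[j]))
            j += 1
    return wire

# A 1MB SCSI transaction never hogs the link in one burst:
frames = segment_scsi_transfer(1 * 1024 * 1024)
print(len(frames))  # 497 frames of at most 2112 bytes each
```

The point of the sketch is the last line: even a megabyte-scale storage operation hits the wire only ~2KB at a time, so ordinary LAN frames slot in between FCoE packets rather than waiting behind a long FC sequence.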

To ensure fairness, the Data Center Bridging (DCB) standards include Enhanced Transmission Selection (ETS) IEEE 802.1Qaz, which allows network administrators to define priority groups, and assign minimum bandwidth to each group. For example, a network traffic group could be assigned 4Gb/s while an FCoE group is assigned 2Gb/s, ensuring that each group would get that minimum bandwidth. In simplistic terms this tells the network controllers and switches how much FCoE traffic to interleave versus standard network traffic when there is contention for the link.
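In simplistic terms, the ETS guarantee behaves like a weighted round-robin over the priority groups. Here is a rough illustration of that idea (my own sketch under that simplification, not the 802.1Qaz algorithm itself; group names and weights mirror the 4Gb/s-vs-2Gb/s example above):

```python
from collections import deque

# Hypothetical ETS-style priority groups: weights mirror the example of
# 4 Gb/s guaranteed to ordinary network traffic vs. 2 Gb/s to FCoE.
weights = {"LAN": 4, "FCoE": 2}

def weighted_round_robin(queues, weights):
    """Per scheduling round, dequeue up to `weight` packets from each
    group, so under contention each group gets at least its weighted
    share of the link. A crude stand-in for ETS bandwidth allocation."""
    wire = []
    while any(queues.values()):
        for group, w in weights.items():
            for _ in range(w):
                if queues[group]:
                    wire.append(queues[group].popleft())
    return wire

# Both queues are backlogged, i.e., the link is under contention:
queues = {"LAN": deque(f"lan{i}" for i in range(8)),
          "FCoE": deque(f"fc{i}" for i in range(8))}
order = weighted_round_robin(queues, weights)
print(order[:6])  # ['lan0', 'lan1', 'lan2', 'lan3', 'fc0', 'fc1']
```

Each round sends four LAN packets for every two FCoE packets, which is the 2:1 bandwidth ratio the administrator configured; when one group has nothing queued, the other simply uses the whole link.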

Hope that helps

Aug 5, 2010 6:27 PM Arthur Cole says:

Hi David,

Good question. Let me see if I can get an FC expert to handle that one.
