And Now, Fibre Channel over Ethernet

Arthur Cole

Straight out of the "Why Didn't We Think of This Before" category comes word that a group of leading storage vendors is petitioning the American National Standards Institute for a way to run Fibre Channel traffic over Ethernet. The proposed FCoE standard would likely breathe new life into Fibre Channel by giving entrenched FC networks a way to leverage the new 10 GbE technology now hitting the market.

Reaction was mostly positive in the first few hours after the announcement. Enterprise Strategy Group's Brian Garrett cited one of FCoE's major advantages, telling internetnews.com that it would end the wasteful practice of maintaining and managing two separate networks for general and enterprise-class data.

In fact, the move might make some IT execs think twice about deploying iSCSI. iSCSI, after all, relies on the TCP/IP stack to process data, while FCoE would run directly over the much simpler Ethernet layer.

According to news.com, existing SAN management tools should work with FCoE, although servers will need new adapter cards. And since a final standard probably won't appear for another year and a half, today's 1 GbE adapters are unlikely to support it.

One side note: one of the leading vendors in the FCoE drive is Brocade. Just earlier this week, Brocade CEO Michael Klayko, in announcing new 10 GbE and iSCSI SANs, hinted that an Ethernet version of the company's Fibre Channel line might emerge later this year.

He should have said later this week.



Apr 24, 2007 2:37 AM Julian Satran says:
What a piece of nostalgia :-) Around 1997, when a team at IBM Research (Haifa and Almaden) started looking at connecting storage to servers over the "regular network" (the ubiquitous LAN), we considered many alternatives (another team even looked at ATM, still a computer-network candidate at the time). I won't take you through all of our rationale (we revisited some of it at the end of 1999 with a team from Cisco, before we convened the first IETF BOF in 2000 at Adelaide, which resulted in iSCSI and all the rest), but some of the reasons we chose not to run Fibre Channel over raw Ethernet were:

- Fibre Channel Protocol (SCSI over the Fibre Channel link) is only "mildly" effective because it implements endpoints in a dedicated engine (offload), has no transport layer (recovery is done at the application layer under the assumption that the error rate will be very low), limits the network's physical and logical span (number of switches), and achieves flow control/congestion control with a mechanism adequate only for a limited-span network (credits). The packet-loss rate is almost nil, which is what allows FCP to avoid an end-to-end transport layer, and the FCP switches are simple (addresses are local, and memory requirements can be bounded through the credit mechanism). However, FCP endpoints are inherently costlier than simple NICs: the cost argument (initiators are more expensive).

- The credit mechanism is highly unstable for large networks (check switch vendors' planning documents for the limits on network diameter): the scaling argument.

- The assumption of low loss due to errors might change radically when moving from 1 to 10 Gb/s: again the scaling argument.

- Ethernet has no credit mechanism, and any mechanism with a similar effect increases endpoint cost.

- Building a transport layer into the protocol stack has always been the preferred choice of the networking community: the community argument. The "performance penalty" of a complete protocol stack has always been overstated (and overrated). Advances in protocol-stack implementation and finer tuning of congestion-control mechanisms make conventional TCP/IP perform well even at 10 Gb/s and beyond. Moreover, the multicore processors now dominant on the computing scene have enough spare cycles to make any "offloading" a mere code-restructuring exercise (see the stack reports from Intel, IBM, etc.).

- Building on a complete stack makes available a wealth of operational and management mechanisms built up over the years by the networking community (routing, provisioning, security, service location, etc.): the community argument.

- Higher-level storage access over an IP network is widely available, and serving both block and file over the same connection, with the same support and management structure, is compelling: the community argument.

- Highly efficient networks are easy to build over IP with optimal (shortest-path) routing, while Layer 2 networks use bridging and are limited by the logical tree structure that bridges must follow. The effort to combine routers and bridges (RBridges) promises to change that, but it will take time to finalize (and we don't know exactly how it will operate). Until then, the scale of Layer 2 networks will be seriously limited: the scaling argument.

As a side note, a performance comparison made in 1998 showed SCSI over TCP (a predecessor of the later iSCSI) performing better than FCP at 1 Gb/s for block sizes typical of OLTP (4-8 KB). Reply
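The credit argument in the comment above can be illustrated with a toy model (a sketch with invented names, not Fibre Channel code): a sender may transmit only while it holds buffer credits, and once the receiver's advertised buffers are exhausted the link stalls until a credit is returned. On a long or large network, keeping the pipe full requires credits proportional to the round-trip delay, which is the scaling limit the comment describes.

```python
# Illustrative sketch of buffer-to-buffer credit flow control. Class and
# method names are invented for this example; no real FC implementation
# is being reproduced here.

class CreditLink:
    def __init__(self, buffer_credits):
        # Credits advertised by the receiver = its free receive buffers.
        self.credits = buffer_credits
        self.delivered = 0

    def send_frame(self):
        """Transmit one frame if a credit is available; otherwise stall."""
        if self.credits == 0:
            return False          # sender must wait: no buffer at receiver
        self.credits -= 1
        self.delivered += 1
        return True

    def return_credit(self):
        # Receiver drained one buffer and signals a credit back to the sender.
        self.credits += 1

link = CreditLink(buffer_credits=2)
sent = [link.send_frame() for _ in range(3)]
print(sent)               # [True, True, False]: third frame stalls
link.return_credit()      # receiver frees a buffer
print(link.send_frame())  # True: transmission resumes
```

With only two credits, throughput collapses whenever the credit-return round trip is long relative to the frame time, which is why credit-based links favor short, small-diameter networks.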
Apr 24, 2007 2:37 AM Julian Satran says:
That comparison was what convinced us to take the path that led to iSCSI, and we used plain-vanilla x86 servers with plain-vanilla NICs and Linux (with similar measurements conducted on Windows). The networking and storage communities acknowledged those arguments and developed iSCSI and the companion protocols for service discovery, boot, etc. The community also acknowledged the need to support existing infrastructure and extend it in a reasonable fashion, and developed two protocols: iFCP (which lets hosts with FCP drivers and IP connections reach storage through a simple conversion from FCP to TCP packets) and FCIP (which extends the reach of FCP through IP by connecting FCP islands over TCP links). Both have been implemented, and their foundations are solid.

The current attempt at developing a "new-age" FCP over an Ethernet link goes against most of the arguments that gave us iSCSI. It ignores networking layering practice, builds an application protocol directly above a link layer and thus limits scaling, mandates elements at the link and application layers that make endpoints more expensive, and leaves aside the whole "ecosystem" that accompanies TCP/IP (but not Ethernet).

In a related effort (and at one point while developing iSCSI), we also considered moving away from SCSI (as some non-standardized but locally popular software did, e.g., NBP), but decided against it. SCSI is a mature and well-understood access architecture for block storage and is implemented by many device vendors. Moving away from it would not have been justified at the time. Reply
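The layering contrast running through the post and the comments can be summarized in a few lines (labels only, with assumed stack orderings, not real frame formats): iSCSI rides a full transport stack, while FCoE puts the storage protocol directly on the link layer.

```python
# Sketch of the two encapsulation stacks discussed above. These are just
# illustrative labels, not packet definitions.
ISCSI_STACK = ["SCSI", "iSCSI", "TCP", "IP", "Ethernet"]
FCOE_STACK = ["SCSI", "FCP", "FCoE", "Ethernet"]

def has_transport(stack):
    """End-to-end recovery and congestion control come from a transport layer."""
    return "TCP" in stack

print(has_transport(ISCSI_STACK))  # True: loss recovery and routing via TCP/IP
print(has_transport(FCOE_STACK))   # False: the link itself must be near-lossless
```

The absence of a transport layer in the second stack is exactly why the comments argue FCoE inherits Fibre Channel's need for a lossless, limited-span network.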
