For anyone still weighing the pros and cons of consolidating network infrastructure onto 10 GbE fabrics, there's comfort in knowing that the Ethernet upgrade path is already well mapped out for the inevitable growth in data workloads.
Many top-tier enterprise and HPC organizations are already pushing toward 40 and 100 GbE networks despite the cost and complexity involved. That momentum, in turn, is spurring a flurry of activity to shore up standards and devise working platforms before mainstream IT catches up.
The IEEE is taking a lead role in this effort, having just launched a new study group tasked with working out the kinks in 100 GbE technology. The immediate goal is a set of three standards governing both local and wide area connectivity, as well as a 4x25 optical interface that would provide higher densities and lower costs than current 10x10 systems. Baseline specs are expected by the end of next year, with finalization to follow a year later. The group is headed by Dan Dove, senior director of technology at Applied Micro.
In the meantime, 100 GbE deployments are gathering steam using proprietary systems. Brocade, for example, recently provided the core of an end-to-end 100 GbE infrastructure at the Howard Hughes Medical Institute's Janelia Farm Research Campus in Ashburn, Va. The setup involves multiple MLXe routers providing fifty-six 100 GbE ports tied together under the company's Multi-Chassis Trunking (MCT) framework. The aggregation layer consists of dual MLXe-32 routers providing more than 2,400 single-GbE ports and several banks of 10 GbE ports. At the same time, IP communications and wireless LAN capabilities are supported through FCX Power over Ethernet Plus (PoE+) switches.
Meanwhile, Juniper is working closely with the Department of Energy's Oak Ridge National Laboratory on 100 GbE connectivity. The pair recently demoed a wide area virtual private network at SC11 using Juniper's MX 3D edge router and Myricom dual-port 10 GbE network cards. Each host tapped into the network via eight 10 GbE links, using the Common Communication Interface to support single-thread 80 Gbps connectivity.
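For readers unfamiliar with how multiple 10 GbE links get combined into one logical pipe, a rough sketch of the general idea is standard Linux link aggregation (802.3ad/LACP bonding). Note this is only an illustration of conventional aggregation, not the ORNL setup: with ordinary bonding, each individual flow hashes onto a single member link and tops out at 10 Gbps, whereas the demo's Common Communication Interface is notable precisely because it sustained 80 Gbps on a single thread. Interface names below (eth1 through eth8) are placeholders, not from the source.

```shell
# Hypothetical sketch: bond eight 10 GbE interfaces into one logical link
# using the Linux bonding driver in 802.3ad (LACP) mode. Requires root and
# a switch configured for LACP on the matching ports.
ip link add bond0 type bond mode 802.3ad

# Member interfaces must be down before enslaving them to the bond.
for nic in eth1 eth2 eth3 eth4 eth5 eth6 eth7 eth8; do
    ip link set "$nic" down
    ip link set "$nic" master bond0
done

# Bring up the aggregate and assign an address (example subnet).
ip link set bond0 up
ip addr add 10.0.0.1/24 dev bond0
```

The aggregate carries 80 Gbps only across many concurrent flows; pushing a single stream past the speed of one member link is what requires specialized approaches like the one demonstrated at SC11.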
Part of the urgency in establishing 100 Gbps Ethernet comes from the rapidly changing nature of IT networking. In an age where higher-capacity on- and near-server storage options are becoming readily available, the Ethernet community needs to push throughput higher in order to stay relevant. As John D'Ambrosia, chairman of the Ethernet Alliance, pointed out recently, advances in PCIe-on-motherboard technology make it highly likely that HPC circles will soon be dealing with 40 and 100 Gbps servers.
HPC technology, of course, has a way of trickling down to mainstream data environments in due course, so it's fortunate for networking professionals that much of the development legwork is going on now. Undoubtedly, enterprise applications and data requirements will drive the need for additional standards and integration development as deployments gain speed, but at least the core infrastructure should be well-defined by then, just in time for data rates to climb even higher.