Enterprise networking is in a constant struggle to keep up with data loads. Every time there is a breakthrough in throughput, software developers quickly devise a scheme to exploit it and then push the envelope just a bit further, resulting in the same old bottlenecks.
Lately, though, that dance has been pushed to an extreme. With the rise of virtualization and the cloud, not only are applications and data shuttled around the Net, but entire operating environments and system platforms as well.
The networking industry has responded with advances like 10 GbE and WAN optimization, but it's becoming clear that even these developments will hit their practical limits relatively quickly, particularly as network convergence adds things like voice and hi-def video to the mix.
The obvious solution to this problem is to simply increase bandwidth. And indeed, that is the primary goal of the new 40G/100G Ethernet standard that was recently ratified by the IEEE. But as you delve deeper into the mechanics of advanced networking, it becomes clear that the pursuit of higher throughput cuts across many paths. Quite often, the way you use available bandwidth can affect application performance as much as the size of the overall pipe.
Indeed, throughput is only as good as the narrowest link, which is why the networking community is rapidly expanding its capabilities through multicore and other technologies.
"The consolidation of computing is dramatically increasing the throughput required for (high-performance) applications," says Steve Klinger, director of Cavium Networks' embedded processor group, which recently came out with a new line of OCTEON II chips that doubles the number of RISC processors to 32. "With the increase to 32 cores and the corresponding increase in our application hardware acceleration ... these processors are able to scale up to 40 Gbps throughput within a single chip."
Once you start talking multicore, however, you run into the same kind of parallel processing issues that can affect application performance on the PC side. But Klinger says Cavium has added hardware-based load balancing and scaling to avoid data bottlenecks.
"This allows the programmer to view the OCTEON as a single-core processor rather than develop complex software running on the cores to attempt to handle the scaling as required in alternative multicore architectures," he says. "Performance scales linearly all the way through the full 32 cores."
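How that kind of hardware scheduling sidesteps parallelism headaches is easier to see with a small software analogy. The sketch below is not Cavium's actual API; it simply illustrates the common flow-pinning technique, where each packet's 5-tuple is hashed to pick a core, so packets within one flow stay in order on a single core while distinct flows spread across all 32.

```python
# Illustrative analogy only (not the OCTEON API): hash a packet's
# flow identifier to choose a core. Packets of the same flow always
# hit the same core (preserving per-flow order), while different
# flows distribute roughly evenly across all cores.
import zlib

NUM_CORES = 32  # matches the 32-core figure quoted in the article

def core_for_flow(src_ip, dst_ip, src_port, dst_port, proto):
    """Map a flow's 5-tuple to one of NUM_CORES worker cores."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % NUM_CORES

# Same flow -> same core, so no per-flow locking is needed.
a = core_for_flow("10.0.0.1", "10.0.0.2", 40000, 443, "tcp")
b = core_for_flow("10.0.0.1", "10.0.0.2", 40000, 443, "tcp")
assert a == b
```

Because the dispatch decision is stateless and per-flow, adding cores adds capacity without adding software synchronization, which is one plausible reading of the "scales linearly" claim above.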
It's also important to remember that few networks rely on homogeneous architectures. In fact, multi-vendor environments are the norm in IT circles these days, rather than the exception. So whenever a new standard like 40G/100G Ethernet comes along, the industry benefits from broad interoperability. That's why groups like the Ethernet Alliance waste no time in fostering cross-platform compatibility. The group is planning its first 40G/100G interoperability event this fall in Santa Clara, Calif., most likely before key vendors have developed products that are in full compliance with the standard.
"It's not unusual to see interoperability testing even prior to the ratification of a standard to get early feedback on the specification," says Blaine Kohl, chief marketing officer at the alliance. "We don't know if anyone is fully compliant because, to date, there has been no third-party compliance testing. So yes, it's time to get this party started."
Kohl says a key focus of the initial testing will be implementation of the standard's unique lane-marking system, which allows data to be streamed across multiple media.
"In some cases, the medium is a single lane carrying multiple wavelengths, such as 100GBASE-LR4, while other media are multiple lanes carrying a single wavelength like 100GBASE-SR10," he says.
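The idea behind that lane-marking system can be sketched in a few lines. The toy model below is a hedged simplification of the standard's multi-lane distribution: data blocks are dealt round-robin across virtual lanes, and each lane periodically carries a lane-identifying alignment marker so the receiver can deskew the lanes and reassemble the original stream. The lane count and marker spacing here are illustrative, not the standard's actual values.

```python
# Simplified model of multi-lane distribution: round-robin striping
# plus per-lane alignment markers. NUM_LANES and MARKER_PERIOD are
# illustrative constants, not the figures from the 802.3ba standard.
NUM_LANES = 4
MARKER_PERIOD = 8

def distribute(blocks, num_lanes=NUM_LANES, period=MARKER_PERIOD):
    """Stripe blocks round-robin onto lanes, inserting markers."""
    lanes = [[] for _ in range(num_lanes)]
    for i, blk in enumerate(blocks):
        lane = i % num_lanes
        # Insert a lane-identifying marker at the start of each period.
        if len(lanes[lane]) % (period + 1) == 0:
            lanes[lane].append(("MARK", lane))
        lanes[lane].append(blk)
    return lanes

def reassemble(lanes):
    """Strip markers, then interleave lanes back into stream order."""
    stripped = [
        [b for b in lane if not (isinstance(b, tuple) and b[0] == "MARK")]
        for lane in lanes
    ]
    out = []
    for i in range(max(len(s) for s in stripped)):
        for lane in stripped:
            if i < len(lane):
                out.append(lane[i])
    return out

blocks = list(range(20))
assert reassemble(distribute(blocks)) == blocks
```

Whether the physical lanes are separate fibers or separate wavelengths on one fiber, the same marker-and-reorder logic lets the receiver put the stream back together, which is what makes the scheme media-independent.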