Network Convergence: Too Many Tiers, Not Enough Protocol

Arthur Cole

Enterprise managers are beholden to two masters: increased performance and lower costs.

 

Under normal circumstances, these demands would be in constant conflict. When it comes to network infrastructure, however, it seems entirely possible to eliminate redundancy within existing infrastructure and, at the same time, build a more reliable environment capable of handling increasingly large and complex data loads.

 

The idea of network convergence isn't exactly new, but the tools and techniques that are making it happen are coming at a steady pace now. And a new wrinkle has been added of late, as network architectures begin to account for the dramatic increases in scale that data centers can now achieve through virtualization and the cloud. For many, the only logical means to deal with these new environments without breaking network capital budgets is to flatten out the numerous data tiers that have arisen over the decades.

 

"There are a lot of factors that one should keep in mind when selecting a networking (platform)," said Asaf Somekh, vice president of marketing at Voltaire, now a unit of Mellanox. "Performance, price, power, scalability and reliability are all obvious ones. But as we move into next-generation, virtualized data centers, there are new criteria that IT managers need to evaluate. One of these is network design. Can you provide a flat fabric consisting of one or two tiers of switches? Moving away from an expensive tier network architecture to a flatter, more efficient design enables cost savings and alleviates management headaches."

 

Naturally, Ethernet is the prime candidate for the underlying fabric of a converged network architecture, if for no other reason than it has permeated nearly all areas of data infrastructure both inside and outside the data center. And with the recent development of both 40 Gb and 100 Gb formats, there is no reason why an enterprise couldn't go all-Ethernet with little trouble -- and not just because of the increased bandwidth.


 

"The 40G and 100G standard uses many of the building blocks that were created in the IEEE 802.3ae (10GbE) standard, but it added the ability to perform lane marking, which permits data to be streamed across different media and to easily be re-assembled at the receiver," says Blaine Kohl, chief marketing officer for the Ethernet Alliance. "In some cases, the media is a single lane carrying multiple wavelengths, 100GBASE-LR4, while other media are multiple lanes carrying a single wavelength, like 100GBASE-SR10."

 

A quick scan of the Ethernet market shows broad acceptance as the transition to flatter networks unfolds. Infonetics says the Ethernet switch market (both carrier and enterprise) topped $18.5 billion in 2010, a nearly 30 percent increase over the previous year. Some of that rise can be attributed to the resumption of capital spending following a recession-driven pullback in 2009, but it nonetheless represents continued confidence that Ethernet will be the protocol of choice for much of the worldwide data infrastructure. Dell'Oro Group, meanwhile, reports that enterprise Ethernet port shipments were up by a third in the latter half of 2010, driven primarily by 10 GbE deployments.

 

Still, there are those who argue that placing all of your IT eggs in the Ethernet basket will not necessarily produce an optimal networking environment. In fact, the very notion of a single, all-encompassing network architecture strikes some as a less-than-ideal approach to increased efficiency and performance, particularly as the various protocols in play continue to one-up each other on a regular basis.

 

"All of these protocols leapfrog each other," says Shaun Walsh, vice president of corporate marketing at Emulex. "Every time someone has a new data rate, they say 'that should be new fabric.' But will everything collapse onto single wire? That hasn't happened in the history of the computer."

 

The simple fact is that not all data is the same, so it only stands to reason that there should be multiple networking options on a common physical layer -- not just Fibre Channel over Ethernet, but, say, 16 G native Fibre Channel over a 40 G or 100 G wire.
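The layering idea behind options like Fibre Channel over Ethernet can be illustrated with a hedged sketch: a complete FC frame simply rides as the payload of an Ethernet frame carrying the registered FCoE EtherType (0x8906), so storage and LAN traffic share one wire. The class and field names below are illustrative, not a real frame format, which also includes an FCoE header, padding and CRC handling.

```python
# Toy sketch of the layering behind Fibre Channel over Ethernet:
# a complete FC frame becomes the payload of an Ethernet frame,
# letting storage traffic share the LAN's physical layer. Field
# names are illustrative only.

from dataclasses import dataclass

@dataclass
class FCFrame:
    source_id: str       # FC source port ID
    dest_id: str         # FC destination port ID
    payload: bytes       # SCSI command or data

@dataclass
class EthernetFrame:
    src_mac: str
    dst_mac: str
    ethertype: int
    payload: object      # here, an encapsulated FC frame

FCOE_ETHERTYPE = 0x8906  # registered EtherType for FCoE

def encapsulate(fc: FCFrame, src_mac: str, dst_mac: str) -> EthernetFrame:
    """Wrap a Fibre Channel frame for transport over Ethernet."""
    return EthernetFrame(src_mac, dst_mac, FCOE_ETHERTYPE, fc)

fc = FCFrame("0x010100", "0x020200", b"SCSI READ")
eth = encapsulate(fc, "aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02")
assert eth.ethertype == FCOE_ETHERTYPE
assert eth.payload is fc
```

The point of the sketch is that convergence happens at the frame level: the FC frame is untouched inside its Ethernet wrapper, which is why FCoE depends on a lossless Ethernet underneath rather than on changes to Fibre Channel itself.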

 

"Fibre Channel applies to the structured part of the world, like block applications," Walsh says. "But more and more data is unstructured -- video and Powerpoints at 30 Mb each. That portion is growing much faster than Fibre Channel, which, after all, doesn't go over the cloud, only the endpoints. As we move toward more cloud work and outsourced applications, we see the Ethernet growing and used more for storage."

 

Indeed, it seems that the former war for network dominance among the major protocols has given way to an armistice based on the realization that there is more to be gained by cooperation than conflict.

 

"InfiniBand and Ethernet can easily co-exist in the data center," says Voltaire's Somekh. "InfiniBand works with legacy systems and doesn't require a forklift approach to re-architect the data center. Some of our InfiniBand switches come with built-in Ethernet ports to easily connect to Ethernet and storage. We stand behind both InfiniBand and Ethernet as we provide scale-out data center fabric solutions built on both technologies."

 

Regardless of protocol, however, eliminating as many network tiers as possible is still a worthy goal. Even an end-to-end 10 GbE network will prove only marginally effective if data still has to contend with Spanning Tree and other architectures designed for the static enterprise environments of the past. That's why many network vendors are investing heavily in more fluid techniques like TRILL (Transparent Interconnection of Lots of Links) and VEPA (Virtual Ethernet Port Aggregator), not to mention the kinds of non-blocking architectures that allow servers and switches to bypass the aggregation layer entirely by connecting directly to the core network.
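The contrast between Spanning Tree and a TRILL-style approach comes down to simple topology math: a loop-free tree over N switches can keep only N - 1 links active, while shortest-path routing can keep every link carrying traffic. The Python sketch below counts usable links on a hypothetical four-switch full mesh; it is an illustration of that arithmetic, not a protocol implementation.

```python
# Toy contrast between Spanning Tree and a TRILL-style fabric.
# Spanning Tree avoids Layer 2 loops by disabling every redundant
# link, leaving only a tree; TRILL routes on shortest paths, so
# redundant links stay active and can share load.

import itertools

switches = ["A", "B", "C", "D"]
# Full mesh: every pair of switches gets a link.
links = list(itertools.combinations(switches, 2))

# Spanning Tree: a loop-free tree over N switches uses N - 1 links;
# the remaining links are blocked and carry no traffic.
stp_active = len(switches) - 1
stp_blocked = len(links) - stp_active

# TRILL-style shortest-path forwarding keeps every link active and
# can spread flows across equal-cost paths.
trill_active = len(links)

print(f"Links in mesh:      {len(links)}")
print(f"Active under STP:   {stp_active} (blocked: {stp_blocked})")
print(f"Active under TRILL: {trill_active}")
```

On this small mesh, Spanning Tree idles half the links the enterprise paid for; the gap widens as more redundant paths are added, which is precisely the capital-efficiency argument behind flatter, multipath fabrics.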

 

As with all things enterprise, the ideal approach is one that allows you to adopt a forward-looking development program while building on today's infrastructure. Increasing bandwidth is a key enabling factor for a more streamlined, more powerful network architecture, but it is not the only tool in the shed.

 

A truly optimal environment will continue to require a delicate minuet between the underlying physical infrastructure, the network and storage architecture, and management/automation technology.


