Is there something eerily familiar about the state of data center networking these days?
Time was, IT environments were built around mainframes -- massive, room-sized hunks of iron that fed data to the terminals spread throughout the enterprise. Problem was, these machines were expensive, difficult to manage and ultimately lacked the flexibility demanded of fast-moving data infrastructures. You can still find a few of them today, but most organizations have since adopted a distributed computing architecture built on small but powerful servers.
But that distribution of resources, plus the desire to pool those resources for mainframe-like performance when needed, is placing tremendous pressure on enterprise networks. In response, major network vendors are proposing high-speed, scalable network fabrics built around massive core switches like Cisco's Nexus 7000, 3Com's (soon to be HP's) S12500 or Force10's ExaScale E-Series.
See the connection? As far as infrastructure goes, enterprises will have shed those massively complex mainframes only to shift that burden to a massively complex core switch.
The argument now centers on how to manage this gargantuan switching capability, or more specifically, on the best way to implement the kind of virtual switch (VS) technology that allows physical components to be repurposed and reconfigured quickly and easily. Extreme Networks, which offers its own core technology in the BlackDiamond 8800 system, just pitched the radical idea of keeping VS management on the traditional switch management platform, rather than on the server platform as Cisco proposes. Extreme says this will go a long way toward overcoming vendor lock-in and improving server performance, though it's hard to square that claim with the company's stated goal of removing the management silos that separate systems and network-management teams.
But regardless of how the management is accomplished, the core switch represents a substantial concentration of enterprise infrastructure at a time when all other aspects of the data center are moving toward distributed architectures.
Who am I to question the top networking gurus of the IT industry? But it seems to humble ol' me that many of the same issues that plagued the concentration of processing power in the mainframe would apply to the concentration of network traffic in the core: namely, cost, complexity and the creation of a single point of failure.
The Internet, like the telephone network before it, was designed so that even if one central office went down, information could be rerouted elsewhere and still reach its destination, if perhaps a bit more slowly.
For enterprises then, the question is: If it was a bad idea to put all of your server eggs in the mainframe basket, why would you want to do the same thing to your network?