Servers: A Tale of Two Technologies

"It was the best of times, it was the worst of times..."

Dickens was describing London and Paris during the French Revolution. But in today's world, it is an apt description of the IT industry during the virtual revolution.

For the worst of times, we need look no further than the server industry, which reported another disastrous quarter earlier this week. According to IDC, worldwide shipments fell some 26.5 percent year-over-year in the first quarter of 2009, the steepest decline in five years, with all of the major vendors posting double-digit revenue drops. Overall, the industry shipped only 1.49 million units, and revenues fell nearly a quarter to $9.9 billion.

The source of all this woe is the one-two punch of the recession and virtualization, which dampens the demand for new hardware through higher utilization of existing machines. While this may be good for capital budgets, as well as the environment, it's proving to be a real burden for the server industry, which had long counted on a steady refresh rate to keep its coffers full. The decline was most keenly felt in x86 devices.

IDC also reports that the picture looks much the same so far in the second quarter, although it predicts a tepid rebound by the fourth.

To their credit, many of the top server vendors are not trying to push back the tide but are actively embracing virtualization and other advanced technologies designed to produce more efficient hardware platforms. IBM, for instance, is gearing up for a new server line built on Intel's forthcoming Nehalem-EX architecture, which features up to 64 cores across eight processors. Although the system is likely to be expensive, it could do the job of multiple blade servers through its ability to handle up to 128 individual threads. The chip itself also provides 16 memory slots per socket and four QuickPath interconnect links for processing large amounts of data in tandem.
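The core, thread, and memory figures above follow from straightforward per-socket multiplication. A back-of-envelope sketch, assuming eight cores per socket and two hardware threads per core (Hyper-Threading), figures consistent with the totals cited above:

```python
# Back-of-envelope capacity math for an eight-socket Nehalem-EX system.
# Assumption: 8 cores per socket, 2 hardware threads per core.
sockets = 8
cores_per_socket = 8
threads_per_core = 2
dimm_slots_per_socket = 16

total_cores = sockets * cores_per_socket              # 8 x 8  = 64 cores
total_threads = total_cores * threads_per_core        # 64 x 2 = 128 threads
total_dimm_slots = sockets * dimm_slots_per_socket    # 8 x 16 = 128 memory slots

print(total_cores, total_threads, total_dimm_slots)
```

The same arithmetic shows why one such box can stand in for several blade servers: a typical dual-socket blade of the era offered a fraction of those threads and memory slots.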

Now for the best of times. All of this virtual and multicore activity is clearly a boon to the networking side of the house, particularly wide-band solutions like 10 GbE. Dell'Oro Group reports that the 10 GbE market rebounded in the first quarter, following a decline in the fourth quarter of 2008. The company did not release specific numbers from its Network Adapters Quarterly Report, although it did say that Intel has regained the lead in adapter card revenue and port shipments, while Broadcom retained its spot as the leader in silicon controllers.

This all makes perfect sense, of course, because as more and more data starts to run through fewer and fewer hardware devices, the focus of data center performance shifts from raw processing power to network agility and speed. Going forward, as cloud technologies allow enterprises to shift resources on a global scale, the question will no longer be "Do I have enough power to handle all this data?", but rather "How can I get this data quickly to my various end-points?"

And in this vein, there doesn't seem to be anyone interested in slowing things down. Mellanox, for example, just unveiled a six-port, multiprotocol 10 GbE physical-layer device that lays the groundwork for a new generation of high-density, low-power switches and pass-through devices. The PhyX supports all 10 Gigabit Ethernet physical layer functions and can be field-upgraded to FCoE with 2, 4, and 8 Gbps Fibre Channel gateway service without hardware modifications.

With such precipitous changes in data center hardware buying patterns, many wonder if things will ever get back to normal. While sales and revenue figures have fluctuated over the years, the hard news this time is that these changes look permanent. Once the recession is over, server sales should pick up, but they will be nowhere near previous levels, because the low utilization rates that once kept refresh cycles churning are gone for good.

The new normal will be relatively low server activity and increasingly fast networks as enterprises position themselves for the cloudy/virtual decade to come.