With all the talk about virtualization and the cloud lately, it’s easy to forget that good old-fashioned hardware still matters in the data center.
However, for the basic unit of the data center, the server, 2012 has not been kind, at least for the people who make and sell them. While virtualization initially gave server sales a bump, longer-term trends are kicking in as enterprises make good on long-held promises to do more with less. According to IDC, factory revenue for the second quarter fell nearly 5 percent to $12.6 billion, the third consecutive quarter of declining sales. The drop was particularly acute in midrange systems, which declined 11 percent, while high-end systems fell 7.6 percent. Volume servers fared best, though they still saw a 2.5 percent slide. IDC expects demand to pick up in the second half of the year as new designs roll off the assembly line.
Part of that action will come on the high end — the very high end. IBM recently unveiled the new zEnterprise EC12 mainframe, the result of a $1 billion development effort aimed at tackling Big Data analytics and cloud-ready workloads. The system boasts a 30 percent improvement in processing power coupled with a flash memory system designed to keep data flowing at extremely high volumes. Baseline models will run a cool $250,000, although full enterprise systems will likely cost a good deal more. To its advantage, however, the new design won’t require any architectural changes to data centers housing existing z machines.
On the low end, we have a pair of new ThinkServers from Lenovo. The RD330 and RD430 are aimed at SMB deployments and can ship preconfigured with custom Windows images and BIOS settings, ready to run out of the box. The devices sport Xeon E5-2400 processors and support up to 192 GB of RAM spread across 12 DIMM slots. The main difference between the two is storage capacity: the RD430 provides up to 24 TB in a 2 RU configuration at a starting price of $1,499, while the single-RU RD330 tops out at 8 TB for $1,099.
Aside from actually processing data, probably the most significant way the server affects data center operations is through its power consumption. To that end, the U.S. Environmental Protection Agency is prepping the release of the Energy Star 2.0 spec for servers, which is expected to be more encompassing than the 3-year-old 1.0 version. A key advancement is the inclusion of blade server metrics that account for the high degree of flexibility and scalability in deployment configurations. Graphics processors are also covered, now that they reside in many high-performance machines.
To be sure, as the data center becomes more cloudy, so too does the enterprise’s connection to hardware. Ultimately, the all-cloud data center as advocated by some would have IT concerned merely with application performance and little else. Keeping all the lights blinking in the racks will be someone else’s problem.
But the fact remains that all data comes in contact with hardware somewhere in the cloud. So in that regard it makes sense to keep up to date on what the latest machines can and cannot do — if only to ensure that your data environments are supported by the strongest pillars available.