For a while, it seemed like the importance of hardware in the data center would diminish to the point that it became an afterthought in most IT departments. As both virtualization and the cloud extended their reach over the data environment, tasks like provisioning, resource configuration and ongoing management and operations were quickly moving to the software level.
Any old hardware platform will do, provided it supports the requisite virtual infrastructure and can hit reasonable price points for both the capital and operational budgets.
But that paradigm is starting to break down, at least on the processor level, as new devices hitting the channel show a diverse range of capabilities suited to individual data environments and loads.
Intel, for example, is set to release a new set of Atom chips that the company says are targeted at specific applications like storage, networking and the new class of micro servers. For storage environments, the company offers the Briarwood processor, which supports up to 40 lanes of PCIe 2.0 I/O capacity. Later this year, Intel will issue the 64-bit Avoton, built on the 22 nm Silvermont architecture, which offers low power consumption and broad scale-out capabilities, as well as the Rangeley, designed for mid-level routers, switches and security appliances.
Intel’s main rival in the low-power sphere is ARM, which already runs the show in the white-hot smartphone arena. In the enterprise, ARM seems to be throwing its lot in with the growing open source movement, which is seen by many as a key stepping stone to a truly dynamic, broadly scalable cloud ecosystem. With Facebook, Google and other web-facing firms customizing their own hardware infrastructure around open source, and facing a constant battle to keep costs down, an ARM-based platform that supports, say, Linux, would look very attractive.
At the same time, new generations of graphics processing units (GPUs) are moving into more general-purpose application environments. Nvidia is moving quickly into the enterprise sphere with its GRID platform, which recently saw the addition of the new Visual Computing Appliance, packing 16 GPUs into a small appliance footprint as a means to boost the performance of applications from the likes of Adobe and Autodesk. The next step is the addition of GRID support from top server vendors like HP and IBM, as well as virtualization developers like VMware and Microsoft. Ultimately, the company hopes to play a major role in Big Data analytics, search and even desktop virtualization.
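What lets a GPU serve these general-purpose workloads is its data-parallel execution model: the same small "kernel" function runs simultaneously across thousands of data elements, one thread per element. As a rough sketch (plain Python standing in for real GPU code; the function names are illustrative, and no actual Nvidia API is involved), the classic SAXPY operation looks like this:

```python
# Illustrative sketch of the GPU programming model: a "kernel" is
# written for a single data element, and the hardware runs one
# instance per element in parallel. Here, plain Python simply loops.

def saxpy_kernel(i, a, x, y):
    # One logical "thread": compute a * x[i] + y[i] for a single index.
    return a * x[i] + y[i]

def saxpy(a, x, y):
    # On a GPU, each index would be dispatched to its own thread;
    # sequentially mapping the kernel gives the same result.
    return [saxpy_kernel(i, a, x, y) for i in range(len(x))]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))
# [12.0, 24.0, 36.0]
```

Workloads that decompose this way, such as the analytics and search tasks Nvidia is targeting, are exactly where GPUs can outrun conventional CPUs.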
None of this is to suggest that traditional high-end processors don’t have a future in the virtual/cloud universe. In fact, Fujitsu recently rolled out its latest Sparc machine, the M10 enterprise server, which boasts a 16-core Sparc64 architecture and ranges from a 1U, single-chip model to a full rack of 4U machines featuring 64 chips and more than 76 TB of storage.
Even the trusty Xeon is slated for a series of upgrades, with the new E5 and E7 models gaining improved memory, stability and reliability features under the 22 nm Ivy Bridge architecture. IBM says the upgrades make them indispensable for high-performance, mission-critical workloads. At the same time, Intel is working on new rack and density designs that improve serviceability in high-density configurations while increasing the ability to share power and cooling resources.
Hardware, then, still has a vital role to play as data infrastructure evolves into a more utility-style business resource. Whether it’s in the cloud, the hypervisor or the local server, at some point data has to hit silicon. How the processor is designed, and the type of support environment that surrounds it, will have a lot to do with how well and how fast data can be turned into valuable knowledge.