For all the data infrastructure changes taking place at the software level, some people may conclude that there is nothing more to see in hardware. But while it’s true that virtualization, the cloud and software-defined technologies are giving rise to legions of commodity-based hardware in the data center, there is still much to debate when it comes to the makeup and configuration of hardware infrastructure.
Divergent workloads are what characterize modern data environments, and they are driving the latest trends in hardware. Where once there were primarily database applications and enterprise workflow processing, today’s ecosystem consists of high-speed Web transactions, rich media graphics and video, voice communications, and a host of other functions. For a while, it seemed that software could paper over these differences on standard commodity hardware, but in practice, hardware and software configurations optimized for key data sets stand the best chance of improving performance and lowering power consumption.
A case in point is the microserver. Smaller than a tower, bigger than a rack model, the microserver is highly customizable, easily deployed and hits a price point that makes it ideal for small or midsize offices. To date, these appliance-like devices haven’t had the chops to handle enterprise-class workloads, but that is changing as new models like HP’s MicroServer Gen8 hit the channel. The HP system not only sports new Intel Core i3 processors but also provides up to 16GB of memory, two GbE ports, four external USB ports (two 3.0 and two 2.0), plus an internal 2.0 port and a microSDHC slot for boot media. An additional port for out-of-band management functions, along with a companion 8-port switch for network aggregation, rounds out what HP hopes will be a powerful but low-cost solution for advanced virtual and cloud environments.
The rise of microservers, in fact, is already leading to changes at the component and even silicon levels as designers seek to, in the words of Intel’s Raejeanne Skillern, “optimize infrastructure for targeted applications and workloads.” The company’s new Avoton CPU is the latest step in this direction. The eight-core SoC, to be released as the Atom C2000 later this year, features built-in Ethernet, USB 2.0, SATA and PCIe Gen2 controllers, as well as a cryptographic accelerator for improved security processing. The aim is to provide a common platform for both microservers and new component-level rack-server configurations that will ultimately allow enterprises and cloud providers to mix and match solutions in support of optimized workloads. Expect the first C2000 iterations to show up in new SeaMicro platforms in early 2014.
Of course, where Intel treads, so does AMD. The company is set to release its own microserver solutions based on the ARM architecture starting later this year. Dubbed “Seattle,” the design is built around the Cortex-A57 processor that ARM Holdings developed in an effort to expand into the enterprise server market. The device will come in eight- and 16-core options, each supporting up to 128GB of DRAM, and is ultimately slated to replace the company’s Opteron X-series, promising as much as a four-fold increase in performance. The Seattle chip is expected to be followed by the “Berlin” and “Warsaw” chips, which promise increased densities, lower power consumption and greater optimization for enterprise-class and cloud-ready workloads.
The notion of hardware/software integration and optimization is playing out across a range of data center infrastructure platforms, most notably in the new software-defined networking platforms from Cisco and others. The question is whether these designs truly offer added value over commodity-based open source approaches, or whether they are merely attempts by old-line vendors to preserve long-standing business models. The answer will shape data center infrastructure development for at least a generation.