Data infrastructure is getting larger and more distributed. Isn’t it ironic, then, that hardware components are getting smaller and more specialized?
Nowhere is this more evident than in the server farm. While many organizations still employ legions of high-power servers capable of handling large data volumes or of partitioning to accommodate multiple loads, blade and micro server architectures are quickly taking over the large hyperscale facilities that directly support cloud and mobile operations.
According to Research and Markets, the micro server market is poised to grow more than 63 percent per year between now and 2016. Part of this is due to the mad scramble to ramp up infrastructure to handle the growing tide of structured and unstructured data, but another key factor is the need for massive scale-out architectures specifically designed for the large-volume, small-packet traffic generated by web-facing and social media applications. Much of this architecture is expected to consist of miniaturized, modular components that can be stacked and pooled like so many plastic building blocks.
This growth is already making its presence known in the IC market as well. MarketsandMarkets recently released a global survey of micro server chip demand that points to a $3.1 billion market by 2018. In 2012, the company estimated that micro servers had penetrated only 2.3 percent of the overall server market; that figure is expected to reach 28 percent by 2018, divided primarily among Intel's Atom and low-power Xeon models, AMD's new ARM-based parts, and ARM processors from the likes of Calxeda and Applied Micro.
None of this, however, should imply that micro architectures are the future and that macro solutions are headed for the scrap pile. As ZDNet's Conner Forrest pointed out recently, most of the top applications for micro servers involve cloud computing, web-based transactional workloads and other functions that require massively parallel processing of individual data streams rather than great pools of resources handling single, large loads. The odd duck on Forrest's list is Livestream, which is using micro servers to deliver live video streaming events around the world, a change that lowers operating costs but at some expense to service quality.
Micro servers are also adept at satisfying scalability's twin requirement in the data center: increased density. Data environments can only scale to the limit that their facility's footprint allows; after that, you must either add floor space or tie into external infrastructure. Platforms like Supermicro's MicroBlade pack up to 112 Intel Atom C2000 processing nodes into a 6U enclosure, and with seven such enclosures in a standard 42U rack, that equates to 784 independent servers per rack. The enclosure pairs with the MicroBlade SDN switch, which accommodates four Ethernet modules and a control plane; each module supports two 40 Gbps QSFP or eight 10 Gbps SFP+ uplinks plus 56 2.5 Gbps downlinks, enabling broad connectivity without drowning the rack in wires. And node counts are likely to double before long with the advent of more powerful Atom devices. A quick back-of-the-envelope calculation, sketched below, shows how those numbers stack up.
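For readers who want to check the math, here is a minimal sketch of that density and bandwidth arithmetic, assuming a standard 42U rack and the per-module figures quoted above. The numbers are illustrative, drawn from this article rather than vendor specification sheets.

```python
# Back-of-the-envelope density and bandwidth math for a MicroBlade-style
# build-out, using the figures quoted above. Assumes a standard 42U rack;
# values are illustrative, not vendor specifications.

RACK_U = 42                # standard rack height
CHASSIS_U = 6              # one MicroBlade enclosure
NODES_PER_CHASSIS = 112    # Atom C2000 nodes per 6U enclosure

chassis_per_rack = RACK_U // CHASSIS_U               # 7 enclosures
nodes_per_rack = chassis_per_rack * NODES_PER_CHASSIS
print(f"Nodes per rack: {nodes_per_rack}")           # 784 independent servers

# Per switch module: two 40 Gbps QSFP uplinks (or eight 10 Gbps SFP+,
# the same 80 Gbps aggregate) feeding 56 x 2.5 Gbps downlinks.
uplink_gbps = 2 * 40       # 80 Gbps toward the network core
downlink_gbps = 56 * 2.5   # 140 Gbps toward the server nodes
print(f"Oversubscription: {downlink_gbps / uplink_gbps:.2f}:1")  # 1.75:1
```

A 1.75:1 oversubscription ratio is a comfortable fit for the small-packet, web-facing traffic these platforms target, since individual nodes rarely saturate their downlinks simultaneously.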
Micro servers, then, are clearly the new architecture taking hold in the data center, but with data loads and applications diverging in so many directions, they are not likely to become the one and only architecture any time soon.
Enterprise executives will have to get used to the idea that, when it comes to hardware, one size does not fit all anymore.