The microserver is likely coming to a data center near you, but the question remains: in what capacity?
Talk of massive scale-out infrastructure is all the rage, and the microserver would dovetail nicely with this trend. However, hyperscale infrastructure is geared more toward massive, web-facing operations like Google and Facebook than toward the typical enterprise, and those organizations seem content to have their systems built to spec by original design manufacturers (ODMs). So when it comes to the average data center, where and how will microservers have the greatest impact?
According to tech consultant Scott Matteson, microservers will still provide a ready solution for scale-out applications even if they don't rise to the level of hyperscale. Their small size and low power consumption mean they can be used to build highly dense architectures without breaking space limits or power envelopes. But because each unit offers only moderate computing capability, you'll need a lot of them, which can be difficult to architect given the lack of design standards across vendor solutions and the need, in most cases, for customized clustering software. And depending on workload requirements, a single large server running multiple virtual instances may prove both cheaper and more functional.
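The trade-off Matteson describes can be sketched with back-of-envelope arithmetic. The sketch below is purely illustrative: the wattage, core counts, and prices are hypothetical assumptions, not figures from any vendor, and the answer flips depending on what numbers you plug in.

```python
# Back-of-envelope capacity planning: many microservers vs. one large
# virtualized server. All figures are hypothetical assumptions.

micro = {"watts": 30,  "cores": 8,  "unit_cost": 1_000}   # hypothetical microserver node
big   = {"watts": 600, "cores": 64, "unit_cost": 12_000}  # hypothetical large 2-socket server

target_cores = 256  # total cores the scale-out workload needs (assumed)

# Ceiling division: how many of each box covers the target capacity.
n_micro = -(-target_cores // micro["cores"])
n_big   = -(-target_cores // big["cores"])

micro_power = n_micro * micro["watts"]
big_power   = n_big * big["watts"]
micro_cost  = n_micro * micro["unit_cost"]
big_cost    = n_big * big["unit_cost"]

print(f"microservers: {n_micro} nodes, {micro_power} W, ${micro_cost:,}")
print(f"large server: {n_big} node(s), {big_power} W, ${big_cost:,}")
```

With these particular assumptions the microserver fleet draws less power but requires 32 nodes to manage instead of 4, which is exactly where the clustering-software and standards burden Matteson points to comes in.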
Already, though, battle lines are being drawn in the microserver market. Applied Micro Circuits recently took direct aim at none other than Intel in the race to supply processors to device manufacturers. The company's X-Gene is built on the ARM architecture that already powers most of the world's smartphones. AppliedMicro is boasting of more than $1 million in revenues for the X-Gene and claims it already has a backlog of orders for the remainder of the year. For its part, Intel is working on advanced architectures that combine low-power Atom solutions with higher-order Xeon devices.
Meanwhile, server developers are shifting their attention away from general-purpose machines toward specialized architectures for highly targeted workloads. A case in point is IBM's new microserver intended for the DOME radio astronomy project in the Netherlands. The device will be part of an integrated computing environment handling the exascale data loads coming from the Square Kilometre Array (SKA) designed for deep-space observation. Ultimately, the company expects the technology to trickle down to the commercial enterprise market, although that timeline could play out over years, if not decades.
Microservers are also being put to tasks other than traditional workload processing. For instance, Facebook's new Wedge ToR switch, part of the Open Compute Project, is built on a microserver platform, which makes it easier to deploy in high-density rack configurations. Combined with the FBOSS operating system, the device offers the flexibility to provide customized switching solutions that can be quickly provisioned and repurposed to suit a variety of workloads.
The microserver, then, offers an innovative answer to several key emerging challenges in the data center, even if its highly parallel nature makes it unsuitable for many traditional business applications. Still, it's fair to say that the microserver will support many of the next-generation functions the enterprise is looking to deploy, everything from mobile communications to web transaction processing, and will likely become a common facet of the dynamic data environment going forward.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.