
    The 64-Bit ARM and the Return of Specialized Hardware Infrastructure

    Server architecture is poised to undergo some radical changes over the next few years, forcing enterprise managers to think a little bit harder about the data challenges they expect to confront and the best means to handle them.

    This is somewhat counterintuitive to the prevailing wisdom that hardware decisions are rapidly fading into the past and that data requirements are best fulfilled through innovative software designs on the virtual layer and above.  But as technology continues to evolve, it’s becoming clear that a right way and a wrong way still exist when it comes to hardware development and deployment.

    As evidence, I point to the increased interest in ARM processors for the server farm, which by all indications will reach a fever pitch in the coming year with the advent of new 64-bit architectures. Dell, for example, recently gave the first peek at its upcoming machine based on new processors from Applied Micro. The device was shown running the Fedora Linux release tied to storage from PMC-Sierra and is expected to be in the hands of testers early next year.

    HP, likewise, is on pace to launch a 64-bit ARM device, again using Applied Micro chips, as part of its Project Moonshot portfolio aimed at hyperscale Web workloads. The goal is to ramp up processing speeds without pushing power envelopes to unsustainable levels, while providing a common architecture with the legions of tablets and smartphones working their way into enterprise infrastructure. In this new world of pooled resources and highly dynamic load management, efficiency trumps performance, at least when it comes to the individual server.

    But it should be noted that ARM and other low-power architectures like the Intel Atom are not intended as wholesale replacements for traditional server infrastructure. As ZDNet's Mary Branscombe points out, ARM works best with the high-volume, small-packet traffic typically found in online transactional environments, or in single-purpose devices optimized for, say, pulling low-priority data from storage. Indeed, many of these chips may wind up sharing space with higher-power CPUs, where they can take on critical back-end tasks such as networking and I/O management.
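
    To make that division of labor concrete, here is a minimal, purely illustrative Python sketch of how an orchestration layer might steer workloads between low-power ARM nodes and conventional x86 servers. The Workload fields, the pool names and the 64 KB packet threshold are assumptions made for the sake of the example, not anything Dell, HP or Cavium has specified.

        # Hypothetical sketch: routing workloads across a heterogeneous server pool.
        # All names and thresholds here are illustrative; real schedulers in an
        # orchestration layer are far more involved.

        from dataclasses import dataclass

        @dataclass
        class Workload:
            name: str
            io_bound: bool        # dominated by network/storage I/O?
            avg_request_kb: int   # typical payload size per request
            cpu_heavy: bool       # sustained compute (analytics, batch jobs, ...)

        ARM_POOL = "low-power ARM nodes"       # small-packet, transactional, I/O offload
        X86_POOL = "high-performance x86 nodes"  # compute-heavy, general-purpose work

        def place(workload: Workload) -> str:
            """Pick a node pool based on a crude workload profile."""
            if workload.cpu_heavy:
                return X86_POOL
            if workload.io_bound and workload.avg_request_kb <= 64:
                return ARM_POOL
            return X86_POOL  # default to the general-purpose pool

        if __name__ == "__main__":
            jobs = [
                Workload("web front end", io_bound=True, avg_request_kb=8, cpu_heavy=False),
                Workload("batch analytics", io_bound=False, avg_request_kb=512, cpu_heavy=True),
            ]
            for job in jobs:
                print(f"{job.name} -> {place(job)}")

    The point of the sketch is simply that placement decisions hinge on workload profile, not raw clock speed, which is why low-power parts can earn a place in the rack at all.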

    This may be part of the reason why chip designers that have traditionally eschewed the enterprise server market are taking another look. A good example is networking SoC developer Cavium Networks, which recently began showing virtual simulations of a future 64-bit ARMv8 processor named Thunder running Ubuntu 13.10 workloads. The company says the chip will be available next year for scale-out applications. Final specs aren't out yet, but the simulation is available online for software developers who want to get a jump on the platform.

    So, far from producing a giant, homogenized commodity hardware infrastructure, it seems virtualization and the cloud are driving increased specialization into the physical layer. Whether it's ARM, Flash storage or converged infrastructure, an optimal hardware configuration for the care and feeding of select data environments will still be available. But building these disparate infrastructure sets will be just the first task. The real challenge lies further ahead, when IT will have to figure out how to push data to the right infrastructure to get optimal results.

    Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
