To ARM or not to ARM, is that really the question?
That seems to be on the minds of many enterprise executives these days. And in truth, it is not an idle musing. The x86 architecture has served the data center so well over the decades that shifting to another processor has the potential to be enormously disruptive. So the real question is whether user requirements and the nature of the data load itself are likely to change so much that a new processor is in order.
According to tech journalist Timothy Prickett Morgan at EnterpriseTech, the game-changer at play here is scale, or more precisely, hyperscale. When you are talking about data capacity in the petabytes and high volumes of small packet data streams flooding server and network infrastructure, the need for large numbers of low-cost, low-power processors is obvious. Sure, you could accomplish the same thing with a high-end Xeon, but the cost would be astronomical. Hyperscale also goes hand-in-hand with hyper-dense, and for that you need a processor that can churn bits without pushing the heat envelope.
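The economics above can be sketched with a toy calculation. A minimal sketch, assuming purely illustrative figures (none of the numbers below are vendor benchmarks, and the class and method names are mine):

```java
// Back-of-the-envelope comparison of two hyperscale server designs.
// Every figure below is a hypothetical placeholder, not a measured benchmark.
public class FarmMath {

    // Aggregate small-packet requests served per watt of total power draw.
    static double requestsPerWatt(int nodes, double reqPerNode, double wattsPerNode) {
        return (nodes * reqPerNode) / (nodes * wattsPerNode);
    }

    public static void main(String[] args) {
        // Many low-cost, low-power ARM-style nodes...
        double armFarm = requestsPerWatt(100, 8_000, 20);
        // ...versus a few high-end, power-hungry Xeon-style nodes.
        double xeonFarm = requestsPerWatt(10, 40_000, 200);

        System.out.printf("ARM-style farm:  %.0f req/s per watt%n", armFarm);
        System.out.printf("Xeon-style farm: %.0f req/s per watt%n", xeonFarm);
    }
}
```

Under these made-up numbers the dense, low-power farm delivers twice the work per watt; the exact figures matter far less than the shape of the trade-off, which is what drives hyperscale operators toward many small cores.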
The other thing the ARM architecture has going for it is speed. With new 64-bit ARM platforms coming out from AMD and others, the enterprise finally has a platform capable of handling modern production loads. And the speed factor is on the way up. ARM Holdings confirmed recently that it is close to releasing the new CoreLink CCN-508 interconnect design, which supports up to 32 cores and provides 1.6 Tbps of bandwidth. Aimed at high-performance networking applications, the device augments the platform’s block-style approach to integrated data communications with techniques like transport-layer distribution, modular crosspoints and the ability to run at CPU clock speed.
And this isn’t likely to be the last we’ll see from ARM either. The company is preparing to break ground on a new CPU design center in Taiwan’s Hsinchu Science Park that will focus extensively on tailoring the Cortex-M processor line for embedded applications and the Internet of Things (IoT). This may have only a marginal impact on the data center, but think about it: How much smoother will the exchange of data, particularly the Big Data that is the hallmark of the IoT, be if the same basic processor architecture powers both the high-end platforms in the enterprise and the legions of sensors and devices that populate the wider data universe?
At the same time, the army of ARM developers is quickly rolling out new enterprise-class solutions. Cavium, for example, recently teamed up with Oracle to optimize Java SE 8 for the ThunderX processor built on the ARMv8 architecture. Part of the deal involves the development of new multicore stability features for Java, which should make it easier for developers to tailor applications toward Big Data and the IoT. For its part, the ThunderX processor features key high-end capabilities like integrated hardware acceleration, virtual I/O support and scalable Ethernet connectivity, all designed to meet stringent enterprise performance requirements.
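One practical consequence of optimizing Java for ARMv8 is portability: the same compiled bytecode runs unchanged on ARM and x86 JVMs, and an application can check which one it landed on at runtime via the standard `os.arch` system property. A minimal sketch (the class name is mine, not Cavium’s or Oracle’s):

```java
// Minimal sketch of runtime architecture detection in Java.
// The same bytecode runs on both ARMv8 and x86-64 JVMs; os.arch reports
// "aarch64" on 64-bit ARM and typically "amd64" on x86-64 Linux.
public class ArchCheck {

    // Pure string check, split out so it is easy to exercise with any value.
    static boolean isAarch64(String arch) {
        String a = arch.toLowerCase();
        return a.contains("aarch64") || a.contains("arm64");
    }

    public static void main(String[] args) {
        String arch = System.getProperty("os.arch", "unknown");
        System.out.println(isAarch64(arch)
                ? "Running on a 64-bit ARM JVM (e.g. a ThunderX system)"
                : "Running on a non-ARM JVM (os.arch=" + arch + ")");
    }
}
```

In practice most pure-Java code never needs this check; it matters mainly for applications that load native libraries or tune themselves per platform.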
It is clear, then, that ARM is better suited to the large-volume, small-packet loads that accompany Big Data, Web-based transactional environments and the IoT. But while this market may be on the rise, the installed base of traditional database and business productivity applications is still the primary focus of the enterprise – and for these you’ll need high-end, multicore processors like the Xeon.
If anything, the rise of the ARM processor indicates the coming bifurcation of the enterprise data environment – the old and the new – which, for a while at least, will most likely call for separate, but nonetheless integrated, classes of infrastructure.
The real challenge, then, is not to replace x86 with ARM, but to devise innovative ways for the two to work together.