ARM and the Dynamic Data Ecosystem


    It should be eminently clear by now that the hyperscale data center is going to be a very different animal from the traditional enterprise facility. For one thing, it will rely on completely new processor architectures, led by low-power ARM and Atom devices.

    This makes sense considering that massive Google/Facebook-style operations require enormous amounts of processing power, power that would blow the IT budget if delivered on standard x86 architectures. Also, most hyperscale activity involves Web-facing applications, which demand the simultaneous processing of massive volumes of small-packet data rather than the heavy number crunching that has typified data environments in the past.

    But how easy will it be to harness all that disparate processing power? Could it be that as the enterprise embraces low-power processing, savings in hardware infrastructure will merely go toward complicated management layers to maintain some semblance of order among disparate devices?

    Not if ARM has anything to say about it. The company recently released an open spec intended to provide the vendor community with a basis for developing wide-scale ARM architectures. The Server Base System Architecture (SBSA) was developed in conjunction with Canonical, Citrix, Microsoft, Red Hat, Dell, HP and other big names as a means to foster the new 64-bit ARMv8 as a serious rival to the extensive installed base of x86 devices in the data center.

    This news came on the heels of AMD announcing its first ARM processor, which it bills as “the industry’s only 64-bit ARM server from a proven server processor company.” The 64-bit device is available only in prototype form at the moment, although the company promises sample shipments within the next few months. The device, formerly codenamed “Seattle,” will be officially named the Opteron A1100 and will be built on a 28 nm process. Initial configurations include four- and eight-core designs with 64 GB of DRAM and 4/8 MB of shared L2/L3 cache. They will also sport configurable dual DDR3 or DDR4 memory channels with built-in ECC that can run at up to 1,866 mega-transfers per second (MT/s). The SoC design also includes eight lanes of PCIe 3.0, eight SATA 3 ports, dual 10 GbE ports, and support for up to four SODIMM, UDIMM or RDIMM memory modules.
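
    As a back-of-the-envelope check on those memory figures, the peak theoretical bandwidth of dual channels at 1,866 MT/s works out to roughly 30 GB/s. This assumes standard 64-bit-wide DDR data channels, a detail not stated in AMD's announcement:

```python
# Peak theoretical bandwidth for dual DDR3/DDR4 channels at 1,866 MT/s.
# Assumes standard 64-bit (8-byte) data channels; ECC bits carry no payload.
transfers_per_second = 1_866_000_000
bytes_per_transfer = 8
channels = 2

peak_bytes = transfers_per_second * bytes_per_transfer * channels
print(f"peak bandwidth: {peak_bytes / 1e9:.1f} GB/s")  # ~29.9 GB/s
```

    Real-world throughput will land well below this ceiling, but the figure gives a sense of the data-movement capacity behind an eight-core ARM server part.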

    The thing to keep in mind in all this, however, is that ARM architectures are not likely to function as mere x86 replacements. Instead, they will probably lay the foundation for an entirely new data center running ARM-specific applications. VMware’s Raghu Raghuram, for one, points out that while it is possible to port x86 apps to ARM, doing so is a risky and expensive process, and one that will probably prove unnecessary in the software-defined data center (SDDC) anyway. But since key tools that VMware says are needed for SDDC development, namely the NSX network virtualization and VSAN platforms, are still undergoing early trial deployments, it might be better for the average enterprise to wait a bit longer before contemplating the wholesale replacement of x86.
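
    Raghuram's point about porting can be illustrated with a minimal sketch: an application that depends on architecture-specific native libraries must handle each target explicitly, and an ARM build of a given dependency simply may not exist yet. The library names below are hypothetical, used only to show the shape of the problem:

```python
import platform

# Map machine identifiers to the native library build an app would load.
# These filenames are illustrative placeholders, not real packages.
NATIVE_BUILDS = {
    "x86_64": "libfastpath-x86_64.so",
    "aarch64": "libfastpath-aarch64.so",  # 64-bit ARMv8
}

def select_native_build(arch=None):
    """Pick the native build for this machine, or fail loudly."""
    arch = arch or platform.machine()
    try:
        return NATIVE_BUILDS[arch]
    except KeyError:
        raise RuntimeError(
            f"no native build for {arch!r}; a port or recompile is required"
        )

print(select_native_build("aarch64"))
```

    Interpreted code often moves across architectures with little friction; it is the native layers underneath (compiled extensions, SIMD intrinsics, inline assembly) that make wholesale x86-to-ARM migration risky and expensive.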

    Ultimately, however, the data center will have to change if it is to keep up with cloud computing, the Internet of Things and everything else that is leading to what tech writer Simon Bisson describes as “ambient computing.” Although it is convenient to think in terms of ARM vs. x86, the real story here is the way in which the new data framework will process information and the power it will take to fuel an environment in which literally everything we own is constantly feeding data into the infrastructure. Like air, digital technology will be everywhere, and it will need an extremely efficient, highly dynamic processing architecture to handle it all.

    Given all of this, then, it wouldn’t be prudent for the enterprise industry to make a mad dash for ARM-based infrastructure simply because it’s “the next big thing.” Rather, ARM would better serve as the basis for a longer-term rollover of legacy infrastructure into a dynamic, software-defined data ecosystem.

    This future is not that distant, but its foundations must be laid carefully if it is to provide adequate service over the coming decades.

    Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
