    ARMs, GPUs, CPUs: The Growing Diversity of Data Center Chip Architectures

    Enterprise workloads have been growing more diverse for some time. These days, organizations are routinely juggling traditional workflows, mobile and cloud-facing applications, high-speed transactional data, and even a smattering of Big Data-style analytics.

    It’s no surprise, then, that the x86 architecture’s hold on enterprise data infrastructure is starting to crack. But while new technologies like the ARM architecture are commonly cited as the main challenger to the traditional CPU, the fact is that diverse data requirements are leading to equally diverse chip technologies, ushering in the rather unpleasant specter of a multi-processor hardware environment in the not-too-distant future.

    A case in point is the graphics processing unit (GPU), which is not only showing prowess outside of its traditional video game and image processing fields but is taking on some of the toughest assignments the enterprise has to offer. Google recently open-sourced its TensorFlow artificial intelligence engine, which relies on GPUs both for the learning side of the system and for service delivery, says Wired’s Cade Metz. This is significant because GPUs naturally lend themselves to the image-related tasks of AI, such as facial recognition, but Google is aiming for greater efficiency on the execution side as well, which is likely to become more complex as data points and client devices become increasingly diverse and distributed.
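    To make the GPU angle concrete, here is a minimal sketch of pinning a computation to a GPU in TensorFlow’s Python API (shown in the 1.x-style graph mode); the device strings and tensor shapes are illustrative rather than drawn from Google’s own models:

        import tensorflow as tf  # TensorFlow 1.x-style graph API

        # Pin a matrix multiply to the first GPU; the same graph could be
        # pinned to '/cpu:0' instead without changing anything else.
        with tf.device('/gpu:0'):
            a = tf.random_normal([1024, 1024])
            b = tf.random_normal([1024, 1024])
            product = tf.matmul(a, b)

        # log_device_placement reports which processor each op actually ran on.
        with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
            sess.run(product)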

    Traditional enterprise vendors are already turning to GPUs as they seek to leverage legacy platforms for hyperscale and high-performance environments, and this is offering chip makers like AMD a wedge into advanced architectures that Intel is hoping to address through more conventional means. AMD has teamed up with Hewlett Packard Enterprise to incorporate the FirePro GPU into the ProLiant Gen9 platform in support of HPC workloads. The FirePro is available with up to 16 GB of GDDR5 memory and can be tuned for single- or double-precision operation. It also supports leading development environments like OpenCL, OpenACC and OpenMP.
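    For a sense of how applications actually target a GPU like the FirePro through OpenCL, the sketch below uses the PyOpenCL bindings to run a simple vector add on whatever OpenCL device is available; the kernel and buffer names are illustrative and no vendor-specific tuning is shown:

        import numpy as np
        import pyopencl as cl

        # Two input vectors on the host (single precision).
        a = np.random.rand(1024).astype(np.float32)
        b = np.random.rand(1024).astype(np.float32)

        ctx = cl.create_some_context()   # picks an available OpenCL device
        queue = cl.CommandQueue(ctx)

        mf = cl.mem_flags
        a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
        b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
        out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

        # The kernel itself is plain OpenCL C, compiled at runtime for the device.
        program = cl.Program(ctx, """
        __kernel void add(__global const float *a,
                          __global const float *b,
                          __global float *out) {
            int gid = get_global_id(0);
            out[gid] = a[gid] + b[gid];
        }
        """).build()

        program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

        result = np.empty_like(a)
        cl.enqueue_copy(queue, result, out_buf)   # copy the sum back to the host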

    For its part, Intel is doubling down on the x86 for HPC applications in the form of the Xeon D, which recently saw the latest additions to the D-1500 family. The SoC is a 64-bit, 14 nm device aimed at networking, storage and other aspects of IoT processing, and it offers the ability to maintain a single instruction set from the data center core to the edge. Initial versions of the D-1500 include quad-, six- and eight-core configurations with support for up to 128 GB of RAM and dual 10 GbE ports, and the company plans to release 12- and 16-core devices in the coming year.


    As for ARM, it is alive and well in the data center as organizations pursue scale-out, distributed infrastructure in which performance will increasingly be measured against the workload rather than the processor itself, says AppliedMicro’s John Williams. And as infrastructure and resource management become more automated, it won’t matter what instruction set is running as long as the job is being handled in the most efficient, effective manner. For this reason, he says, it will be in the enterprise’s best interest to deploy a variety of chip platforms so as to provide as tailored an environment as possible for each and every application that users call up. The end result will be greater performance, higher efficiency and lower costs across an increasingly diverse processing infrastructure.
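    That workload-matching idea can be sketched in a few lines of Python; the profiles and chip labels below are hypothetical, standing in for whatever an automated resource manager would actually track:

        # Hypothetical dispatcher: route each job class to the chip pool that suits it.
        # The profiles and labels are illustrative, not tied to any real scheduler.
        WORKLOAD_PROFILES = {
            "deep-learning-training": "gpu",   # massively parallel arithmetic
            "web-scale-io": "arm",             # many light, power-sensitive threads
            "transactional-database": "x86",   # strong single-thread performance
        }

        def dispatch(job_type, default="x86"):
            """Return the processor pool a job should be scheduled on."""
            return WORKLOAD_PROFILES.get(job_type, default)

        for job in ("deep-learning-training", "web-scale-io", "video-transcode"):
            print(job, "->", dispatch(job), "pool")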

    The biggest danger in a multi-processor data environment is the rise of chip-level silos across the enterprise. If GPUs can’t talk to CPUs, which can’t talk to ARMs, it will be impossible to harness the vast power that these new platforms bring to the table in a coordinated fashion. The good news is that software evolves faster than hardware, and platforms like TensorFlow are already being engineered to function across multiple processors.
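    TensorFlow’s device-placement options illustrate the point (again shown with the 1.x-style API): with soft placement enabled, ops requested on a GPU fall back to an available CPU when no GPU is present, so the same code runs unchanged across mixed hardware:

        import tensorflow as tf  # TensorFlow 1.x-style graph API

        with tf.device('/gpu:0'):   # requested device
            x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
            y = tf.matmul(x, x)

        # allow_soft_placement lets TensorFlow substitute an available device
        # (typically the CPU) when the requested GPU does not exist on this machine.
        config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
        with tf.Session(config=config) as sess:
            print(sess.run(y))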

    At the end of the day, the data center is only as effective as its silicon allows it to be, and with data architectures becoming increasingly specialized, it is only natural that they would require unique hardware capabilities. The one-size-fits-all data center was highly effective in the last century, but it will need much more diversity within its core computing capabilities if it hopes to survive in the next.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
