A Little Silicon Goes a Long Way Toward Energy Efficiency

    Sometimes little things can contribute quite a lot to the big picture.

    This is, in fact, the basis for pointillist art, but it also applies to the data center, where a major chunk of the past decade's energy efficiency gains is the result of low-power processors in server, storage and networking architectures. It is also why the ARM architecture has garnered so much attention in the data center these days.

    The question facing most CIOs, though, is not whether to use ARM, but where and how to deploy it.

    In Europe, ARM Ltd. and STMicroelectronics are spearheading a drive to implement microserver and server-on-chip solutions throughout the data center. The effort, called the Euroserver Project, aims to shrink the size of server components, which will then lead to a corresponding reduction in energy consumption through changes to memory, I/O, interconnects and software. The group is currently investigating a new 3-D integration scaling technique that maximizes the number of ARM cores in a given space and then matches memory and I/O to the core count.

    This comes at a time when Qualcomm is poised to make a late entry into the ARM server market. The company announced this week that it was joining AMD, Marvell and others in the enterprise space because the capabilities currently making their way into smartphone and tablet solutions are sophisticated enough to handle key server applications, namely, edge devices, file and print servers and certain HTML functions. The company has yet to announce specific products or roadmaps, however.

    Conventional thinking, though, is that while ARM chips are adept at the select applications mentioned above, the heavy lifting of advanced applications like database processing, data warehousing and high-end ERP and CRM solutions is best left to traditional processors like the Xeon or even a GPU architecture. But is that still true?

    According to Kevin Morris of the Electronic Engineering Journal, field programmable gate arrays (FPGAs) offer much greater speed and dramatically lower energy consumption than even today’s advanced GPU platforms, provided you can develop the proper look-up tables (LUTs) and other key features for the algorithm at hand. As Big Data fuels a need for more big cores, power envelopes will only increase even if those cores improve the data/energy ratio. With an FPGA on hand, though, a customized hardware solution can outshine all others, save for a customized ASIC or ASSP, neither of which is reprogrammable. The key, of course, is developing the logic that would make this happen, but as it turns out there have been significant developments on this front as well.


    At the recent Supercomputing 2014 conference in New Orleans, Xilinx announced the SDAccel development environment for OpenCL, C and C++, which the company claims offers a 25-fold improvement in performance per watt using FPGAs. The system is part of the company’s SDx platform, which includes libraries, development boards, and a CPU/GPU-like development and run-time environment. In this way, even programmers with little or no FPGA experience can optimize workflows and applications on an FPGA platform, with automatic instrumentation insertion and other techniques for supported development targets. Existing CPU/GPU applications can also be migrated to an FPGA while retaining their original code.

    No matter how many layers of abstraction you create, the success or failure of any data initiative relies on the relationship between software and silicon. The enterprise has been working diligently to reduce power consumption in the data center without hampering performance, and many observers have started to complain that most of the significant gains have already been made.

    Greater reliance on ARM for Web transactions and other small-packet workloads and FPGAs for heavy database applications just might prove them wrong, though.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.
