Chip Makers Vie for Machine-Learning Dominance


The enterprise will most certainly become cloudier and more intelligent as the decade unfolds, but of the two, the case can be made that the intelligent technologies under development today will have the more far-reaching impact on data operations.

Many of these artificially intelligent machine-learning capabilities are being programmed into silicon, placing them at the foundation of all the virtual, abstract data architectures that follow. This is also leading to an upheaval of sorts for the chip industry, as demand for greater system autonomy and self-service functionality shifts the focus away from raw power toward more nuanced data handling and coordinated processing.

AMD, for one, is finding that a renewed focus on deep learning and parallel processing is one of the keys to survival in an increasingly competitive industry. The company recently teamed up with Google to deploy its Radeon GPUs in support of neural networks and other advanced constructs that drive performance and streamline operations across hyperscale infrastructure. Starting in 2017, Google plans to deploy the FirePro S9300 x2 GPU to support the Google Compute Engine and Google Cloud Machine Learning services, according to Forbes. AMD also recently signed a similar deal with Chinese ecommerce leader Alibaba.

But just as AMD had to butt heads with market-leader Intel in the CPU space, so too must it now go head-to-head with Nvidia, which is emerging as a leader in enterprise GPU markets. Digital Trends says the company recently linked up with IBM to pair the Power8 processor with a range of Nvidia GPUs over the NVLink interconnect for deep learning workloads. The system enables data speeds of up to 80 GBps, more than double what today’s x86 servers enjoy with PCI Express. IBM is looking to implement the technology in its PowerAI platform, which unites several deep learning frameworks and libraries, such as Caffe and OpenBLAS, under a single Ubuntu package.
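The "more than double" claim can be sanity-checked with quick arithmetic. The NVLink figure comes from the article; the PCIe figure is an assumption on our part, using PCIe 3.0 x16's roughly 15.75 GB/s per direction as a typical value for today's x86 servers:

```python
# Back-of-the-envelope check of the bandwidth comparison above.
nvlink_gbps = 80.0                   # NVLink aggregate, per the article
pcie3_x16_bidir_gbps = 2 * 15.75     # assumed PCIe 3.0 x16, both directions
ratio = nvlink_gbps / pcie3_x16_bidir_gbps
print(round(ratio, 2))               # roughly 2.5x, i.e. more than double
```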

Meanwhile, other intelligent platforms are emerging on the field-programmable gate array (FPGA), which provides more adaptable hardware due to its ability to be reconfigured after deployment. Enterprise Tech reports that chip-designer Xilinx recently provided HPC cloud provider Nimbix with a range of analytics, machine-learning and rich media capabilities under a “reconfigurable acceleration stack” that streamlines programming for compute-intensive workloads. The setup will allow users to access a newly reconfigured compiler that supports various OpenCL frameworks, enabling C and C++ kernels that span FPGAs, CPUs and GPUs working in tandem. At the same time, a new set of libraries brings in deep neural network support, as well as a SQL-based compute kernel.
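The OpenCL model that such a stack targets is data-parallel: a kernel function runs once per work-item, and the runtime maps work-items onto FPGA, CPU or GPU resources. A minimal Python emulation of that model (the kernel name and the sequential loop are illustrative; real kernels would be written in OpenCL C or C++ and launched in parallel by the runtime):

```python
def vadd_kernel(gid, a, b, out):
    # In OpenCL C this body would live in a __kernel function;
    # gid plays the role of get_global_id(0), identifying one work-item.
    out[gid] = a[gid] + b[gid]

a, b = [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]
out = [0.0] * len(a)
for gid in range(len(a)):   # the OpenCL runtime would launch these in parallel
    vadd_kernel(gid, a, b, out)
print(out)                  # [11.0, 22.0, 33.0]
```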

InfiniBand, too, is emerging as a key element in intelligent systems development. Mellanox is set to begin shipping a new architecture that pushes throughput to 200 Gbps with an eye toward accelerating machine learning and other HPC functions. As noted by Computerworld, the HDR InfiniBand platform will debut on three products early next year: the ConnectX-6 adapter, the Quantum switch and the LinkX transceiver. The system can be implemented across any combination of CPUs, including Power and ARM devices, with up to 40 ports of 200 Gbps connectivity for a total switching capacity of 16 Tbps.
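The quoted switching capacity follows from the port math, assuming the 16 Tbps figure counts both directions of full-duplex traffic (an interpretation on our part, not stated in the article):

```python
ports = 40
gbps_per_port = 200
one_way_tbps = ports * gbps_per_port / 1000  # 8 Tbps aggregate, one direction
full_duplex_tbps = 2 * one_way_tbps          # 16 Tbps with both directions counted
print(full_duplex_tbps)                      # 16.0
```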

While a true “thinking” computer is not in the cards any time soon, these new high-powered architectures will produce a marked shift in data architectures toward greater adaptability and higher levels of autonomy than exist today. And as time goes by, they will become more adept at handling routine matters, upending the traditional lifecycle that leads to technological obsolescence.

And ultimately, they may finally bring about the notion of the data environment as a singular entity within the enterprise organizational structure – a far cry from today’s collection of systems and platforms that requires exorbitant amounts of hands-on management.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
