Advances in artificial intelligence (AI) are coming fast and furious these days, but IT organizations would be well-advised to play both the AI long game and the short game. The short game today is focused mainly on making multiple types of deep learning engines available as cloud services. Most of those AI platforms are currently powered by a mix of traditional RISC processors, such as the Power processors IBM uses to drive Watson, and graphics processing units (GPUs). Intel is not going to sit idly by while what amounts to a new class of application workloads drives demand for alternative processor technologies. Intel first moved to acquire Nervana, a provider of ASIC processors optimized to run AI applications. Since then, Intel has made it clear that it will employ the core software technology developed by Nervana not only to power a new generation of processors designed to counter RISC processors and GPUs in the cloud, but also to infuse that technology across a broad range of Intel Xeon-class processors.
Over the long term, that approach is likely to have the more profound impact on every aspect of IT. Instead of moving data into AI platforms residing in the cloud, Barry Davis, Intel general manager for high performance computing (HPC) and networks, says Intel is essentially making a distributed computing case for bringing AI technologies to where the data already resides. That's not to say Intel won't compete for AI workloads in the cloud. But Davis says AI software is, at its core, just another type of workload that needs to be distributed. Just as it's generally better to bring the code to the data, the same principle applies to AI workloads.
“AI workloads depend on scale-out architectures,” says Davis. “Everything ties back to parallel processing.”
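The scale-out pattern Davis describes, shipping the computation to where each partition of data lives and combining only the small results, can be sketched in miniature with Python's standard multiprocessing module. This is an illustrative toy, not Intel's software; the shard layout and function names are invented for the example:

```python
from multiprocessing import Pool

# Toy stand-in for data that already "resides" in separate partitions.
SHARDS = [
    list(range(0, 1000)),
    list(range(1000, 2000)),
    list(range(2000, 3000)),
]

def local_sum(shard):
    """The code travels to each shard; only a small aggregate travels back."""
    return sum(shard)

def distributed_sum(shards):
    # Each worker processes its own partition in parallel (scale-out),
    # and only the per-shard partial results are combined centrally.
    with Pool(processes=len(shards)) as pool:
        partials = pool.map(local_sum, shards)
    return sum(partials)

if __name__ == "__main__":
    print(distributed_sum(SHARDS))
```

The point of the sketch is the traffic pattern: the full data never moves, only the function and a few integers do, which is the same economics that favor running AI workloads where enterprise data already sits.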
To drive those workloads, Intel is pursuing a dual strategy. In the first half of 2017, Intel plans to offer a family of processors, dubbed Lake Crest, optimized for deep learning applications that will run mainly in the cloud. Over a somewhat longer term, Intel will also provide a Knights Crest family of Xeon processors infused with Nervana technology.
Intel is also pouring resources into a range of HPC technologies that will ultimately provide the foundation on which Intel will first optimize processors for machine learning algorithms. As Intel gains proficiency with those technologies at the high end of the computing spectrum, those advances will trickle down into more general-purpose processor technologies. In the meantime, at the recent Supercomputing 2016 conference, Intel previewed a next generation of Xeon-class processors, codenamed Skylake, that supports Intel Advanced Vector Extensions, floating point calculations, encryption algorithms, and an integrated Intel Omni-Path Architecture for high-speed networking on the same multicore processor family.
To a certain degree, Intel is obviously playing catch-up when it comes to AI. But it is also apparent that Intel is staying true to its historic core computing strategy, betting that in AI, as elsewhere, a long game based on distributed computing will continue to carry the day.