Intel Outlines Machine Learning Ambitions


    One of the less appreciated aspects of recent advances in machine learning algorithms is how much they depend on raw processing horsepower, which has increased by several orders of magnitude in the past few years. While many developers are now using these algorithms to build advanced applications, few of them have fully taken into account Intel’s ambitions when it comes to machine learning.

    As part of the formal launch of Intel Xeon Phi processors at the ISC High Performance 2016 conference today, Intel made it clear that it intends to embed a broad range of machine learning capabilities within its family of processors. Charlie Wuischpard, vice president and general manager of Intel’s High Performance Computing (HPC) Platforms Group, says the basic idea is to improve the performance of applications that make extensive use of machine learning algorithms, which almost by definition consume massive amounts of compute resources.

    The Intel Xeon Phi family is essentially intended to function as a complementary set of co-processors to general-purpose Intel Xeon processors. Rather than depending on PCIe-attached accelerators or dedicated graphics processing units (GPUs), Intel envisions developers making use of an Intel Scalable System Framework (Intel SSF). Intel says Intel Xeon Phi processors make use of 16GB of high-bandwidth memory to deliver up to 500 GB/s of sustained memory bandwidth for memory-bound workloads, along with a dual-port Intel Omni-Path Architecture (Intel OPA) fabric to improve the performance of parallel applications.
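    To see why sustained memory bandwidth matters more than peak arithmetic throughput for the workloads Intel is targeting, consider a minimal sketch (not Intel code, and the array sizes are arbitrary) of a memory-bound kernel: the classic STREAM-style triad moves far more bytes than it performs floating-point operations, so memory bandwidth, not compute, sets its speed limit.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* STREAM-style triad: a[i] = b[i] + s * c[i].
       Each iteration reads 16 bytes, writes 8 bytes, and performs only
       two floating-point operations -- roughly 0.08 FLOP per byte moved.
       Kernels with such low arithmetic intensity are bound by memory
       bandwidth, which is what high-bandwidth on-package memory targets. */
    void triad(double *a, const double *b, const double *c, double s, long n) {
        for (long i = 0; i < n; i++)
            a[i] = b[i] + s * c[i];
    }

    int main(void) {
        const long n = 1000000;
        double *a = malloc(n * sizeof(double));
        double *b = malloc(n * sizeof(double));
        double *c = malloc(n * sizeof(double));
        if (!a || !b || !c)
            return 1;

        for (long i = 0; i < n; i++) {
            b[i] = 1.0;
            c[i] = 2.0;
        }

        triad(a, b, c, 3.0, n);          /* every element becomes 1 + 3*2 = 7 */
        printf("a[0] = %f\n", a[0]);

        free(a);
        free(b);
        free(c);
        return 0;
    }
    ```

    On hardware like Xeon Phi, the time this loop takes is governed almost entirely by how fast the three arrays can stream through memory, which is why a sustained-bandwidth figure is the headline specification for memory-bound workloads.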

    Applications that depend on machine learning algorithms clearly represent a major new opportunity for developers and the IT organizations that will ultimately run those applications. The one thing those developers should closely monitor is which machine learning algorithms Intel plans to embed in its processors. The equivalent of an algorithm a developer has spent years optimizing might soon be generally available to all, courtesy of Intel.

    There’s no doubt that machine learning algorithms are about to transform almost every aspect of IT. But as is often the case, a major advance in software winds up becoming an embedded feature of a processor as soon as its use cases prove broadly applicable.

    Mike Vizard
    Michael Vizard is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.
