IBM is betting that rising demand for applications infused with artificial intelligence (AI) is about to drive a spike in demand for its Power Systems servers. IBM today unveiled a new series of servers based on next-generation POWER9 processors that are optimized for the deep learning algorithms driving many of the latest AI applications.
Sumit Gupta, vice president of high performance computing (HPC), AI and machine learning at IBM, says the AC922 Power Systems servers are also designed from the ground up to move data 10 times faster than an x86 server because they incorporate technologies such as PCI-Express 4.0 and OpenCAPI, as well as next-generation NVLink developed by NVIDIA. That combination makes it possible to attach more graphics processing units (GPUs) alongside the POWER9 processors in the AC922 Power Systems.
IBM, Gupta adds, is among the server vendors incorporating PCI-Express 4.0, which provides a signaling rate as high as 16 GT/s per lane.
“It can move more than twice as much data,” says Gupta.
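For a rough sense of scale, the sketch below compares the theoretical throughput of a 16-lane PCI-Express 3.0 link against a 16-lane PCI-Express 4.0 link. The lane count and the 128b/130b line encoding used here are general PCI-Express figures, not AC922-specific numbers, and real-world throughput will be lower once protocol overhead is accounted for.

```python
# Back-of-envelope comparison of theoretical PCI-Express link throughput.
# Assumes 128b/130b line encoding (used by both PCIe 3.0 and 4.0) and a
# 16-lane link; actual throughput is lower due to protocol overhead.

ENCODING_EFFICIENCY = 128 / 130  # payload bits per transferred bit
LANES = 16

def link_bandwidth_gb_s(transfer_rate_gt_s: float) -> float:
    """Theoretical one-direction bandwidth of an x16 link in GB/s."""
    bits_per_second = transfer_rate_gt_s * 1e9 * ENCODING_EFFICIENCY * LANES
    return bits_per_second / 8 / 1e9  # bits -> bytes -> GB

pcie3 = link_bandwidth_gb_s(8.0)    # PCIe 3.0: 8 GT/s per lane
pcie4 = link_bandwidth_gb_s(16.0)   # PCIe 4.0: 16 GT/s per lane

print(f"PCIe 3.0 x16: ~{pcie3:.1f} GB/s")  # ~15.8 GB/s
print(f"PCIe 4.0 x16: ~{pcie4:.1f} GB/s")  # ~31.5 GB/s
```

Doubling the per-lane transfer rate doubles the theoretical link bandwidth, which is the comparison behind Gupta's claim.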
Gupta says that capability is critical when it comes to training AI models, which must churn through massive amounts of data to learn how to optimize a process.
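To illustrate why that matters, the hypothetical calculation below estimates how much time is spent simply streaming a training data set from host memory to an accelerator over many passes at different link speeds. The data set size, epoch count and bandwidth values are illustrative assumptions, not published AC922 benchmarks.

```python
# Hypothetical illustration: time spent moving a training data set from host
# memory to an accelerator once per epoch at different link bandwidths.
# All values below are assumptions chosen for illustration only.

DATASET_GB = 2_000   # assumed training data set size (2 TB)
EPOCHS = 50          # assumed number of passes over the data

def streaming_hours(bandwidth_gb_s: float) -> float:
    """Hours spent purely on data movement across all epochs."""
    return DATASET_GB * EPOCHS / bandwidth_gb_s / 3600

for label, bandwidth in [("~16 GB/s link", 16.0), ("~32 GB/s link", 32.0)]:
    print(f"{label}: {streaming_hours(bandwidth):.1f} hours of data movement")
```

The faster the link, the smaller the share of a training run that is spent waiting on data transfers rather than computation.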
Longer term, Gupta says, IT organizations should also expect to see IBM employ the same architecture to pair POWER9 processors with other types of accelerators, such as field-programmable gate arrays (FPGAs), as well as with new memory technologies.
It’s too early to say how many AI applications will be driven by deep learning algorithms versus more garden-variety machine learning algorithms. Most deep learning algorithms will be used to drive AI applications built on neural networks, such as digital assistants. But regardless of the type of algorithm employed, it’s already clear that just about every application imaginable is about to be infused with one or the other.