
    NVIDIA Makes Case for Tighter Integration Across GPUs, DPUs, and CPUs

    NVIDIA today at its online GPU Technology Conference (GTC) for 2021 launched a series of initiatives that promise to make it much less expensive to run artificial intelligence (AI) and other types of data-intensive workloads.

    As part of that effort, NVIDIA announced a series of collaborations that pair its GPUs and software with Arm-based CPUs, enabling developers to create applications that dynamically leverage both classes of processors using compilers, libraries, and tools NVIDIA will provide beginning in the third quarter. IT vendors and cloud service providers that have committed to employ that capability include Amazon Web Services (AWS), MediaTek, and Fujitsu. NVIDIA is also in the process of trying to acquire Arm for $40 billion.

    The company today also unveiled the next iteration of its NVIDIA DGX SuperPOD platform, which now includes support for NVIDIA BlueField-2 data processing units (DPUs), a class of processors from NVIDIA that offloads networking, security, and storage tasks from CPUs. NVIDIA also added NVIDIA Base Command, a console through which IT teams can securely access and operate DGX SuperPOD platforms.

    NVIDIA also unveiled the NVIDIA BlueField-3, a next-generation DPU that the company says delivers the equivalent of the processing power of 300 CPU cores. The company also showcased Morpheus, a new framework that makes it simpler for security vendors to build faster virtual appliances on top of BlueField DPUs.

    Read next: How AI Will Be Pushed to the Very Edge

    NVIDIA’s GPU Additions

    NVIDIA has also extended its lineup of GPUs certified to run VMware vSphere to include two more affordable platforms, the A10 and A30. The goal is to make it less expensive to move data-intensive workloads running on VMware onto a GPU-based platform, says Manuvir Das, head of enterprise computing for NVIDIA. Atos, Dell Technologies, GIGABYTE, H3C, Inspur, Lenovo, QCT and Supermicro have all committed to providing servers based on this architecture.

    Ultimately, Das says the goal is to make it possible for applications to seamlessly employ GPUs and Arm-based processors to run AI and other classes of data-intensive workloads as required.

    “The way AI will proliferate in the enterprise will be by people not having to know the hardware,” says Das.  

    Read next: NVIDIA and ARM: A Potential Game Changer

    NVIDIA Shows Off Grace

    NVIDIA also announced its first data center CPU based on the Arm architecture, which is due out in early 2023. Dubbed Grace, the CPU will deliver 10x the performance of today’s fastest servers, the company claims, using fourth-generation NVIDIA NVLink interconnect technology that provides a record 900 GB/s connection between Grace and NVIDIA GPUs and enables 30x higher aggregate bandwidth than the fastest servers available today.

    Grace will also utilize an innovative LPDDR5x memory subsystem that will deliver twice the bandwidth and 10x better energy efficiency compared with DDR4 memory, NVIDIA claims. Grace will also enable unified cache coherence with a single memory address space for both the system and GPU memory to simplify programmability.
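
    To make the programmability point concrete, the short sketch below shows what a single shared address space looks like to a developer today using existing CUDA managed memory: one allocation is visible to both the CPU and the GPU, with no explicit copies. It is a minimal, illustrative example of the concept, not a depiction of Grace’s hardware cache coherence.

        // Minimal sketch: one pointer valid on both CPU and GPU.
        // Uses CUDA managed memory to illustrate a shared address space;
        // this is not Grace-specific.
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void scale(float *data, int n, float factor) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) data[i] *= factor;                // GPU writes through the shared pointer
        }

        int main() {
            const int n = 1 << 20;
            float *data = nullptr;
            cudaMallocManaged(&data, n * sizeof(float)); // one allocation, visible to CPU and GPU

            for (int i = 0; i < n; ++i) data[i] = 1.0f;  // CPU initializes in place

            scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);
            cudaDeviceSynchronize();                     // wait for the GPU before the CPU reads

            printf("data[0] = %f\n", data[0]);           // CPU reads the GPU result directly
            cudaFree(data);
            return 0;
        }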

    The Swiss National Supercomputing Centre (CSCS), Hewlett Packard Enterprise (HPE) and NVIDIA announced today they will employ Grace to build a supercomputer based on the HPE Cray EX that will come online in 2023.

    NVIDIA’s Jarvis Framework

    Finally, NVIDIA announced the general availability of the NVIDIA Jarvis framework, which provides developers with a set of pre-trained AI models based on deep learning algorithms that can be employed to drive interactive conversations between natural language processing (NLP) engines and end users.

    It’s also only a matter of time before every application is making use of some type of AI capability, says Das. As such, he notes, IT teams today are making strategic platform decisions that will have implications for the next several decades.

    It may be a while before the grand vision of GPUs and Arm-based CPUs laid out by NVIDIA finds its way into the average enterprise. However, GPUs and DPUs will soon become just as common as traditional CPUs are today.

    Read next: Natural Language Processing Will Make Business Intelligence Apps More Accessible

    Mike Vizard
    Michael Vizard is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.
