
    NVIDIA Showcases Impressive Advances in Data Science, HPC, VR

    This week, I’m at the GTC (GPU Technology Conference) keynote, given by NVIDIA CEO Jensen Huang. This is a big event this year: NVIDIA has been moving aggressively into the AI, data science, HPC, robotics, and automotive spaces, and this keynote is a showcase for how far it has come.

    With the collapse of Intel’s IDF (Intel Developer Forum), NVIDIA’s event has filled the gap, and it is now arguably the best event for getting a sense of where advanced computing, and particularly AI, stands. Here is my take on what Huang had to say.

    NVIDIA Progress

    Huang opened by talking about the massive year-over-year increase in developers on NVIDIA’s platforms: the number of developers working on them grew by around 50 percent in a single year. To help those developers, NVIDIA is aggregating its libraries into CUDA-X. This should vastly accelerate the speed at which developers bring out new products on the platform and allow for cross-industry collaboration at a level we haven’t seen before. These libraries cover domains ranging from autonomous cars and robots to smart cities. This matters because many of these things will eventually need to talk to each other, becoming a kind of collective intelligence, so they must interoperate.
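
    As a concrete taste of what CUDA-X means for developers, here is a minimal sketch using RAPIDS cuDF, one of the CUDA-X libraries, which deliberately mirrors the pandas API so existing data-wrangling code can move to the GPU with few changes. The file and column names below are hypothetical.

        # Minimal sketch: RAPIDS cuDF (part of CUDA-X) exposes a pandas-like API.
        # The CSV file and column names here are hypothetical.
        import cudf

        # Load a (hypothetical) sensor log directly into GPU memory
        df = cudf.read_csv("sensor_log.csv")

        # Familiar pandas-style operations, executed on the GPU
        per_sensor_mean = (
            df[df["reading"] > 0]
            .groupby("sensor_id")["reading"]
            .mean()
        )
        print(per_sensor_mean.head())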

    This all also helps the resulting solutions solve bigger problems faster. Computers without ecosystems aren’t worth much, and NVIDIA is investing massively in its ecosystem. This also helps computer OEMs increase volumes while reducing costs, which is the impact of economies of scale when you create systems that are industry-independent. For instance, if the same system can power both robots and cars, you get higher volumes and lower unit costs; the same holds for manufacturing and smart cities. The result is both a cost reduction and a volume increase.

    They’ve named this PRADA, for PRogrammable Acceleration Domains Architecture.

    VR

    One of the more interesting demonstrations was of NVIDIA’s RTX platform: they showed pictures of BMWs side by side, and the more real-looking pictures were the rendered ones. The reason they looked more real was that the details were more evident than in the actual photograph. Even inside the car, the leather on the rendered car looked real, down to the grain and typical leather defects. And, because this is rendered, you can instantly change materials and colors while preserving the realism of the result.

    Over this last weekend, I watched the new Netflix animated series Love, Death & Robots (a collection of rendered short stories), and some of the rendered material looked like filmed locations, not animations, though the people still didn’t quite look right. NVIDIA is incredibly close to fixing this people problem and creating videos of virtual events that look real, in real time. (Right now, you can render photorealistic images of people, but the rendering time can run into weeks or months.)

    They showcased a game called Dragon Hound that was just amazing. It is kind of a steampunk game with dragons and knights, and it looked incredibly realistic running on NVIDIA’s new ray-tracing Turing architecture. They then moved to a demonstration created inside NVIDIA that took the legacy game Quake II and applied the RTX technology to make it look like a real place. This was one of the promises of NVIDIA’s RTX technology: taking a legacy game title and easily up-converting it to look like a current game. They are releasing this to the open source community so that more of these older games can come back transformed. (There goes my free time, but for some reason, I don’t have a problem with this.)

    Currently, NVIDIA has 1M architects, 3M designers, 3M artists, and 2M M&E (media and entertainment) pros working on RTX (interestingly, they showed the new Lost in Space robot here). Image Engine then showed shots from the new Lost in Space TV show (I’m looking forward to the second season). Year over year, for the same level of rendering, they have gone from 25 nodes, 38 hours, $70K in power, and $250K in total cost down to one node, 6 hours, $10K in power, and $30K in total cost (this was an example that appeared to come from the Pixar film Incredibles 2).

    Apparently, 200 studios are now collaborating to advance the state of the art by driving down cost, increasing productivity, and massively increasing movie output. It strikes me that the rate of advancement here is nearly beyond belief.

    To help with this, NVIDIA has created a tool called Omniverse, a collaboration platform for real-time graphics professionals. It allows a distributed group of graphics artists and animators to work together in real time from wherever they are in the world. Working remotely, they can all see what their peers are doing and interact with the product in real time. This is data center graphics.

    Huang spent some time talking about the GeForce Now platform, which is effectively a cloud-based gaming computer. We’ll be seeing a lot more of these targeted cloud services in the future, because not everyone knows, or wants to know, how to provision and configure a cloud instance. NVIDIA’s service (in beta) has 300K players and a waiting list of 1M more. One thing they once again discovered is that you must geographically distribute the data centers for a service like this; otherwise, latency and bottlenecking will kill it through poor performance. They are going to ramp this service as part of the 5G rollout, because 5G’s very low latency makes it ideal for gaming in the cloud. SoftBank in Japan and LG Uplus in Korea are early adopters of the new service.
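
    To see why distributing the data centers matters, here is a rough, back-of-the-envelope latency budget; every number below is my own illustrative assumption, not an NVIDIA figure.

        # Rough, illustrative cloud-gaming latency budget (my assumptions,
        # not NVIDIA's figures). At 60 fps a frame lasts ~16.7 ms, and the
        # full input-to-photon loop should stay under a commonly cited
        # responsiveness budget of roughly 100 ms.

        frame_ms = 1000 / 60       # ~16.7 ms to render one frame
        encode_decode_ms = 10      # assumed video encode + decode time
        budget_ms = 100            # assumed responsiveness budget

        # Compare a nearby data center with increasingly distant ones
        for network_rtt_ms in (10, 30, 80):
            total = network_rtt_ms + frame_ms + encode_decode_ms
            verdict = "OK" if total <= budget_ms else "too slow"
            print(f"RTT {network_rtt_ms:3d} ms -> total ~{total:.0f} ms ({verdict})")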

    This is based on NVIDIA’s RTX Server, which has 40 Turing GPUs and is optimized for remote workstation and remote gaming applications. An RTX pod can service 10,000 concurrent users.

    Data Science

    Huang moved on to how NVIDIA is advancing data science, making it possible to solve problems that previously couldn’t be solved. The pipeline goes from data to analytics, which organizes the data into buckets called features; an AI framework then processes those features into a predictive model, which is served by a deep learning-based inference engine. Assuming the data is complete and accurate and the process isn’t biased, you end up with an accurate prediction or answer. All of these components are being driven into hardware, ranging from workstations and servers from every major hardware supplier to every major cloud service company.
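
    As a minimal sketch of that data-to-features-to-model-to-inference flow, here is the pattern in scikit-learn; NVIDIA’s GPU-accelerated cuML library in CUDA-X deliberately mirrors this same API. The data below is synthetic.

        # Minimal sketch of the data -> features -> model -> inference flow,
        # using scikit-learn (cuML in CUDA-X mirrors this API on GPUs).
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Data -> features (synthetic stand-in for real feature engineering)
        X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Features -> predictive model
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)

        # Model -> inference (accuracy assumes complete, unbiased data)
        print("holdout accuracy:", model.score(X_test, y_test))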

    There was an interesting demonstration by Microsoft Bing using this technology where complex queries presented verbally resulted in verbal responses and visual confirmations, with key phrases highlighted.

    NVIDIA is working with the large workstation vendors to sell a targeted line of workstations for data scientists. They are also announcing, with server manufacturers, servers uniquely designed for data scientists. (I think they should have done this as a joint project with IBM, which I’ll go into in more detail at the end.)

    Medical

    One of the big areas for AI is the medical market, and NVIDIA has created the Clara AI toolkit. This is basically an AI that builds other AIs targeting medical problems; it can go from concept to deployable model “easy peasy” (apparently, this is a new technical term…). Currently, it is being used heavily in radiology for more timely and accurate diagnoses.
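
    The keynote didn’t show Clara’s internals, but the general pattern an “AI that builds AIs” automates is adapting a pretrained network to a new task. Purely as my own illustration (not Clara’s actual API), here is a transfer-learning sketch in PyTorch, with a hypothetical two-class radiology head and dummy data.

        # Generic transfer-learning sketch of the pattern an "AI that builds
        # AIs" automates: reuse a pretrained backbone, retrain only a new head
        # (e.g., a two-class radiology classifier). My illustration, not Clara.
        import torch
        import torch.nn as nn
        from torchvision import models

        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for param in model.parameters():
            param.requires_grad = False        # freeze the pretrained backbone

        model.fc = nn.Linear(model.fc.in_features, 2)  # new two-class head

        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        # One hypothetical training step on a dummy batch of 224x224 images
        images = torch.randn(8, 3, 224, 224)
        labels = torch.randint(0, 2, (8,))
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()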

    HPC

    Walmart is apparently here and will be talking later in the week about how they are using NVIDIA’s technology effectively in their in-store HPC (High Performance Computing) deployments. One of the large telephone carriers is adding 10K Wi-Fi access points a month and is using NVIDIA’s AI technology to place them optimally worldwide, an analysis that consumes a terabyte of data daily.
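
    The carrier’s actual method wasn’t described, but purely to illustrate how this kind of analysis can drive placement, here is a sketch that clusters synthetic user-demand locations with k-means and treats the cluster centers as candidate access-point sites.

        # Illustrative only: the carrier's method wasn't described. One simple
        # way to frame access-point placement is k-means clustering of observed
        # user-demand locations; cluster centers become candidate sites.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        demand_points = rng.uniform(0, 1000, size=(5000, 2))  # synthetic (x, y), meters

        k = 25  # number of access points to place
        placement = KMeans(n_clusters=k, n_init=10, random_state=0).fit(demand_points)

        print("candidate access-point sites:\n", placement.cluster_centers_[:5])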

    Mellanox

    Huang brought Mellanox’s CEO on stage (NVIDIA just announced its acquisition of Mellanox) to talk about the critical nature of interconnect. He spoke about how Mellanox’s technology is being used to increase the speed and efficiency of scaled-out AI as implemented in the data center.

    Other Stuff

    Huang went on to talk about robotics and autonomous cars, but I missed that because the keynote was supposed to run two hours and instead ran nearly three, and I had other commitments. I’m an ex-competitive speaker, and ending on time was drilled into my DNA, so this was a tad frustrating. I really wanted to see those segments.

    Wrapping Up: Watch NVIDIA’s Gains in AI

    NVIDIA is making big gains in data science and AI. And, for a time, its most dangerous competitor, Intel, was self-destructing due to a bad CEO and a clueless board. Intel still has the board, but the new CEO appears to have a clue and seems to be getting his shop in order, so that unique window is closing. NVIDIA’s solutions, at least the AI and data science solutions, depend on Intel technology, and Intel’s plan is to eventually make NVIDIA redundant with an Intel graphics team that, for once, appears to be adequately resourced. My sense is that NVIDIA should partner with IBM, which, unlike AMD, doesn’t compete with it broadly, to create a unique server and workstation hedge, using technology like NVLink on POWER as a weapon to push the performance envelope even further and buy the company more breathing room while Intel struggles to catch up.

    Having said that, NVIDIA’s tools targeting medical research could very well save your life or mine at some point, and its level of execution (except for ending keynotes on time) is exceptional. I’m looking forward to seeing what else the company shows me this week.

    Rob Enderle
    As President and Principal Analyst of the Enderle Group, Rob provides regional and global companies with guidance in how to create credible dialogue with the market, target customer needs, create new business opportunities, anticipate technology changes, select vendors and products, and practice zero dollar marketing. For over 20 years Rob has worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, ROLM, and Siemens.
