NVIDIA’s Virtual GTC 2020 Keynote: A Look At Best Practices

    This week, NVIDIA CEO Jensen Huang gave a virtual GTC keynote, and it exemplified many of the best practices for giving a talk. I've been watching virtual events for a couple of months now, and most are hard to watch. A political rally put on by Joe Biden looked like it was managed by people who lacked the critical skills needed to pull it off.

    Virtual events are all we have right now, and there is no point in doing them badly. NVIDIA had the best event so far, and I thought it important to share what they did right:

    • They made the content relevant by showcasing how their technology was being used to mitigate the COVID-19 pandemic. 

    •  They broke the keynote down into short segments, retaining interest and minimizing the waste of audience time. 

    • They used impressively strong visual aids and often used their technology to visually and structurally enhance the event. 

    • And finally, they closed strong, both with an entertainment dessert and a good summary of what was covered. 

    Let’s talk about all of that this week. 

    Making it relevant

    The world we live in isn't the world that existed last quarter. The COVID-19 pandemic has changed our lives dramatically. Yet so many of the recent events I've attended acknowledge the need to hold the event virtually but, other than that, seem to pretend the world hasn't changed.

    If the audience is being bombarded by concerns surrounding the pandemic, it is best to address upfront what the firm is doing about it, getting it out of the way so that people aren't left wondering whether the presenting team missed a meeting.

    Huang not only opened on that topic, but he also thanked the first responders and healthcare providers on the front lines of this crisis. He then showcased how NVIDIA technology was being used to fight the pandemic, and the result was impressive.

    Specialists at Oxford Nanopore were able to sequence the COVID-19 genome in just seven hours using NVIDIA technology. Using Plotly and NVIDIA technology, researchers were able to demonstrate real-time infection-rate analysis. Oak Ridge National Laboratory (ORNL) and Scripps were able to screen one billion drug compounds in a single day to identify those most likely to mitigate COVID-19 symptoms; typically, that same work would take a year. Structura, NIH, and UT Austin were able to create the first 3D-structured image of the virus's spike protein using CryoSPARC. NIH was also able to improve rapid COVID-19 classification, using NVIDIA technology to increase testing speed.

    Kiwibot brought to market an autonomous medical-supply delivery robot, built on NVIDIA's Jetson robotics platform, to safely move samples and medicines between those treating patients or researching the virus. Whiteboard Coordinator, using NVIDIA technology, brought to market a rapid AI-based elevated-body-temperature screening system to keep infected and symptomatic people out of planes, trains, buildings, and shopping complexes. (This last one reminded me of the fictional scanner used in the first Total Recall movie.)

    When this segment was done, NVIDIA had established that its technology was critical to progress in fighting the pandemic, making the content extremely relevant.


    Keeping segments short

    Now, were I a betting man, I would have bet this was one area where NVIDIA would fail. Huang tends to run long, sometimes as much as 30 minutes over his allocated time. We all know that running longer than 15 minutes results in a dramatic drop in interest and attendance. 

    But that didn't happen here. NVIDIA broke the talk into segments that ran under 16 minutes each, many under 10 minutes. These segments were indexed so that viewers could pick the elements of interest to them. This arguably made the conference far more time-efficient than even an in-person event. 

    Most people who attend events like this aren't interested in most of what a company like NVIDIA supplies. If you build gaming PCs, for instance, you probably don't care much about self-driving cars. With this format, attendees could pick the segments of professional interest to them, then come back later and watch the segments that interest them personally.

    Use of visual aids

    Many of these events consist of talking heads and nothing else. While some include slides, speakers who aren't talking at a given moment often remain on screen at the side, becoming a distraction. A few combine one speaker at a time with static slides.

    The goal of these events is to convey information and convince people to buy something. Keynotes generally have one speaker, and that was the case here. Huang talked in front of his stove, which turned out to be decent staging: visually attractive yet not distracting. They mixed up the content between static slides, videos, and visual demonstrations.

    This mixing kept the content interesting. More importantly, rather than just talking about the technology, NVIDIA reinforced its messages with visual demonstrations that drove the point home while keeping the audience focused on the screen and not on other things.

    What is odd to me, given that most of us are working from home, is that most presenters seem to act as if everyone were watching from an office with no children crying or dogs barking. The presentation must hold the audience to the screen, and for that, you need visual variety and compelling images. NVIDIA did that, and coupled with the short segments, I had no problem staying focused on the content.

    Using products in the presentation

    Not everyone can do this. If you make cars, building a presentation out of cars, as opposed to just showing them driving, isn't a viable option. NVIDIA makes graphics products, so it showcased those products as part of the presentation. When talking about their Omniverse collaborative platform, they used their advanced RTX graphics technology to help create demos and other presentation content.

    You could see how teams working together could create photorealistic video content that was not only fun to watch but looked real. You couldn't help but wonder what you might do with that technology, how it might spark your imagination, and even how you might create a uniquely magical virtual world to explore. It was awesome, and rather than simply being glad I'd survived the talk, I immediately wanted to go back and watch some of the videos, particularly the ray-traced Omniverse demonstration. Here's the segment for reference.

    One technology I expect will have a greater impact on future events is NVIDIA's conversational AI platform, Jarvis. Using real-time rendering and animation, NVIDIA demonstrated how a future personal AI assistant might use this technology, staging a somewhat stilted conversation between Huang and the AI. I've often thought one of the mistakes Microsoft made with its Cortana assistant was the lack of an animated avatar (something like this prototype showcased back in 2017). With this technology and a screen, Jarvis could fix that. This, I expect, will eventually be the future of personal assistants.

    Wrapping up: Concluding strong

    There is an age-old format we know drives better retention in audiences: summarize what you are going to say, present the detail, then close by re-summarizing. Or: tell them what you are going to tell them, tell them, and then tell them what you told them. Huang did this, but it served a more critical purpose this time. Because viewers could pick and choose segments, both the introduction and the close pointed the audience to elements they might be interested in. 

    One thing I'd expect next time is for NVIDIA to use its recommender engine, one of its showcased offerings, to suggest the segments each audience member should attend, and its graphically enhanced conversational AI (Jarvis) to answer any questions an attendee might have in real time.

    One more thing that is often forgotten is dessert. At a physical event, there is generally some kind of fun finale that wraps up the event. NVIDIA, at the end of its closing segment, showed a video of an orchestra playing the music from its I Am AI video. It's arguably one of the strongest showcases of a company's aspirational capability I've seen.

    Very nicely done. 

    Rob Enderle
    As President and Principal Analyst of the Enderle Group, Rob provides regional and global companies with guidance in how to create credible dialogue with the market, target customer needs, create new business opportunities, anticipate technology changes, select vendors and products, and practice zero dollar marketing. For over 20 years Rob has worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, ROLM, and Siemens.
