It is CES week again, and I’m mostly looking forward to surviving. This is an amazing and horrible show; amazing because it collects a massive amount of new technology in one place, and horrible because Las Vegas uses very little of it to enhance the show. A number of technologies would make this show far more useful and far less painful. One of those technologies is artificial intelligence (AI), which could better manage our schedules, group us together for shared rides, and make sure the transportation we need to get around is available. Las Vegas should be a smart city, and it isn’t. It should be one of the leading places where autonomous cars are being used; instead, it is the place with a monorail that is equally inconvenient to everyone.
What brings this to mind is that NVIDIA is arguably the leader in technology used to make things smarter and particularly autonomous. When you finally get into a self-driving car, chances are that NVIDIA’s technology will be making it work. If there is one company that could provide the platform to turn CES from a nightmare into the largest intelligent event in the world, it would be NVIDIA.
In this, the first big keynote of CES 2019, the focus for NVIDIA is on its graphics capability with emphasis on the new RTX technology, which forms the basis of what will likely define our video entertainment future. This future will range from traditional games on monitors to the new amazing experiences being crafted for mixed reality. In a very real way, the company is creating magic, and its CEO, Jensen Huang, is the lead wizard.
Every year at NVIDIA events like this, we get closer to rendered reality: pictures that come from someone’s imagination yet, over time, look more and more like high-resolution images of real places. RTX uses real-time ray tracing to dramatically improve rendered images by simulating how light bounces around a virtual scene, just as it does in the real world.
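The core idea behind ray tracing can be sketched in a few lines: follow a ray from the camera, find what it hits, then recursively trace the reflected ray to see what light bounces into view. The sketch below is purely illustrative; the scene, the single mirror-like sphere, and the 80% reflectance value are invented for the example and have nothing to do with NVIDIA's actual RTX implementation.

```python
# Minimal illustrative ray tracer: one reflective sphere under a sky.
# All scene values are made up for illustration.

def hit_sphere(origin, direction, center, radius):
    """Return distance along the ray to the sphere, or None on a miss."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - disc ** 0.5) / (2 * a)
    return t if t > 0 else None

def trace(origin, direction, depth=2):
    """Shade a ray: sky color on a miss (or when out of bounces),
    otherwise recurse along the mirror reflection."""
    t = hit_sphere(origin, direction, center=(0, 0, -3), radius=1.0)
    if t is None or depth == 0:
        return (0.5, 0.7, 1.0)  # flat sky color
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple(h - c for h, c in zip(hit, (0, 0, -3)))
    n_len = sum(x * x for x in normal) ** 0.5
    normal = tuple(x / n_len for x in normal)
    # Mirror reflection: r = d - 2(d.n)n
    d_dot_n = sum(d * n for d, n in zip(direction, normal))
    reflected = tuple(d - 2 * d_dot_n * n for d, n in zip(direction, normal))
    bounced = trace(hit, reflected, depth - 1)
    return tuple(0.8 * c for c in bounced)  # sphere absorbs 20% of the light

color = trace(origin=(0, 0, 0), direction=(0, 0, -1))
```

A real renderer fires millions of such rays per frame and bounces them through shadows, reflections, and refractions, which is why doing it in real time required dedicated hardware.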
The initial demonstrations of this technology showcased reflected images, shadows, destroyed buildings, pets, and people to show how close to reality we are getting. One of the more interesting parts was a demonstration of automatic animation: creating a character and being able to manipulate it in a virtual environment, much as you would a game avatar, except this could instead be a realistic character in a movie. The potential to massively reduce the cost of making movies and games is obvious in these demonstrations, allowing studios to produce far more content at far lower cost.
Getting here was a 10-year process, starting with foundational elements like game engines and ending with a new architecture called “Turing.” At its extreme, this is a massive jump in performance, from 12 teraflops with the prior Pascal architecture to 130 teraflops with Turing. The movie demonstration, using an Iron Man-like character updated from last year, was HD movie quality, and while it looked like there was an actor, there wasn’t; the entire image was rendered. While the earlier version of this video seemed to make fun of Intel, this one mostly pokes fun at the guy in the suit of armor, and it was rather funny. The next interesting demo was an ad for Porsche showcasing the Speedster, also entirely rendered. Another was a fictional Russian robotics lab, and that place looked amazing, with realistic robots presented like historical artifacts from an alternative reality.
Huang then spoke about the DGX-2, the unique supercomputer workstation that uses deep neural network technology to iterate at machine speeds and teach itself what a rendered image should look like. One of the most amazing parts of this is the application of deep learning-based AI to lower the performance requirements for rendering, then upscale the result into an amazing 4K experience. They showcased this in a demo of the coming video game Anthem, in which you have a flying suit of armor and battle across an amazing alien world (the game comes out on the 15th of next month and is suddenly on my list of must-play games). One interesting observation: in older games, water generally doesn’t look real, but with this new technology the most real thing in the frame is the water; it looks like real water. They showcased this with a beautiful Chinese game called Justice.
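The render-low, upscale-high pipeline described above can be sketched very simply. In the toy version below, a nearest-neighbor resize stands in for the trained neural network (NVIDIA's approach uses a deep learning model to reconstruct detail; the stand-in here only duplicates pixels), and the "renderer" is just a grayscale gradient invented for the example.

```python
# Toy sketch of a render-then-upscale pipeline: render fewer pixels,
# then enlarge to display resolution. The learned upscaler is faked
# with nearest-neighbor resizing for illustration only.

def render_low_res(width, height):
    """Stand-in renderer: a small grayscale gradient image in [0, 1]."""
    return [[(x + y) / (width + height - 2) for x in range(width)]
            for y in range(height)]

def ai_upscale(image, factor):
    """Placeholder for the learned upscaler: nearest-neighbor resize.
    A real model would infer plausible high-frequency detail instead
    of simply duplicating pixels."""
    return [[image[y // factor][x // factor]
             for x in range(len(image[0]) * factor)]
            for y in range(len(image) * factor)]

low = render_low_res(4, 4)        # cheap to render
high = ai_upscale(low, factor=2)  # shown at display resolution
```

The economics are the appeal: rendering a quarter of the pixels and reconstructing the rest is what lets ray-traced scenes reach playable frame rates.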
The next demo was of Battlefield V using the new GeForce RTX 2060. The level of realism was amazing. (Though they clearly still need to improve on destructible environments, because they kept shooting a sign with a tank and that sign just shrugged off the tank shells. Maybe it was Kryptonian.) The cost for this new card is $349, which sounds like a deal given this level of performance, and you get either Anthem or Battlefield V bundled (with the higher-end 2080 card, you get both games). It will be available in retail next week.
One of the most amazing monitors I’ve seen, the HP Omen Emperium 65” monitor (along with some smaller monitors from other vendors), will use new G-Sync technology from NVIDIA. (I think I may have started drooling at this point, as I have serious lust for this monitor.) One interesting announcement: NVIDIA has created drivers for monitors that use adaptive sync, effectively turning them into G-Sync-like monitors at no additional charge.
Other announcements included Autodesk Arnold interactive rendering, 8K video editing for RED cameras, single-PC broadcasting, and Turing VRWorks with HTC. In addition, they announced 40 new notebooks using RTX technology that will be available at the end of the month, many of them extremely thin and light for a gaming laptop. NVIDIA calls this Max-Q design: taking a gaming/engineering laptop that is traditionally 51mm thick down to 18mm. (The showcased laptop was the MSI GS65 Stealth, which outperforms a GTX 1080-equipped desktop.)
Wrapping Up: Dreamweaving
Some years back, I wrote a short story as part of a Science Fiction Prototyping effort put on by Intel. This was a process to predict the future using storytelling, and the term I introduced was Dreamweaving: a process in which a person imagines and digitally creates imaginary worlds that consumers can experience. I still believe that is in our future, and what NVIDIA showcased today was a huge step in that direction. Granted, there is no neural interface yet, but the ability to turn imagination into photorealistic images advanced considerably, and with each step we get closer to a tool that you or I could use, much as we use a PC to write a book (or post), to share our imaginations more realistically with an audience. On YouTube today, you can increasingly find very good imaginative videos (here is a good example) created with existing tools, often as university projects. As the cost and required skill levels for these tools continue to drop, we accelerate toward that Dreamweaving potential where our dreams become real for others. There is a series on Netflix called Maniac that explores this potential applied to mental health.
While I still think it would be wonderful if Las Vegas applied some of the technology CES showcases more aggressively to crowd management, there is little doubt that the technology NVIDIA is showcasing will make it onto the stages here in the near term. We are on the cusp of real magic, and NVIDIA’s CEO Jensen Huang is one of the leading wizards.
Rob Enderle is President and Principal Analyst of the Enderle Group, a forward-looking emerging technology advisory firm. With over 30 years’ experience in emerging technologies, he has provided regional and global companies with guidance in how to better target customer needs; create new business opportunities; anticipate technology changes; select vendors and products; and present their products in the best possible light. Rob covers the technology industry broadly. Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group, and held senior positions at IBM and ROLM. Follow Rob on Twitter @enderle, on Facebook and on Google+