
    NVIDIA GTC Keynote: It Is Going to Be an Amazing and Very Scary World Shortly

    This is a week of shows, with Dell EMC World, Microsoft Build, and NVIDIA GTC all happening in the same week. Thank heavens for pre-briefings and streaming; otherwise I’d need to clone myself, as each of these shows has must-see elements. One of the most interesting is NVIDIA’s GTC because it is the most focused, primarily on two massively disruptive technology advancements: autonomous robotics (mostly focused on cars for now) and artificial intelligence. The two are related because they are basically about making machines smart; the difference is only in the type of machines they target. AI is largely aimed at machines that don’t move, and autonomous robotics at machines that do.

    This show is known for the keynote that company founder and CEO Jen-Hsun Huang does every year, largely because it is concept- and not product-rich and truly sets the tone for the event. NVIDIA just released its latest financials and the firm is on fire, not like Samsung, but in a good way. This focus on machine learning has made the company the envy of its peers and we are just at the start of this wave.

    Let’s talk about some of my takeaways from the keynote.

    Ramp of AI Learning

    Apparently, interest in machine learning at the top schools is growing around 10x year over year. In many of the technical schools, it is the most popular course offered. Since students often fuel startups, that suggests we are going to be up to our necks in ever more intelligent machines, those that move and those that don’t, and fortunately for NVIDIA, it has the current preferred platform for this market.

    Currently, NVIDIA is working with 1,300 startups developing deep learning solutions. What NVIDIA is supplying is a blend of access and funding, and the program that was created to support this is only a year and a half old. Some of the companies it is mentioning I’ve met with at earlier events; they range from companies that have developed products that can selectively apply fertilizer and insecticide at a plant-by-plant level, massively increasing yields, to firms that can analyze pictures at massive scale and provide real-time actionable results.

    Tesla Volta V100

    The Tesla Volta V100 was announced, which apparently showcases the state of the art for processor size and density using photolithography. There are more than 5,000 CUDA cores in the part, along with a new kind of processing unit called the Tensor Core, built specifically for deep learning math. Apparently, the R&D budget for this was $5B, yes that is $5B. The specs on this are off the charts. Designed specifically for deep learning and artificial intelligence, this showcases what will likely be a new processor war between vendors building parts specifically for this purpose. This is a massively parallel part, and it ranges from 1.5 to 16x faster than NVIDIA’s prior part, depending on task.
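    To make the Tensor Core idea a bit more concrete, here is a minimal NumPy sketch of the kind of fused multiply-accumulate (D = A×B + C) those units perform on small matrix tiles, multiplying lower-precision inputs and accumulating at higher precision. The 4x4 tile size and data types reflect how the operation is commonly described, not a hardware specification.

```python
import numpy as np

# Illustrative, tile-level view of a Tensor Core style operation:
# D = A @ B + C, with half-precision inputs accumulated in single precision.
# This is a conceptual sketch, not the hardware instruction itself.

def fused_multiply_accumulate(a_fp16, b_fp16, c_fp32):
    """Multiply two FP16 tiles and accumulate into an FP32 tile."""
    return a_fp16.astype(np.float32) @ b_fp16.astype(np.float32) + c_fp32

a = np.random.rand(4, 4).astype(np.float16)
b = np.random.rand(4, 4).astype(np.float16)
c = np.zeros((4, 4), dtype=np.float32)

d = fused_multiply_accumulate(a, b, c)
print(d)
```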

    We saw a near movie-quality video game trailer, created in 10 days to showcase the part, reminding us that at its heart, NVIDIA is still a gaming company.

    Amazon

    Amazon was brought on stage to discuss its AI efforts, which are targeted at everything from fulfillment, to Echo, to AWS features and capabilities. Apparently, machine learning drives the product discovery feature on the Amazon retail site. I wasn’t aware of this, but apparently many of the services Amazon has created were built with AI and deep learning, and they’re exposed through AWS so developers can advance the practice and create their own products. Another couple of data points explain why the company is on stage: apparently, it has the largest GPU-driven cloud instance and it has also been the fastest growing. I didn’t know this either, but a lot of the early autonomous driving development and testing is now done on AWS.
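    As one concrete illustration of how a developer might tap an AWS-hosted AI service, here is a hedged sketch that uses boto3 to call Amazon Rekognition for image labeling. The bucket and file names are placeholders, and this is just one representative service, not the specific capabilities discussed on stage.

```python
import boto3

# Minimal sketch: calling an AWS-hosted computer-vision service (Amazon Rekognition)
# to label an image stored in S3. Bucket and key names are hypothetical placeholders.
rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "warehouse-photo.jpg"}},
    MaxLabels=10,
    MinConfidence=80.0,
)

# Print the labels the service found, with its confidence in each.
for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```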

    DGX-1V

    This new deep-learning-focused, rack-mounted server has the performance of 400 traditional servers and costs around $149K. They also announced a new form factor: a PC version of the DGX solution, basically a dedicated deep learning workstation. Called the DGX Station, it was developed to address an internal need and has been so popular that they externalized it. This is one expensive workstation; it prices out at $69K. Talk about workstation envy.

    HGX-1

    This product, the HGX-1, is targeted at public cloud customers like Amazon, Google, IBM, and Microsoft that provide this class of service. Microsoft, on stage, talked about the advantages of this (which was problematic because Microsoft’s Build conference is this same week). The rep was there to talk about the AI laboratory inside Microsoft, which is designed to infuse AI across the entire Azure platform. This was recently showcased in the Skype real-time translator and the recently launched AI batch service. Boy, this suggests at some future point we’ll be measuring services like AWS, Azure and SoftLayer by how smart they are and, sadly, by that time they’ll all rank higher than we do.

    Tesla Volta for Inferencing

    There has been a lot of focus on inferencing for deep learning, and the company is launching a Tesla Volta V100 card for server-based inferencing. This is for high-density inferencing applications: once a deep learning application is trained, it is passed to an inferencing platform to execute at scale. Training is where most of the cost of deep learning is; inferencing is where the benefits and money are.
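    The split described here, train once at great expense and then run the frozen model at scale, looks roughly like the following PyTorch sketch. The tiny model and random data are stand-ins; in production, the trained network would be handed off to a dedicated, GPU-accelerated inference stack of the kind NVIDIA is targeting.

```python
import torch
import torch.nn as nn

# Toy model and data stand in for a real deep learning workload.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training phase: the expensive part, run once (or periodically) on big iron.
x_train = torch.randn(256, 16)
y_train = torch.randint(0, 2, (256,))
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimizer.step()

# Inference phase: gradients off, weights frozen, executed at scale per request.
model.eval()
with torch.no_grad():
    new_samples = torch.randn(8, 16)
    predictions = model(new_samples).argmax(dim=1)
print(predictions)
```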

    NVIDIA GPU Cloud Platform

    NVIDIA has its own cloud service for developers wanting to work with its cutting-edge hardware. It is designed from the ground up to be a hybrid implementation, and developers can move between this service and on-premises NVIDIA-based solutions like the DGX. This goes beta in July.

    AI at the Edge

    Everything that moves will eventually be intelligent. This is an interesting position, and the foundation is the need for many industries to keep up with the Amazon effect. Moving to cars, there are three external parking spots for every car, suggesting massive waste. To address this problem, NVIDIA created NVIDIA Drive, an open software stack for those developing autonomous driving programs. It had the best demonstration of the “Guardian Angel” feature I’ve seen so far: the driver attempts to accelerate into an intersection on a green light, but the system disables the accelerator because it sees a car that ran the light, avoiding a nasty accident. Apparently Toyota, which is championing “Guardian Angel,” has announced it is going to use NVIDIA’s Drive PX in its cars. NVIDIA also announced it would open source the Xavier DLA (Xavier is the heart of the Drive PX).
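    Conceptually, the intervention in that demo boils down to a simple override rule layered on top of the perception stack. Here is a hedged sketch of that logic; the field names and structure are invented for illustration and are not NVIDIA’s Drive PX API.

```python
from dataclasses import dataclass

# Hypothetical, simplified world state; field names are illustrative only.
@dataclass
class Perception:
    own_light_green: bool
    driver_throttle: float          # 0.0 to 1.0, as requested by the driver
    cross_traffic_detected: bool    # another vehicle entering the intersection
    cross_traffic_has_right_of_way: bool

def guardian_angel_throttle(p: Perception) -> float:
    """Return the throttle actually applied to the drivetrain.

    Even with a green light and the driver accelerating, the system
    suppresses throttle if it sees cross-traffic running the signal.
    """
    if p.cross_traffic_detected and not p.cross_traffic_has_right_of_way:
        return 0.0  # override: hold the car back until the hazard clears
    return p.driver_throttle

# Example: driver floors it on a green, but a red-light runner is crossing.
state = Perception(own_light_green=True, driver_throttle=0.9,
                   cross_traffic_detected=True, cross_traffic_has_right_of_way=False)
print(guardian_angel_throttle(state))  # 0.0 -> acceleration is blocked
```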

    Robotics

    One of the things that has bothered me about the robotics segment is that it isn’t spoken about properly. This isn’t just the autonomous cars, drones or planes segment; these are all robots, and you’d think these companies would get that the technology they are developing has far broader applications than they initially seemed to realize. Well, apparently, NVIDIA sees this too, and it’s moving its technology to this broader category. It has created the Isaac Robot Simulator. With this simulator, you can create a virtual brain and then load that brain into a physical robot for final training, massively speeding up time to market. NVIDIA showcased group learning with multiple robots in final training: they take the smartest one and use its brain in all the rest, repeating that until they get to an optimal result. Eventually, every robot is as smart as the smartest.
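    The group-training loop described here, evaluate a fleet of simulated robots, copy the best-performing brain into all of them, and repeat, is essentially population-based selection. Below is a minimal sketch with a made-up fitness function standing in for the simulator.

```python
import random

# Stand-in for the simulator's evaluation of one robot "brain".
# Here a brain is just a list of parameters and fitness is a toy score.
def evaluate(brain):
    return -sum((w - 0.5) ** 2 for w in brain)  # best possible brain is all 0.5s

def mutate(brain, scale=0.05):
    """Each robot varies slightly from the copied brain during further training."""
    return [w + random.gauss(0, scale) for w in brain]

# A fleet of 16 robots, each with a randomly initialized 8-parameter brain.
population = [[random.random() for _ in range(8)] for _ in range(16)]

for generation in range(50):
    # Rank the fleet and find the smartest robot this round.
    best = max(population, key=evaluate)
    # Copy its brain into every robot, then let each one vary slightly
    # so the next round of simulated training can improve on it.
    population = [best] + [mutate(best) for _ in range(len(population) - 1)]

print(evaluate(max(population, key=evaluate)))  # fitness of the final best brain
```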

    Wrapping Up: Fast Movement and Progress

    We are clearly at the forefront of a massive change, with performance advancements that in many areas are starting to run at 10x or better per year. We’re seeing deep learning capabilities ranging from cloud services to super workstations accelerating advancement, coupled with a massive ramp-up in related classes and students coming out of schools with the necessary critical skills. We’re also seeing this effort move from focused areas like autonomous cars to generic robotics, and the creation of robotic group training, making every robot as smart as the smartest. It is going to be an amazing and very scary world shortly.

     

    Rob Enderle is President and Principal Analyst of the Enderle Group, a forward-looking emerging technology advisory firm.  With over 30 years’ experience in emerging technologies, he has provided regional and global companies with guidance in how to better target customer needs; create new business opportunities; anticipate technology changes; select vendors and products; and present their products in the best possible light. Rob covers the technology industry broadly. Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group, and held senior positions at IBM and ROLM. Follow Rob on Twitter @enderle, on Facebook and on Google+
