How AI Will Be Pushed to the Very Edge

    The race to push deployment of artificial intelligence (AI) models beyond the network edge is officially on. Qualcomm this week announced it has signed a definitive agreement to acquire NUVIA for approximately $1.6 billion as part of an effort to advance adoption of AI on everything from smartphones to autonomous vehicles.

    NUVIA has been developing what it describes as a new class of processors optimized for machine and deep learning algorithms. Qualcomm plans to add that processor to a portfolio of graphics processing units (GPUs), AI engines, digital signal processors (DSPs), and dedicated multimedia accelerators that it makes available alongside its Snapdragon CPU for mobile phones and the various classes of embedded systems used to drive Internet of Things (IoT) applications.

    The current Qualcomm Snapdragon processors are based on an Arm design, so the expectation is that the NUVIA processors will follow a similar architecture, says Stephen Di Franco, principal analyst for the IoT Advisory Group, an IT consulting firm.

    Also read: AI to Become Mainstream in 2021

    Training AI Models

    One of the major challenges ahead is driving AI out to various types of edge computing platforms where data is being collected and processed in real time. The inference engine of an AI model is typically deployed on an edge computing platform. However, it’s clear there will be a need to train AI models in near real time on edge computing devices as data is increasingly processed and analyzed at the point where it is being created and consumed.

    “You’re going to want the edge device to be able to do some learning,” says Di Franco.

    The challenge is the amount of power an AI model requires to run on those devices, says Di Franco. Most devices deployed at the very edge don’t have enough battery power to effectively support an AI model that is continuously trained, he adds.

    Most training of AI models today occurs in the cloud because of the amount of data that machine and deep learning algorithms require to be initially trained. However, in time many of those AI models will need to be updated on the edge computing devices where they are deployed.
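    That division of labor can be made concrete with a minimal sketch (not specific to Qualcomm or any particular framework, and using a deliberately tiny linear model): the full training run happens in the "cloud" on bulk data, only the resulting weights and an inference function are shipped to the "edge" device, and a handful of cheap local gradient steps stand in for the on-device updates the article anticipates.

    ```python
    # Hypothetical sketch of the cloud-train / edge-infer / edge-update split.
    # The model is a toy 1-D linear fit (y = w*x + b) so everything runs in
    # plain Python; real deployments would use a proper ML framework.

    def train_cloud(data, epochs=2000, lr=0.01):
        """Full training pass on bulk data -- done in the cloud."""
        w, b = 0.0, 0.0
        n = len(data)
        for _ in range(epochs):
            grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
            grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
            w -= lr * grad_w
            b -= lr * grad_b
        return {"w": w, "b": b}  # only these weights ship to the edge

    def infer_edge(weights, x):
        """Inference only -- the typical edge deployment today."""
        return weights["w"] * x + weights["b"]

    def update_edge(weights, local_data, steps=20, lr=0.01):
        """A few cheap gradient steps on locally collected samples --
        the kind of lightweight on-device learning the article anticipates."""
        w, b = weights["w"], weights["b"]
        n = len(local_data)
        for _ in range(steps):
            grad_w = sum(2 * (w * x + b - y) * x for x, y in local_data) / n
            grad_b = sum(2 * (w * x + b - y) for x, y in local_data) / n
            w -= lr * grad_w
            b -= lr * grad_b
        return {"w": w, "b": b}

    # Cloud: train on bulk data generated from y = 2x + 1
    cloud_data = [(x, 2 * x + 1) for x in range(10)]
    weights = train_cloud(cloud_data)

    # Edge: run inference with the shipped weights, then refine locally
    prediction = infer_edge(weights, 4)
    weights = update_edge(weights, [(4, 9.0), (5, 11.0)])
    ```

    The point of the split is visible in the cost profile: `train_cloud` loops over the whole dataset for thousands of epochs, while `infer_edge` is a single multiply-add and `update_edge` touches only a few local samples — which is why battery-constrained edge devices today get the first function's output and, at most, the third function's workload.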

    AI Timeline

    Given the timelines for building next-generation processors capable of learning at the edge, it may be a while before anything other than an inference engine is deployed on these devices. In the meantime, IT organizations should expect providers of rival processors to be moving down the same path as Qualcomm. Once those processors become available, data science teams and application developers will begin their work. That may mean AI applications running on edge computing platforms that are capable of learning, for example, an individual’s personal preferences in real time may not appear until the middle of the decade or beyond.

    In the meantime, AI will continue to be infused across a wide range of enterprise applications. It just may not be making it all the way out to the very edge for quite some time to come.

    Also read: Rush to AI Exposes Need for More Robust DataOps Processes

    Mike Vizard
    Michael Vizard is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.
