The highest-profile use case of computer vision to date is autonomous vehicles (AVs). The ability to stop, change lanes and otherwise guide a car or truck through traffic requires what is, in essence, keen vision, whether it comes from eyes or from cameras and remote sensors.
AVs, however, are just the tip of the iceberg for computer vision. ABI Research reports that computer vision is a far broader category than AVs, and one that is gaining traction. The firm says that by 2022, more than 650 million devices will support advanced vision applications.
The firm points to newly released devices from Apple (the iPhone X), Huawei (the Mate 10) and Google (the Pixel 2), along with the Google Clips camera, as products leading the way. Examples of applications based on advanced computer vision are Apple Face ID, Apple Animoji (for social networking) and Google Clips (for content).
ABI also says that much of the computing necessary for this ambitious new world will be done on the devices themselves. This is largely due to the latency incurred by sending data to the cloud, processing it there and returning the result to the device. That makes this a big opportunity for chip makers and related firms, both new and familiar names. One of the newcomers focusing on computer vision, Ambarella, was recently profiled at Equities.
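To see why the cloud round trip is a problem for live vision workloads, a back-of-the-envelope latency budget makes the point. The figures below are illustrative assumptions, not measurements from any of the devices mentioned above:

```python
# Back-of-the-envelope latency budget for real-time computer vision.
# All timing figures are illustrative assumptions, not measurements.

FPS = 30
frame_budget_ms = 1000 / FPS  # time available per frame at 30 fps (~33.3 ms)

cloud_rtt_ms = 100        # assumed network round trip to a cloud service
cloud_inference_ms = 5    # assumed inference time on server hardware
on_device_ms = 20         # assumed inference time on a phone's vision silicon

cloud_total = cloud_rtt_ms + cloud_inference_ms

print(f"Per-frame budget at {FPS} fps: {frame_budget_ms:.1f} ms")
print(f"Cloud path:  {cloud_total} ms "
      f"({'misses' if cloud_total > frame_budget_ms else 'meets'} the budget)")
print(f"On-device:   {on_device_ms} ms "
      f"({'misses' if on_device_ms > frame_budget_ms else 'meets'} the budget)")
```

Under these assumptions, the network round trip alone blows the per-frame budget several times over, while on-device inference fits comfortably, which is the economics driving vision workloads onto device silicon.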
The applications are starting to emerge. For instance, computer vision is an element of Intra-Logistics with Integrated Automatic Deployment (ILIAD), a European project aimed at using automated guided vehicles (AGVs) in warehouses for tasks such as packing, palletizing and transporting goods, according to Photonics.
The AGVs will use artificial intelligence (AI) and be self-learning. They will work in close proximity to their flesh-and-blood co-workers and thus must be acutely aware of where they are at all times. The project spans the UK, Sweden, Italy and Germany.
A report at Equipment World on the keynote addresses at the Association of Equipment Management Professionals’ Equipment Shift Conference suggested how advanced technologies, including computer vision, will work together. The address delivered by Prakash Iyer, vice president of software architecture and strategy at Trimble Navigation, laid out a future in which efficiency and productivity are greatly expanded. The keys, in addition to computer vision, are AI, virtual reality (VR) and the Internet of Things (IoT). At least six other technologies play a subsidiary role, Iyer told the group.
Computer vision is not new, of course. The run-up to AVs, which rely deeply on these techniques, has been under way for several years. Computer vision seems, however, to be gaining its own identity as a standalone discipline that can be deployed in many different ways. That higher profile likely will spur investment and development.
Carl Weinschenk covers telecom for IT Business Edge. He writes about wireless technology, disaster recovery/business continuity, cellular services, the Internet of Things, machine-to-machine communications and other emerging technologies and platforms. He also covers net neutrality and related regulatory issues. Weinschenk has written about the phone companies, cable operators and related companies for decades and is senior editor of Broadband Technology Report. He can be reached at firstname.lastname@example.org and on Twitter at @DailyMusicBrk.