What is artificial intelligence (AI)? It’s a deceptively simple question. Much of the confusion centers on precisely what we mean by “intelligence.”
An adding machine can perform computational tasks far faster than humans, but nobody would say it is more intelligent. There are different approaches to AI, but all rely to a greater or lesser degree on the same sort of number crunching. Massively fast computation was vital in IBM Deep Blue’s victory over chess champion Garry Kasparov in 1997, IBM Watson’s win over Ken Jennings on Jeopardy! in 2011, and Google AlphaGo’s win over South Korean Go champion Lee Sedol in 2016.
The question of whether the term artificial intelligence is misleading is not merely semantic. It leads people to equate what these platforms do with the creative intelligence that sets humans apart. This week, three researchers (Feng Liu, Yong Shi and Ying Liu) posted a paper on arXiv, Cornell University’s preprint server, proposing a test of common AI platforms’ actual intelligence.
The abstract does a good job of mapping out the interesting AI challenge:
To address the issue of AI threat, this study proposes a “standard intelligence model” that unifies AI and human characteristics in terms of four aspects of knowledge, i.e., input, output, mastery, and creation. Using this model, we observe three challenges, namely, expanding of the von Neumann architecture; testing and ranking the intelligence quotient (IQ) of naturally and artificially intelligent systems, including humans, Google, Microsoft’s Bing, Baidu, and Siri; and finally, the dividing of artificially intelligent systems into seven grades from robots to Google Brain. Based on this, we conclude that Google’s AlphaGo belongs to the third grade.
ZDNet reported the results of the AI testing, which was conducted throughout 2016. Google’s AI scored an IQ of 47.28, well short of the 55.5 expected of a six-year-old. Baidu scored 32.92, Microsoft’s Bing 31.98, and Siri came in last at 23.9. An average 18-year-old, by comparison, has an IQ of 97.
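To put those reported scores side by side, here is a minimal illustrative sketch; the numbers are the ones reported above, and the ranking logic is purely for comparison, not part of the study’s methodology:

```python
# IQ scores for AI systems as reported from the 2016 testing.
scores = {
    "Google": 47.28,
    "Baidu": 32.92,
    "Bing": 31.98,
    "Siri": 23.9,
}

# Human benchmarks cited in the article.
human_benchmarks = {"6-year-old": 55.5, "18-year-old": 97.0}

# Rank the AI systems from highest to lowest reported IQ.
ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for name, iq in ranking:
    print(f"{name}: {iq}")

# Even the top AI scorer trails the youngest human benchmark.
top_name, top_iq = ranking[0]
print(top_iq < min(human_benchmarks.values()))  # Google's 47.28 < 55.5
```

The gap is the article’s point in miniature: the best-scoring system still falls short of a six-year-old’s expected IQ.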
Forbes posted a piece by Bernard Marr on popular misconceptions about AI. One mistake people make, he writes, is assuming that “[s]uper-intelligent computers will become better than humans at doing anything we can do.” In reality, there are two kinds of AI. One becomes incredibly good at a single, narrow task. The other aims at a more general aptitude that can be applied to a wide range of jobs. The second type of intelligence, one that mimics humans, is far off in the future.
TechCrunch offers a good example of where humans thrive and machines fall down. A company called Active One Partners uses a variety of approaches to find highly specialized products and services for its clients. The point is that human researchers are still needed to go beyond the capabilities of even the most powerful search engines and apply creativity to the task.
The answer to the question at the beginning of this post remains unclear. What is clear is that, at this point, artificial intelligence and human intelligence are largely different things. That may change in the future. When it does, it will be time for us to worry about AI as much as Elon Musk and some other very smart people do.
Carl Weinschenk covers telecom for IT Business Edge. He writes about wireless technology, disaster recovery/business continuity, cellular services, the Internet of Things, machine-to-machine communications and other emerging technologies and platforms. He also covers net neutrality and related regulatory issues. Weinschenk has written about the phone companies, cable operators and related companies for decades and is senior editor of Broadband Technology Report. He can be reached at email@example.com and on Twitter at @DailyMusicBrk.