One of the big difficulties in creating artificial intelligence (AI) is that our view of “intelligence” is largely based on our belief that humans are the gold standard. After watching politics, sports and reality TV for the last few years, I have my doubts that this is the case. However, while there has been a ton of lab work surrounding brain-like behavior in electronics, with efforts like neural networks, building an electronic human brain has proven problematic.
Well, Intel just announced it has made a significant step in this direction with a prototype processor code-named Loihi, after the youngest Hawaiian volcano, which mimics the human brain’s basic mechanics. It promises to make machine learning, the process wherein machines learn from their own experience, far more viable, scalable and affordable, at least on paper. Intel’s researchers are reporting learning up to 1 million times faster than more traditional neural networks, while improving energy efficiency a thousandfold.
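To give a sense of what “mimicking the brain’s basic mechanics” means in practice: neuromorphic chips like Loihi implement spiking neurons in silicon, which fire discrete pulses rather than computing continuous values. Here is a minimal sketch of a leaky integrate-and-fire neuron, the textbook model of that behavior; the constants are illustrative, not Loihi’s actual parameters.

```python
# Sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit of the
# spiking networks that neuromorphic chips implement in hardware.
# Threshold and leak values here are illustrative only.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Integrate input over time; emit a spike (1) when the membrane
    potential crosses the threshold, then reset to zero."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)   # fire a spike
            potential = 0.0    # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A steady drip of input produces periodic spikes:
print(simulate_lif([0.4] * 10))  # [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Because such neurons only do work when spikes arrive, rather than clocking every unit on every cycle, this style of computation is where the chip’s claimed energy savings come from.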
Typically, announcements like this precede release of the actual shippable parts by five to 10 years because the part must complete testing, a software ecosystem must be developed that will use the part, and manufacturing lines must be developed and implemented.
However, if this part makes it to market, it could transform everything from smartphones to web services.
Let’s talk about the near-term future of self-taught thinking computers.
The convergence of what is basically a low-power machine learning inferencing processor and mammoth power efficiency clearly would allow far more powerful intelligence at the edge. The obvious benefit would be for something like Siri or Alexa to gain far deeper understanding of what the user wanted when asked a question and become far more proactive, with responses increasingly anticipating a need and coming up with a response unprompted.
This is the concept of morphing from something merely called a digital assistant into an actual digital assistant, with the capabilities we once had in good secretaries becoming a major feature of our smartphones. This would include creating correspondence from a verbal outline, actively screening calls, automatically setting appointments, escalating reminders for critical meetings, and capturing calendar events automatically from conversations or correspondence (email, text, social networks).
For example: recognizing that the biometrics from the user’s wearable were indicating a low sugar condition, suggesting a snack the user would enjoy from a store or restaurant near the user’s current location, and then going so far as to recommend the best choice on the menu and, on command, ordering it electronically so it is ready when the user enters the establishment.
Using the same location and biometric information, the smartphone could not only call for help automatically if the user was in distress, but call for the right kind of help and make sure the first responders had the information they needed when they arrived (blood type, allergies and the like), and could even make an initial suggestion as to what the problem was and how best to deal with it. It could also be instrumental in preventing substance abuse problems like driving under the influence or overdose, through alerting or effective intervention (like disabling the car interface and calling an Uber or 911).
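The scenario above amounts to running triage rules over a stream of sensor readings. A minimal sketch of that idea follows; every threshold, field name and action string here is hypothetical, invented for illustration, and a real assistant would rely on learned models rather than hand-written rules.

```python
# Hypothetical sketch of rule-based triage over wearable readings.
# All field names, thresholds, and action labels are invented for
# illustration; none reflect an actual product or medical guideline.

def triage(reading):
    """Map one biometric reading (a dict) to a suggested action."""
    if reading.get("heart_rate", 0) == 0:
        return "call_911_with_medical_profile"  # no pulse: emergency
    if reading.get("blood_glucose_mg_dl", 100) < 70:
        return "suggest_nearby_snack"           # low sugar condition
    if reading.get("blood_alcohol_pct", 0.0) >= 0.08:
        return "disable_car_and_call_ride"      # impaired: intervene
    return "no_action"

print(triage({"heart_rate": 72, "blood_glucose_mg_dl": 62}))
# suggest_nearby_snack
```

The ordering of the rules encodes severity: an absent pulse outranks low blood sugar, which outranks impairment, so the most urgent condition always wins.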
And certainly, the phone could be far more effective at monitoring conversations and email to alert the user, parents, children, caregiver, or spouse to possible scams, bullies, predators, depression, and other dangers that currently aren’t identified until too late.
Wrapping Up: Smarter Devices Are Coming
We are on the cusp of smarter devices. I chose smartphones as my example, but this could apply to intelligence in cars (which are already on a path to autonomy), drones, robots, and systems like IVRs (Interactive Voice Response), which handle sales and support. In all cases, the result would be a more human-like series of behaviors, with far more capability to identify and mitigate risks and problems that we might not catch or notice otherwise.
But, in the end, this is a huge potential step toward creating digital assistants, whether they are smartphones, Echo-like devices, or even full-on ambulatory robots, that behave and think more like we do. Assuming normal development cycles, this means that by this time next decade, we’ll likely be up to our armpits in machines that are arguably smarter than we are.
Rob Enderle is President and Principal Analyst of the Enderle Group, a forward-looking emerging technology advisory firm. With over 30 years’ experience in emerging technologies, he has provided regional and global companies with guidance in how to better target customer needs; create new business opportunities; anticipate technology changes; select vendors and products; and present their products in the best possible light. Rob covers the technology industry broadly. Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group, and held senior positions at IBM and ROLM. Follow Rob on Twitter @enderle, on Facebook and on Google+