At its Google Pixel 2 phone announcement, which bracketed Apple's strategic products so thoroughly that it might well have been titled "Google Announces Its Strategy to Put Apple out of Business," Google also announced that it was shifting from a mobile-first to an AI-first world. Not only will it build every future product with an AI-first requirement, it will rebuild every existing product to that requirement as well.
Given Google’s size and scope and its massive investment in AI research, I think this could eventually be a world-changing event. I’m not sure that it is entirely a good thing.
Let’s talk about what an AI-first strategy should, and likely does, mean and why I expect most tech companies to eventually follow Google’s lead.
The Traditional Problem with Computers and Technology
Going back to the beginning of the Industrial Revolution, one of the big problems with ever more advanced tools has been that they weren't designed around how humans did things; they forced users to learn new skills and new ways of working so that output could increase significantly.
With computers, this meant people had to learn new languages and understand how to form logical arguments the way computers could parse them. Statements that seemed obvious to us, like "turn out the lights," either wouldn't be understood at all or, rather than producing the obvious result of turning out the lights in the room, might turn off the lights in the entire building.
While we thought of computers as electronic brains, they weren't. They were simply ever more complex non-thinking machines that could only perform, and repeat, the specific tasks they were programmed to carry out.
Granted, as we moved from switches to cards to tape to screens, computers became easier to learn, easier to use, and more capable. But users still had to conform to how these systems communicated and functioned; the systems didn't conform to the people using them without a great deal of extra work, often demanded of those same users.
The concept of AI first puts the AI where the human operator sits today. If we are talking cars, the AI drives the car. If we are talking digital assistants, the AI works the virtual keyboard and enters the query. If we are talking AI photography, the AI does the labeling and indexing.
The human still provides the direction, but simply by speaking to the device or typing commands or questions in a normal conversational mode. The AI translates what the human says or types into something the machine understands, and then the machine does the work.
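The translation step described above can be sketched in a few lines. This is a purely hypothetical illustration, not any real assistant's API: the phrases, intents, and scope rule are invented to show how a conversational request might be mapped to a structured machine command, including resolving the room-versus-building ambiguity from the "turn out the lights" example.

```python
# Minimal sketch of the "AI as translator" idea: a conversational
# request is mapped to a structured command a machine can execute.
# The phrases, intents, and scope rules are hypothetical examples.

def parse_command(utterance: str, current_room: str) -> dict:
    """Translate a natural-language request into a machine command."""
    text = utterance.lower()
    command = {}
    if "light" in text:
        command["device"] = "lights"
        command["action"] = "off" if ("out" in text or "off" in text) else "on"
        # Resolve the ambiguity: default to the speaker's room unless
        # the whole building is named explicitly in the request.
        command["scope"] = "building" if "building" in text else current_room
    else:
        command["device"] = "unknown"
    return command

print(parse_command("Turn out the lights", current_room="office"))
# {'device': 'lights', 'action': 'off', 'scope': 'office'}
```

A real AI-first interface replaces the keyword matching here with learned language models, but the shape of the job is the same: human intent in, machine-executable instruction out.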
In effect, the AI becomes a universal translator: it not only translates what the human user wants into language the computer understands, but increasingly translates in real time, with nuance, between human languages as well.
So, in its ultimate form, the AI becomes the ultimate servant, or majordomo, translating what the user wants or says into a form that machines or other humans understand.
Expansion of the Google Model and Danger
Right now, for most of us, Google is already a kind of universal interface to the web, translating a few typed words into the address of the page we want to reach. The company effectively controls much of what we see on the web, and the unexpected result is that it not only controls and monetizes most related advertising, but we've also seen a massive increase in fake news and become more susceptible to manipulation.
If the company becomes our interface into everything, our perceptions of the world around us will be even more vulnerable to outside control. That could be a very good or a very bad thing, depending on our views and on who exercises that control.
Wrapping Up: Power Is Double-Edged
The AI-first model is clearly the path the world has been on for some time, and Google has just expressed a desire to lead us down it. The result will be far more efficiency, far less training required to use AI-fronted systems, and likely far more satisfaction with what we do and what we see. However, we could also be increasingly controlled, and that puts an awful lot of power in one place. And we all know what happens when humans get ultimate power.
Suddenly, I’m not all that sure this is entirely a good thing.
Rob Enderle is President and Principal Analyst of the Enderle Group, a forward-looking emerging technology advisory firm. With over 30 years’ experience in emerging technologies, he has provided regional and global companies with guidance in how to better target customer needs; create new business opportunities; anticipate technology changes; select vendors and products; and present their products in the best possible light. Rob covers the technology industry broadly. Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group, and held senior positions at IBM and ROLM. Follow Rob on Twitter @enderle, on Facebook and on Google+