While we spend a lot of time on the analytical parts of artificial intelligence (AI) systems, we don’t spend nearly enough time ensuring that the data they work from is accurate and complete. Now, with the massive influx of fake news, the accuracy problem should be far more concerning than it has been. This week’s United incident showcases what can happen when that data isn’t complete. I raised this as a concern with AI some time ago, and I’m worried that things are getting worse, not better.
At the heart of the United PR nightmare was a process that, in certain situations, favored placing employees in seats over customers, and a system that, when customers didn’t take what United offered in exchange, determined who got bumped from a flight based on trip statistics alone. One of the big and often unmentioned problems with deep-learning AI systems is the limited data they look at, which can lead them, just like people, to make highly undesirable decisions. But, unlike people, it can happen at light speed.
Let me walk you through my thinking.
United’s Data Problem
As noted, the program used to determine which passengers would be asked to exit the flight looked only at the flight data in United’s system and selected those it thought would be least inconvenienced. This approach was flawed because it didn’t factor in each customer’s own assessment of the cost of missing the flight: the doctor needed to be at work the next day, and another passenger may have been heading to a critical meeting worth millions to the firm paying for the trip. Without that information, the likelihood that any given passenger would refuse to leave couldn’t be assessed.
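To make that gap concrete, here is a minimal sketch of what a selection score built only from airline-side data might look like. The field names and weights are my own assumptions for illustration, not United’s actual system; the point is that the passenger’s own cost of missing the flight never enters the calculation.

```python
# Hypothetical sketch: a bump-selection score built only from airline-side data.
# Field names and weights are illustrative assumptions, not United's real system.
from dataclasses import dataclass
from typing import List

@dataclass
class Passenger:
    fare_paid: float        # what the airline collected for the seat
    loyalty_tier: int       # 0 = none ... 3 = top tier
    has_connection: bool    # rebooking a missed connection costs the airline more

def inconvenience_score(p: Passenger) -> float:
    """Lower score = 'cheapest' passenger to bump, from the airline's point of view."""
    score = p.fare_paid
    score += p.loyalty_tier * 200        # protect frequent flyers
    score += 300 if p.has_connection else 0
    # Missing entirely: the passenger's own cost of missing the flight
    # (a hospital shift, a deal-closing meeting) never enters the score.
    return score

def pick_for_bumping(passengers: List[Passenger], seats_needed: int) -> List[Passenger]:
    return sorted(passengers, key=inconvenience_score)[:seats_needed]
```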
In addition, the cost of the decision, reflected as risk, wasn’t captured. In this case, the cost to United will likely end up well into eight figures. Today there is always a risk, when you displace a flying customer, that they have or can quickly develop a social media following and will use it, and that risk should be factored into the cost side of the decision. In this case, the CEO’s job was put at risk as well.
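A rough expected-value comparison shows why that risk belongs on the cost side of the decision. Every number below is an assumption made up for illustration, but even a small chance of a viral incident dwarfs the price of a more generous voucher.

```python
# Back-of-the-envelope sketch of the missing risk term; all figures are assumptions.
voucher_cost = 1_000                     # pay one more volunteer generously
p_viral = 0.001                          # assumed odds that a forced removal goes viral
incident_cost = 50_000_000               # assumed brand, legal and market hit if it does
expected_forced_removal_cost = p_viral * incident_cost   # 50,000
print(expected_forced_removal_cost > voucher_cost)       # True: the voucher is cheaper
```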
Now, a red flag should have gone up as soon as the required seats couldn’t all be secured with the reimbursement voucher amounts offered. That outcome should have triggered an escalation to a higher authority who could determine what was going on before a crisis occurred. This, too, needs to be part of the implementation: if an expected parameter range is exceeded, the AI fails over to a human, or a secondary system, that can analyze why the parameter was exceeded and whether that should force a policy change.
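Sketched as code, that fail-over might look something like the following; the voucher ceiling, increments, and function names are assumptions for illustration, not United’s actual procedure.

```python
# A minimal sketch of the fail-over idea: if raising the voucher offer can't clear
# the required seats within the allowed range, stop and hand the problem to a human
# rather than forcing an outcome. The ceiling, step, and function names are assumptions.
MAX_VOUCHER = 1_350

def resolve_overbooking(seats_needed, solicit_volunteers, escalate_to_human):
    offer = 400
    while seats_needed > 0 and offer <= MAX_VOUCHER:
        volunteers = solicit_volunteers(offer)   # ask the gate system for takers at this price
        seats_needed -= len(volunteers)
        offer += 200
    if seats_needed > 0:
        # Expected parameter range exceeded: never auto-remove paying customers.
        escalate_to_human(reason="voucher ceiling reached", seats_still_needed=seats_needed)
```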
Wrapping Up: The Danger of AI
As we create ever more capable AI systems that can emulate human thinking, we have to be careful that it isn’t just the speed of the decisions that is being enhanced. Cascading mistakes at light speed could result in catastrophes. We have to work not only to make these systems faster and more human-like, but also to make them far better than we are at collecting all of the necessary information before a decision is made, and at assuring its accuracy. If we don’t, we are likely to see a plague of cascading mistakes happening too fast to address or recover from, and that is far from the positive outcome we have so far generally anticipated.
Rob Enderle is President and Principal Analyst of the Enderle Group, a forward-looking emerging technology advisory firm. With over 30 years’ experience in emerging technologies, he has provided regional and global companies with guidance in how to better target customer needs; create new business opportunities; anticipate technology changes; select vendors and products; and present their products in the best possible light. Rob covers the technology industry broadly. Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group, and held senior positions at IBM and ROLM. Follow Rob on Twitter @enderle, on Facebook and on Google+