    Can You Make People Trust the Autonomous Car?

    I just attended an interesting Intel discussion with Jack Weast, the senior principal engineer and chief systems architect at the Intel Autonomous Driving Group. Apparently, 75 percent of U.S. drivers, according to the AAA, are afraid of riding in a self-driving car. The cause, according to Intel’s research, is that people distrust things that make decisions based on mathematical means and thus inherently distrust computers that make decisions. This is certainly problematic if the entire transportation industry is moving to computers driving cars, flying planes or helming ships.

    I’ve been worried about this for some time myself. If people don’t trust this technology, and according to that AAA survey 78 percent don’t, then the total available market for self-driving cars is only about a quarter of the existing market. That is not a formula for success, because we need to reach a critical mass of self-driving cars quickly if the full safety benefits are to be realized.

    So, Intel did a 10-person focus group to understand this problem. Here are the results and my thoughts on them.

    Study of Attitudes to Self-Driving Vehicles

    While they called this a study, at 10 people it was more of a focus group, and those can sometimes be problematic. With this group, they found seven areas of tension. Being engineers, they looked at these tensions and factored them into technology direction, though I think they likely should also have considered psychological methods to address these concerns.

    The people were taken through a journey in a self-driving car. Basically, the car worked like an Uber: it was requested and monitored through an app on their smartphone. The seven areas of tension were:

    Human vs. Machine Judgement

    People believed that certain kinds of driving decisions, like what to do in a traffic circle, need humans, not machines, because those decisions are negotiated rather than absolute. On the other hand, computers won’t be distracted and may be safer in practice. This is the conflict between bad human decisions and a perceived lack of empathy in machines.

    This “empathy” issue seems to come up a lot. A common question is, “What if a car is given the choice between killing a group of kids and saving the driver?” In practice, that choice would be far rarer for a computer driving legally than for a human driver who likely is not. And, more often than not, the human driver wouldn’t have the time or skills needed to make the distinction anyway.

    Personalized vs. Lack of Assistance

    Group members thought it would be great to have the car deliver their kids, or to be able to work while being driven by the car. However, if there is no one else in the car, where is the accountability? For instance, who is keeping the kids from misbehaving or damaging the car?

    This seems to support the idea that the most successful approach may be a service rather than car ownership. You don’t worry about who cares for the Uber that picks you up, but your own car driving and parking itself unmonitored would concern me, as well.

    Making Me Aware vs. Unburdening Me from Being Aware

    Group members were worried about the car making decisions without any input. For instance, if the car suddenly rerouted, people wanted to know why. Or if the car suddenly locked the doors, they wanted to understand why they were being locked in. However, people quickly got to the point where this kind of information became annoying and they didn’t want it anymore. So initially, voice information was helpful, but eventually problematic.

    This suggests that the need to have the car explain why it was doing what it was doing would quickly fade; people would simply get used to it. But initially, cars may need this as a feature, one that owners would eventually turn off as they grew comfortable.

    Giving Up Control of Vehicle vs. Gaining New Control of the Vehicle

    People really didn’t like being in the backseat of a four-seat car with no one at the wheel. What if something happened and they had to intervene? People wanted to be in control if they saw the controls, but if those controls were eliminated, they appeared to be more comfortable. This supports Google’s conclusion that self-driving cars need to lose driver controls because the controls make passengers uncomfortable.

    This is going to be a problem for Toyota’s approach of creating technology that enhances drivers, because it suggests that the needed controls will become a problem for some use cases. However, this might simply mean you’d have a choice: some cars targeting those who want to drive (mostly two-seat sports cars) and others fulfilling the role of people carriers (minivans, sedans, etc.).

    How It Works vs. Proof It Works

    People did want to know how the technology worked. However, when given a choice, they tended to watch where the car was going instead. While the technology was interesting to them, their own visual confirmation that the car worked carried more weight.

    On this, I wonder about focus group contamination because, typically, buyers don’t really want to know the mechanics of what they buy; they just want to know it works. How many know how their microwave or washing machine works? Clearly, there are those who do, but they are generally outnumbered by people who just want to know which button to push to get the thing to work. In addition, when presented with technology in detail, consumers often get overwhelmed or scared by it, which moves them away from, not toward, the sale. So focusing on the proof over the how should produce stronger sales results.

    Tell Me vs. Listen to Me

    The cars did tell folks what was going on, but the group then wanted an Amazon Echo-like experience so they could tell the car what to do or ask it questions. This suggests that the car will need to evolve into the world’s most expensive digital assistant. This shouldn’t be a surprise, though doing it so it doesn’t become annoying will certainly be an initial, if not ongoing, challenge.

    An idea would be to have a Facebook-like verbal social media feature in the car so you could potentially have chats with other people also driving down the road.

    Following the Rules vs. Human Interpretation of the Rules

    Group members wanted the car to be able to break the rules the way they did, which means folks got a tad annoyed with a car that followed them. This is likely going to be a problem, particularly for safety. The autonomous car, to be safe, needs to be very rule-compliant, but if folks want it to speed or drive unsafely, they are going to have issues with the experience. In effect, they want the car to drive aggressively, as they do. Now, the study group did come around to the idea of safe driving practices, but I expect this was the result of prompting by the folks running the study. That kind of behavior change rarely comes naturally, and it suggests a heavy marketing effort focused on safety will be important to success.

    Conclusions

    Trust is important, and when it comes to autonomous cars, it doesn’t yet exist for most potential buyers. If something doesn’t change, the ramp to self-driving cars will be long and tend to fall below expectations. However, these perceptions can be changed. This focus group did come around to the technology reasonably quickly, suggesting these concerns can be mitigated. It is interesting to note that simply seeing the cars work was one of the strongest ways of convincing people that they did work and were safe.

    My Thoughts

    I see this as more of a marketing problem than an engineering problem. Typically, if the problem is perception, while modifying the product to address concerns is certainly important, it is more important to address the perception problem directly with marketing. People really didn’t want touch-screen smartphones, and they didn’t sell well at all, until Apple marketed the benefits. Now folks generally don’t want anything else.

    This study also suggests, given that the test scenario was Uber-like, that Uber’s belief that folks won’t need to own self-driving cars in the future may be well founded. However, given that perceptions aren’t yet set, the industry should decide rather quickly whether it wants individuals to buy cars or use an Uber-like service, before those perceptions harden. Once folks are convinced they no longer need to buy a new car, getting them to flip back will be problematic. Toyota, for one, wants the industry to take a direction where self-driving technology is used as a driver enhancement rather than a driver replacement, to maintain current car sales. The direction the industry develops, and markets, could have a lot to do with whether people even buy cars anymore.

    Wrapping Up: Managing the Perceptions, Building the Market

    This focus group confirms that there is a problem in selling autonomous cars to people today, and that it can be addressed. I think the fix will have more to do with using marketing to change hearts and minds. Focusing on the benefits of higher safety, more free time and, if appropriate, the cost savings of not having to own a car would do the most to get the market ready for the eventual arrival of the self-driving car.

    Apple clearly showcased with the iPhone that perceptions and consumer views can be changed if the effort is made to change them. That is nothing new; however, there is a tendency for engineering companies to focus on fixing the product rather than funding this perception change. I think that will be problematic. If there isn’t a well-funded effort to change buyer perceptions before the first autonomous car arrives, the total available market will start at about 25 percent of potential.

     

    Rob Enderle is President and Principal Analyst of the Enderle Group, a forward-looking emerging technology advisory firm.  With over 30 years’ experience in emerging technologies, he has provided regional and global companies with guidance in how to better target customer needs; create new business opportunities; anticipate technology changes; select vendors and products; and present their products in the best possible light. Rob covers the technology industry broadly. Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group, and held senior positions at IBM and ROLM. Follow Rob on Twitter @enderle, on Facebook and on Google+
