Worrying About Hurting a Computer's Feelings

Don Tennant

Do you interact with your computer as if it were a person? Of course not, right? You're an IT professional who's far too tech-savvy to anthropomorphize a machine the way a child might, no? Well, don't be so sure.

According to Clifford Nass, a professor at Stanford University and author of "The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships," you subconsciously treat your computer as if it had human emotions. The book is the culmination of years of research showing that virtually any finding about social interaction between people also applies to people's interactions with computers.

A lot of what Nass got across in his book was condensed into a great piece he wrote for The Wall Street Journal last month. Here's an excerpt:

If you were asked how much you liked, say, a plate of lasagna, you would undoubtedly say nicer things to the chef than you would to a person who had no connection to the chef. This would be the polite thing to do. Would you also be overly nice to a computer that tutored you for 30 minutes and then asked how well it taught you?

To find out, I ran an experiment at Stanford University. After being tutored by a computer, half of the participants were asked about the computer's performance by the computer itself and the other half were asked by an identical computer across the room. Remarkably, the participants gave significantly more positive responses to the computer that asked about itself than they did to the computer across the room. These weren't overly sensitive people: They were graduate students in computer science and electrical engineering, all of whom insisted that they would never be polite to a computer.

I spoke with Nass earlier this week, and we had a fascinating discussion. I understood that our tendency to anthropomorphize computers lets researchers use our interactions with them to study human interaction, but I wanted to know why we anthropomorphize them in the first place. Nass explained that it's an involuntary phenomenon:

We've evolved in a world in which anything that could talk like a person, or use language, or express emotion, was human. Now, along come these tricky 20th- and 21st-century technologies, and our brains never evolved an on/off switch that says whether it's a machine or a person. Computers tap into the most fundamental, primitive ways our brains work, and tell us, "Hey, this is something worthy of social interaction."

Nass' finding that people tend to give software a higher approval rating when they evaluate it on the same computer they used to run it than when they evaluate it on a different computer raises an important question: Is an evaluation done on a different computer a more accurate measure of the quality and usability of the software? Nass:

Yes, it definitely is. There have been cases where a company tests its products against someone else's, and it will have people answer questions about its software on the same computer, and the other guy's [competing software] on a different computer, to improve its results, rather than to get the right answer. There's a difference between getting the results you want and getting the right answer.

That there are those who would take advantage of our unwillingness to hurt our computer's feelings isn't nearly as surprising to most of us as the unwillingness itself. Now if we can just get computers to show us how to avoid disparaging others, even when (no, especially when) we do it behind the backs of those we disparage.


