To prepare for Judgement Day (sorry, a future that involves communicating with our artificially intelligent friends), psychologists at the University of Washington conducted a study to see what that might look like.
In particular, researchers were concerned with whether people would be willing to hold robots morally accountable for their actions. And why, pray tell, would we need to do that? “We’re moving toward a world where robots will be capable of harming humans,” explains associate professor Peter Kahn. Okay then! Thanks for the heads up?
Rather than go full Stockholm Syndrome, Kahn's team devised a scavenger hunt with a "humanlike" robot named Robovie. Only our man Robovie was actually being controlled by a researcher concealed in another room. Participants chit-chatted with Robovie before they were instructed to find a list of seven objects in the room in order to win $20. Although all 40 participants completed the task, Robovie insisted they had found only five, and mighty condescendingly, we might add.
The majority of participants stood up for themselves. You can watch a video of one such tense interaction here. Sixty-five percent insisted it was Robovie's fault; some argued with the hunk of metal, and others accused it of being a liar.
The researchers frame their findings as a good thing, since "it is likely that many people will hold a humanoid robot as partially accountable for a harm that it causes." We dunno about that. Shouldn't we be more concerned about how the robots react when we tell them they're wrong?