Dishonest behavior is an issue in human-human interactions, and the same may occur in human-robot interactions. To ascertain people's perceptions of dishonesty, we asked participants to evaluate five scenarios in which someone was dishonest towards either a human or a robot, varying the level of autonomy the robot displayed. We also asked how guilty they would feel about being dishonest towards a robot, and why they think people would be dishonest with robots. Regardless of whether the target was a human or a robot at any level of autonomy, participants consistently judged dishonesty to be wrong. They reported low guilt about being dishonest with a robot, and attributed dishonesty towards robots mostly to the robot's lack of capabilities to prevent it, the absence of a human presence, and a human tendency towards dishonesty. These results carry implications for the development of autonomous robots in the future.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
12th International Conference on Social Robotics, ICSR 2020
14/11/20 → 18/11/20