Abstract

Fairness is an important aspect of individual and team interactions, and this also applies to human-robot interaction (HRI). Especially when intelligent robots provide services to multiple humans, those humans may feel treated unfairly by the robot. Most work in this area deals with fair algorithms, task allocation, and decision support. This work focuses on a different, little-explored perspective: fairness in human-robot teams from a human-centered point of view. We present an experiment in which a service robot was responsible for distributing resources among competing team members and investigated how different distribution strategies influence perceived fairness and the perception of the robot. Our study shows that humans may perceive technically efficient algorithms as unfair, especially when they personally experience negative consequences. This also had a negative impact on how the robot was perceived, which should be considered in the design of future robots.
