Is This System Biased? – How Users React to Gender Bias in an Explainable AI System
Human Computer / Robot Interaction
Paper Number
1990
Paper Type
Completed
Description
Biases in Artificial Intelligence (AI) can reinforce social inequality. Increasing the transparency of AI systems through explanations can help avoid the negative consequences of such biases. However, little is known about how users evaluate explanations of biased AI systems. We therefore apply Psychological Contract Violation Theory to investigate the implications of a gender-biased AI system for user trust. We allocated 339 participants to three experimental groups, each using a different version of a loan-forecasting AI system: an explainable gender-biased system, an explainable neutral system, and a non-explainable system. We demonstrate that only users with moderate to high general awareness of gender stereotypes in society, i.e., stigma consciousness, perceive the gender-biased AI system as untrustworthy. Users with low stigma consciousness, in contrast, perceive the gender-biased AI system as trustworthy because it is more transparent than a system without explanations. Our findings show that AI biases can reinforce social inequality when they match human stereotypes.
Recommended Citation
Jussupow, Ekaterina; Meza Martínez, Miguel Angel; Maedche, Alexander; and Heinzl, Armin, "Is This System Biased? – How Users React to Gender Bias in an Explainable AI System" (2021). ICIS 2021 Proceedings. 11.
https://aisel.aisnet.org/icis2021/hci_robot/hci_robot/11
Comments
10-HCI