Event Title
An Investigation to reduce Overreliance on Explainable AI (XAI) in light of Two System Theory
Paper Type
ERF
Description
As technology evolves, the use of AI systems is on the rise. This increased use has revealed issues of gender and racial bias. Explainable AI (XAI) has been introduced to address these issues, but its use has triggered various cognitive biases, leading to problems such as overreliance. In this study, we seek to devise interventions that mitigate overreliance on AI by better understanding cognitive biases and by acknowledging that users differ in their cognitive abilities, something designers of XAI systems must take into account. We will conduct multiple experiments using the recidivism dataset collected by ProPublica to develop a better understanding of overreliance and solutions to mitigate it. The findings from this research will allow us to design better XAI systems, improving user trust in AI and further improving AI adoption.
Paper Number
1251
Recommended Citation
Ur Rehman, Mati and Chen, Rui, "An Investigation to reduce Overreliance on Explainable AI (XAI) in light of Two System Theory" (2023). AMCIS 2023 Proceedings. 1.
https://aisel.aisnet.org/amcis2023/sig_core/sig_core/1
Comments
SIG CORE