Paper Type
ERF
Abstract
AI hiring systems enhance efficiency but often reinforce biases against marginalized groups. Mitigating overreliance on AI therefore requires improving users' bias detection and reducing complacency potential, the tendency to delegate tasks to the system and to monitor it less. This study examines how the richness of AI input explanations (rich vs. lean) and bias forewarning interact to influence bias detection and complacency potential in AI hiring. The research builds on the complacency framework and signal detection theory, and a 2×2 between-subjects experiment will be conducted. The findings aim to inform the design of AI systems that better mitigate overreliance in AI hiring contexts.
Paper Number
1634
Recommended Citation
GAO, Jianing; Xu, David (Jingjun); and Liu, Ben, "How Input Explanation and Bias Forewarning Shape Users’ Overreliance on AI Hiring Systems" (2025). AMCIS 2025 Proceedings. 27.
https://aisel.aisnet.org/amcis2025/sigadit/sigadit/27
How Input Explanation and Bias Forewarning Shape Users’ Overreliance on AI Hiring Systems
Comments
SIGADIT