Paper Type

ERF

Abstract

AI hiring systems enhance efficiency but often reinforce biases against marginalized groups. To mitigate overreliance on AI, it is essential to improve users’ bias detection and reduce complacency potential—the tendency to delegate tasks and diminish monitoring. This study examines how AI input explanation richness (rich vs. lean) and bias forewarning interact to influence bias detection and complacency potential in AI hiring. The research builds on the complacency framework and signal detection theory. A 2×2 between-subjects experiment will be conducted. The findings aim to inform the design of AI systems that better mitigate overreliance in AI hiring contexts.

Paper Number

1634

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1634

Comments

SIGADIT

Aug 15th, 12:00 AM

How Input Explanation and Bias Forewarning Shape Users’ Overreliance on AI Hiring Systems
