Paper Number
ICIS2025-2415
Paper Type
Short
Abstract
As organizations increasingly use machine learning (ML) systems in decision-making, concerns about algorithmic bias and fairness have intensified. Regulatory frameworks such as the EU AI Act now mandate human oversight, but technical fixes alone are insufficient; effective human-AI collaboration is essential to mitigate biased outcomes. This study investigates how individual differences, specifically AI literacy and Bias Blind Spot, influence reliance on biased algorithmic advice in a human resources (HR) context. In a pre-registered pilot experiment, participants acted as HR managers making promotion decisions, supported by either a biased or an unbiased ML algorithm. Results show that higher AI literacy reduced reliance on biased algorithmic advice, suggesting that AI literacy moderates advice utilization. Contrary to theoretical expectations, participants with high Bias Blind Spot scores were more likely to follow biased advice. These findings offer early insights into how personal traits shape human-algorithm interaction and underscore the importance of user education and awareness in designing fairer decision-support systems.
Recommended Citation
Heimbach, Irina; Ruthsatz, Vera; and Mueller, Oliver, "The Effects of Judge’s AI Literacy and Bias Blind Spot on the Utilization of Biased Algorithmic Advice" (2025). ICIS 2025 Proceedings. 31.
https://aisel.aisnet.org/icis2025/hti/hti/31
The Effects of Judge’s AI Literacy and Bias Blind Spot on the Utilization of Biased Algorithmic Advice
Comments
15-Interaction