ACIS 2024 Proceedings
Abstract
As artificial intelligence (AI) increasingly supports human decision-making across domains ranging from hiring to legal judgments, understanding how to optimize human-AI collaboration is crucial for developing trustworthy and effective systems. This study explores how different human-AI collaboration modes affect collaboration effectiveness and users’ reliance on AI recommendations, particularly in the context of detecting stereotypical biases. Drawing on the literature on conjoined agency between humans and AI, this study differentiates three distinct modes of human-AI collaboration (“human initiates task and AI assists,” “AI screens all tasks and human assists,” and “AI automation with human oversight”) and examines their varying effects on bias detection and human acceptance of AI recommendations. Additionally, we test the moderating role of equivocality in decision-making. Our study employs a 4×2 experimental design, including a fully manual control group, to test these hypotheses. This research contributes to the theoretical understanding of human-AI interaction and provides practical insights for designing more equitable and trusted AI systems.
Recommended Citation
Mou, Danlei; Cui, Tingru; Holtta-Otto, Katja; Du, Bo; and Tong, Jiawei, "Effective Stereotypical Bias Detection: The Impact of Human-AI Collaboration Modes on Human Reliance on AI Recommendation" (2024). ACIS 2024 Proceedings. 146.
https://aisel.aisnet.org/acis2024/146