Paper Number
ICIS2025-2429
Paper Type
Complete
Abstract
As AI systems are increasingly deployed in high-stakes domains such as healthcare and finance, concerns about their opacity and inherent uncertainty pose significant challenges to their trustworthiness. While Explainable AI aims to enhance model interpretability, it often overlooks the critical task of communicating uncertainty. This study investigates how integrating explanations with uncertainty quantification affects human-AI collaboration. In an online experiment involving a credit score prediction task, we systematically compare four treatment conditions: (i) AI recommendations alone, (ii) with explanations, (iii) with uncertainty information, and (iv) with both explanations and uncertainty information. We find that only the combined presentation of explanations and uncertainty information significantly improves decision accuracy and user trust while also producing positive spillover effects. By demonstrating the synergistic value of combining these approaches, this study informs the design of trustworthy AI systems for high-stakes applications and supports the socially beneficial integration of digital technologies in the age of AI.
Recommended Citation
Schauer, Andreas and Schnurr, Daniel, "Fostering Trustworthy Human-AI Collaboration through Explainable AI and Uncertainty Quantification" (2025). ICIS 2025 Proceedings. 6.
https://aisel.aisnet.org/icis2025/conf_theme/conf_theme/6
Fostering Trustworthy Human-AI Collaboration through Explainable AI and Uncertainty Quantification
Comments
01-ConferenceTheme