Abstract
Effective uncertainty communication is a key assistive tool in human-AI systems, reducing both overreliance and unwarranted aversion by clarifying how reliable predictions are. AI predictions generally involve two broad types of uncertainty: aleatoric, arising from inherent randomness in the data, and epistemic, stemming from limited or biased data. This study investigates how these uncertainty types differentially affect user trust, to inform communication strategies that better align user confidence with model reliability. We conducted a two-stage experiment with 240 participants performing a health insurance cost prediction task. Results show that both higher epistemic and higher aleatoric uncertainty decrease trust, while lower uncertainty increases trust relative to AI-only predictions, with aleatoric uncertainty exerting the stronger effect. Subjective measures also showed modest gains in perceived trust, satisfaction, and usefulness when uncertainty was communicated. These findings underscore the value of transparent uncertainty communication for improving human–AI collaboration in domains such as healthcare and finance. Future work will examine alternative approaches to communicating these two types of uncertainty.
Recommended Citation
Asrzad, Amir; Tripathi, Sambit; and Li, Xiao-Bai, "Layers of Uncertainty: How Aleatoric and Epistemic Cues Shape Trust in Human–AI Collaboration" (2025). Proceedings of the 2025 Pre-ICIS SIGDSA Symposium. 69.
https://aisel.aisnet.org/sigdsa2025/69