Paper Number

ICIS2025-1006

Paper Type

Complete

Abstract

Artificial Intelligence (AI) systems increasingly support decision-making in high-stakes domains, where appropriate reliance on AI recommendations is vital. Yet many AI systems neglect to convey prediction uncertainty, risking over-reliance or under-reliance on their outputs. This study investigates how uncertainty-aware explanations influence users’ reliance on AI systems through three cognitive mechanisms: mental model calibration, cognitive load, and perceived transparency. In a between-subjects online experiment with 101 healthcare professionals and advanced medical students, participants received diagnostic AI recommendations accompanied by no, moderate, or high levels of uncertainty information. Both moderate and high uncertainty information improved mental model calibration and cognitive load, while high uncertainty information did not increase perceived transparency. Mediation analysis revealed that mental model calibration and cognitive load significantly mediated the effects on appropriate reliance, while perceived transparency did not. These findings suggest that moderate, cognitively optimized uncertainty communication best supports appropriate reliance in critical decision-making contexts.
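As a rough illustration of the kind of mediation analysis the abstract reports, the sketch below estimates a bootstrapped indirect effect through a single mediator (e.g., mental model calibration carrying the effect of uncertainty information on appropriate reliance). It is not the paper's model or data: all variable names, effect sizes, and the simulated dataset are assumptions made for illustration only.

```python
# Hypothetical sketch of a single-mediator analysis with a percentile
# bootstrap CI for the indirect (a*b) effect. Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 101  # matches the study's N; the data itself is simulated

# X: uncertainty-information level (0 = none, 1 = moderate, 2 = high)
x = rng.integers(0, 3, n).astype(float)
# M: mediator (e.g., mental model calibration), partly driven by X
m = 0.5 * x + rng.normal(0, 1, n)
# Y: appropriate reliance, partly driven by M
y = 0.6 * m + 0.1 * x + rng.normal(0, 1, n)

def indirect_effect(x, m, y):
    """a*b: the X -> M path times the M -> Y path (controlling for X)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

# Percentile bootstrap over resampled participants
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```

A study with three candidate mediators, as described in the abstract, would repeat this estimation for each mediator (or fit a parallel-mediator model); the indirect effect is deemed significant when its bootstrap confidence interval excludes zero.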

Comments

15-Interaction

Dec 14th, 12:00 AM

Uncertainty-Awareness in AI-Augmented Decision-Making: Explaining Appropriate Reliance through Cognitive Mechanisms
