Paper Number

ECIS2025-1509

Paper Type

CRP

Abstract

AI-assisted decision-making often underperforms due to users' difficulties in effectively interacting with AI-based systems. This study investigates how example-based explanations—specifically factual and counterfactual explanations—affect users' decision-making performance and their tendency to overrule algorithmic advice in a sales forecasting task. We also examine the mediating role of cognitive load. Analyzing 1,330 forecasts from an online lab experiment, we find that factual explanations significantly enhance forecasting performance by enabling users to overrule algorithmic advice more effectively. Counterfactual explanations also yield performance gains, but the increase is smaller and arises primarily from reduced deviation from the AI's advice, driven by cognitive overload. Our findings suggest that factual explanations align well with human cognitive processes, facilitating better decision outcomes, while counterfactuals may overwhelm users cognitively. This study contributes to a deeper understanding of explainable AI design in decision-making contexts, emphasizing the importance of aligning explanations with users' cognitive capacities.

Author Connect URL

https://authorconnect.aisnet.org/conferences/ECIS2025/papers/ECIS2025-1509

Jun 18th, 12:00 AM

Improving AI-Assisted Decision-Making: Insights into Example-Based Explanations and Cognitive Load in Sales Forecasting

