Paper Number
ECIS2025-1458
Paper Type
SP
Abstract
Artificial Intelligence (AI) is increasingly used to augment human decision-making. However, especially in high-stakes domains, the integration of AI requires human oversight to ensure trustworthy use. To address this challenge, emerging research on Explainable AI (XAI) focuses on developing and investigating methods to generate explanations for AI outcomes. Yet, current approaches often yield limited explanations, neglecting the various sources of uncertainty that strongly influence AI-augmented decision-making. This paper presents a first step toward establishing a foundation for future research on uncertainty-aware XAI. By applying the Extended Taxonomy Design Process, we aim to develop an integrated, hierarchical taxonomy that structures the key characteristics of uncertainty-aware XAI. Through this approach, we identify four primary sources of uncertainty: data uncertainty, AI model uncertainty, XAI method uncertainty, and human uncertainty. Furthermore, we propose a preliminary taxonomy as an initial foundational framework for the future design and evaluation of uncertainty-aware XAI.
Recommended Citation
Förster, Maximilian; Hagn, Michael; Hambauer, Nico; Jaki, Paula Kathrin Viktoria; Obermeier, Andreas Alexander; Pinski, Marc; Schauer, Andreas; and Schiller, Alexander, "A Taxonomy for Uncertainty-Aware Explainable AI" (2025). ECIS 2025 Proceedings. 5.
https://aisel.aisnet.org/ecis2025/ai_org/ai_org/5
A Taxonomy for Uncertainty-Aware Explainable AI