Abstract

Artificial Intelligence (AI) has the potential to inform human decision-making across a wide variety of domains. However, its “black box” character poses an obstacle to human agency in interaction with AI-based decision support. A possible solution comes from the research field of Explainable AI (XAI), which generates explanations that reveal the AI’s functioning to users. Our research on XAI focuses on understanding the immediate and prolonged effects of XAI-based decision support on task performance. To this end, we conducted a randomized between-subjects online experiment with 289 participants performing an image classification task. We find that explanations provided alongside AI decisions boost the positive effect of AI-based decision support on task performance during interaction. Furthermore, explanations can counterbalance the potential negative effect on prolonged task performance that manifests after AI-based decision support is withdrawn. Our findings contribute to understanding the impact of XAI on the outcomes of human-AI interaction.

Paper Number

208

Comments

Track 5: Data Science & Business Analytics
