Paper Number
ICIS2025-2709
Paper Type
Short
Abstract
We investigate whether dialogue-based explanations from interactive XAI foster appropriate reliance in human-AI decision making. Guided by dual-process theory and recent XAI literature, we conceptualize human-AI interaction through explanations as a conversational process that supports verification, trust calibration, and deliberate engagement. Using a deception-detection task, we compare static-visualization and natural-language dialogue-based explanation modalities to assess their impact on trust, confidence, and learned knowledge. We expect natural-language dialogue to enhance users’ ability to accept correct AI advice and reject incorrect advice, leading to appropriate reliance on AI advice in human-AI decision making and improved human-AI team performance. Our results will offer practical implications for system designers and policymakers seeking to promote transparent, accountable, and cognitively accessible AI systems.
Recommended Citation
Lomo, Marvin Adjei Kojo and Singh, Rahul, "Interactive XAI and Appropriate Reliance: The Role of Conversations in Effective Human-AI Decision-Making" (2025). ICIS 2025 Proceedings. 35.
https://aisel.aisnet.org/icis2025/hti/hti/35
Interactive XAI and Appropriate Reliance: The Role of Conversations in Effective Human-AI Decision-Making
Comments
15-Interaction