Paper Number
ICIS2025-1709
Paper Type
Short
Abstract
The increasing reliance on artificial intelligence (AI) for decision-making highlights the importance of explanations that users can trust and understand. While explainable AI (XAI) has advanced technically, less attention has been given to how explanation framing aligns with users' cognitive styles. Drawing on construal-level theory (CLT), this study proposes a construal-congruence perspective, suggesting that matching users' cognitive construal (concrete vs. abstract) with explanation framing (how vs. why) enhances perceived explanation quality. We further examine how contextual cues, namely familiarity with the AI agent and endowment of the data, moderate this relationship. Through a series of online experiments, we investigate how explanation quality, conceptualized as informativeness, usefulness, and persuasiveness, is influenced by cognitive and contextual alignment. This research contributes to the XAI and information systems literature by advancing a user-centered, context-sensitive explanation framework and offers practical insights for designing more trustworthy AI systems.
Recommended Citation
Mahmud, Hasan, "Bridging Minds and Machines: A Construal-Congruence Perspective on Explainable AI" (2025). ICIS 2025 Proceedings. 17.
https://aisel.aisnet.org/icis2025/user_behav/user_behav/17
Bridging Minds and Machines: A Construal-Congruence Perspective on Explainable AI
Comments
16-UserBehavior