Paper Number

ICIS2025-1709

Paper Type

Short

Abstract

The increasing reliance on artificial intelligence (AI) for decision-making highlights the importance of explanations that users can trust and understand. While explainable AI (XAI) has advanced technically, less attention has been paid to how explanation framing aligns with users' cognitive styles. Drawing on construal-level theory (CLT), this study proposes a construal-congruence perspective, suggesting that matching users' cognitive construal (concrete vs. abstract) with explanation framing (how vs. why) enhances perceived explanation quality. We further examine how contextual cues, namely familiarity with the AI agent and endowment of the data, moderate this relationship. Through a series of online experiments, we investigate how explanation quality, conceptualized as informativeness, usefulness, and persuasiveness, is influenced by cognitive and contextual alignment. This research contributes to the XAI and information systems literature by advancing a user-centered and context-sensitive explanation framework and offers practical insights for designing more trustworthy AI systems.

Comments

16-UserBehavior

Title

Bridging Minds and Machines: A Construal-Congruence Perspective on Explainable AI
