Abstract

In recent years, artificial intelligence (AI)-based systems have been deployed rapidly across organizational settings, transforming how decisions are made, tasks are automated, and human–machine collaboration is structured (Herath Pathirannehelage et al., 2024). Powered by advances in datafication and computational power, this mainstream adoption of AI has intensified interest in explanations and explainability as both technical and socio-cognitive imperatives (Waardenburg & Huysman, 2022). AI systems have attracted significant interest among information systems (IS) scholars, and while numerous studies examine how explainability and explanations affect users' behavior, there is increasing recognition that effective explainability must address not only technical implementation but also the cognitive needs and contextual expectations of end users (Pumplun et al., 2023). Current IS research draws on a diverse range of theoretical perspectives, spanning cognitive psychology, organizational learning, and human–computer interaction, to examine how users interpret, evaluate, and act upon AI-generated explanations. However, this theoretical diversity has produced a fragmented body of work that lacks a coherent foundation for understanding how explainability interacts with domain knowledge to co-shape human–AI interaction. A key challenge, therefore, lies in reconciling these fragmented perspectives into a coherent theoretical foundation that captures how explanations are constructed, interpreted, and situated within domain-specific contexts. Without such a foundation, it remains difficult to theorize how AI explanations shape sense-making, learning, and decision-making in organizational practice. To address this gap, this scoping review synthesizes and critically reflects on existing research on explainable AI. Drawing on this synthesis, the review organizes prior research into six thematic clusters: trust, explainability, cognitive processing, organizational agility, ethical design, and algorithmic management. Each cluster surfaces recurring theoretical lenses and antecedents that have been frequently applied within its domain, collectively offering a theory-informed map to support context-sensitive research on explainable AI systems. This mapping provides a foundation for guiding future research directions and for critically reflecting on how explainability is conceptualized in relation to domain knowledge and human–AI interaction within organizational contexts.

References

Herath Pathirannehelage, S., Shrestha, Y. R., & von Krogh, G. (2024). Design principles for artificial intelligence-augmented decision making: An action design research study. European Journal of Information Systems, 34(2), 207–229. https://doi.org/10.1080/0960085X.2024.2330402

Pumplun, L., Peters, F., Gawlitza, J. F., & Buxmann, P. (2023). Bringing machine learning systems into clinical practice: A design science approach to explainable machine learning-based clinical decision support systems. Journal of the Association for Information Systems, 24(4), 953–979. https://doi.org/10.17705/1jais.00820

Waardenburg, L., & Huysman, M. (2022). From coexistence to co-creation: Blurring boundaries in the age of AI. Information and Organization, 32(4), 100432. https://doi.org/10.1016/j.infoandorg.2022.100432
