Abstract

Generative Artificial Intelligence (GenAI), particularly Retrieval-Augmented Generation (RAG) systems, is rapidly transforming how organizations access information and support decision-making, embodying the shift towards more autonomous and interactive intelligent systems. While RAG enhances knowledge retrieval by integrating Large Language Models (LLMs) with information retrieval techniques, conventional implementations often rely on vector similarity search, which operates as a "black box." This opacity hinders user trust, accountability, and the interpretability of AI-generated outputs, posing significant challenges in knowledge-intensive service contexts such as consulting, where decisions carry high stakes. This research will explore an approach to addressing this explainability gap by integrating Knowledge Graphs (KGs) into the RAG architecture (KG-enhanced RAG). We will present preliminary findings from an ongoing longitudinal case study conducted within a consulting firm, examining the implementation and use of both a traditional vector-based RAG system and a KG-enhanced RAG system designed to centralize knowledge and support employee training and project work. The core argument is that KG-enhanced RAG, by replacing opaque vector search with structured, graph-based querying, improves the interpretability and transparency of the retrieval process: users can better understand why specific information was retrieved and used to generate a response, in line with Explainable AI (XAI) principles. The research will outline a comparative analysis of the two RAG approaches observed in the case study, highlighting how KG-enhanced RAG fosters greater user trust, potentially reduces LLM hallucinations through better-contextualized information, and facilitates more effective human-AI collaboration in decision-making processes. Theoretically framed by XAI and Sociotechnical Systems (STS) perspectives, this research contributes to work on Generative AI in Information and Service Systems by investigating how designing for explainability shapes the transformation of IS into more trusted, autonomous agents and affects value co-creation in AI-augmented service environments. The talk aims to share these initial insights, discuss the practical implications for developing more reliable and interpretable GenAI applications in service systems, and solicit feedback from the audience on the challenges, benefits, and future research directions for explainable RAG systems in organizational settings.
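To make the contrast concrete, the sketch below illustrates in minimal Python the difference between the two retrieval styles the abstract describes. It is a hypothetical illustration, not the system studied in the case: the toy triples, entity names, and function names (vector_retrieve, graph_retrieve) are all assumptions. The key point is that graph traversal can return an explanation trace alongside the retrieved facts, whereas similarity search yields only scores.

```python
# Hypothetical sketch: vector retrieval returns ranked passages with scores
# as the only rationale; KG retrieval returns facts plus the traversal path
# that justified them. Toy data and names are illustrative assumptions.
from dataclasses import dataclass, field


def vector_retrieve(query_vec, index, k=2):
    """Rank (passage, embedding) pairs by cosine similarity.

    The caller sees *what* was retrieved but not *why* -- the score is the
    only rationale, which is the "black box" opacity the abstract refers to.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0
    return sorted(index, key=lambda item: cos(query_vec, item[1]), reverse=True)[:k]


@dataclass
class GraphRetrievalResult:
    facts: list                                      # (subject, predicate, object) triples
    explanation: list = field(default_factory=list)  # human-readable traversal trace


def graph_retrieve(kg, start_entity, max_hops=2):
    """Breadth-first expansion from a query entity over an in-memory triple list.

    Each retrieved fact is paired with the hop that produced it, so a user can
    inspect exactly why each piece of context reached the LLM prompt.
    """
    result = GraphRetrievalResult(facts=[])
    frontier, seen = [start_entity], {start_entity}
    for hop in range(1, max_hops + 1):
        next_frontier = []
        for entity in frontier:
            for s, p, o in kg:
                if s == entity and o not in seen:
                    result.facts.append((s, p, o))
                    result.explanation.append(f"hop {hop}: {s} -[{p}]-> {o}")
                    seen.add(o)
                    next_frontier.append(o)
        frontier = next_frontier
    return result


# Toy consulting-domain knowledge graph (purely illustrative).
kg = [
    ("ProjectAlpha", "usesMethod", "AgileDelivery"),
    ("AgileDelivery", "documentedIn", "PlaybookV3"),
    ("ProjectAlpha", "ledBy", "TeamNorth"),
]
res = graph_retrieve(kg, "ProjectAlpha")
print("\n".join(res.explanation))
# hop 1: ProjectAlpha -[usesMethod]-> AgileDelivery
# hop 1: ProjectAlpha -[ledBy]-> TeamNorth
# hop 2: AgileDelivery -[documentedIn]-> PlaybookV3
```

In a KG-enhanced RAG pipeline, a trace like this could be surfaced to the user alongside the generated answer, which would be the kind of transparency mechanism the abstract attributes to graph-based querying.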

Comments

tpp1243
