Abstract

In the evolving domain of natural language processing (NLP), the emergence of Large Language Models (LLMs) such as the Generative Pre-trained Transformer (GPT), Bidirectional Encoder Representations from Transformers (BERT), and their successors has markedly transformed text generation and comprehension. These models, which leverage deep learning and transformer architectures, have significantly enhanced NLP capabilities. Concurrently, Explainable AI (XAI) has gained prominence for elucidating the inner workings of AI models, particularly LLMs, especially where their outputs inform decision-making. This research-in-progress paper, structured according to the PRISMA guidelines, provides initial insights into the confluence of LLMs and XAI. It explores two research questions: (1) Which XAI methods are utilized to understand LLMs? and (2) How are LLMs leveraged for their explanatory capabilities? Publications from four databases (Web of Science, Scopus, arXiv, and IEEE Xplore) were processed for this systematic review. An initial pool of 96 articles was refined to 21, retaining studies that integrate LLMs and XAI and report empirical findings. The review shows that current XAI research relies primarily on model-agnostic, local, quantitative methods such as SHAP and LIME, often used in combination, to interpret AI decision-making. LLMs such as ChatGPT excel at providing natural-language explanations of XAI outputs, revealing a gap in the direct use of LLMs as explainer methods.
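To make the identified pattern concrete, the sketch below illustrates the combination most common in the reviewed studies: a model-agnostic, local attribution method (SHAP) applied to a classifier, with the numeric attributions then handed to an LLM to verbalize. The toy classifier, dataset, and the commented-out `client.chat` call are illustrative assumptions, not an implementation drawn from any of the reviewed papers.

```python
# Minimal sketch: local, model-agnostic SHAP attributions verbalized by an LLM.
# The dataset, model, and LLM client are hypothetical placeholders.

import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a simple classifier to explain.
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Model-agnostic, local explanation for a single instance.
explainer = shap.Explainer(model.predict, data.data)
explanation = explainer(data.data[:1])

# Select the most influential features and turn them into a prompt;
# any chat-capable LLM (e.g., ChatGPT) could act as the explainer of
# this XAI output.
top_features = sorted(
    zip(data.feature_names, explanation.values[0]),
    key=lambda kv: abs(kv[1]),
    reverse=True,
)[:5]
prompt = (
    "Explain in plain language what these SHAP attributions say about "
    f"the model's prediction: {top_features}"
)
# response = client.chat(prompt)  # hypothetical, provider-specific LLM call
```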
