Abstract

Background: Explainable AI (XAI) aims to bring transparency to opaque algorithmic processes, fostering trust in AI-based applications across diverse sectors. This systematic review examines the challenges of, and advancements in, applying XAI to education. Our objective is to address how XAI can contribute to the educational field.

Method: Following PRISMA guidelines, we systematically searched major academic databases for peer-reviewed studies on XAI in education. After removing duplicates and applying inclusion/exclusion criteria, 35 articles were retained. We then performed a thematic analysis to extract and categorise the reported challenges and advancements.

Results: Our findings reveal 95 challenges and five advancements in the realm of XAI. Thematic analysis grouped these challenges into seven categories: explainability, ethical, technical, human-computer interaction (HCI), trustworthiness, policy and guideline, and others. This analysis deepens our understanding of the implications of XAI in education. Notably, we found a lack of standardisation for performing XAI in educational settings, leading to confusion, particularly around ethics, trustworthiness, technicalities, and explainability, which often overlap and are treated inconsistently across studies. In terms of advancements in XAI for education, only a few articles introduced novel approaches to using XAI in educational settings. This small number of studies presents both a challenge and an opportunity for the research community to fill the gap with further advancements.

Conclusion: This review uncovered the challenges of integrating XAI methods into educational systems. It also identified recent advancements by researchers in integrating XAI methods into educational AI applications. These advancements employ new techniques to enhance the trustworthiness of educational AI applications and deliver more accurate results.