Paper Number

ICIS2025-1198

Paper Type

Short

Abstract

AI-powered learning platforms promise personalized upskilling, yet often face employee distrust and low uptake. Grounded in the trust, engagement, and explainable AI (XAI) literature, this research-in-progress examines how alternative explanation designs influence users’ trust in and behavioral engagement with a corporate learning recommender. Using a Design Science Research process, we are developing a hybrid recommendation engine and an interface that provides feature-based and counterfactual explanations. A two-week field experiment with about 30 knowledge workers (two explainable conditions vs. one baseline condition) is planned to measure post-study trust and enrolment and completion rates, and to collect qualitative feedback. Expected contributions include empirically validated design principles for providing explanations, deeper insight into the trust-engagement nexus in workplace learning, and practitioner guidance for explainable, employee-centric AI deployment. By extending XAI scholarship to corporate Learning & Development, the study addresses an identified research gap and supports responsible AI adoption in organizations.

Comments

16-UserBehavior

Dec 14th, 12:00 AM

Fostering Trust and Engagement in AI-Powered Corporate Learning: Investigating the Role of Explainability