Explainable Artificial Intelligence (XAI) is an emerging research field addressing the problem that users do not trust AI systems that act as black boxes. However, XAI research to date is often criticized for not putting the user at the center of attention. We develop a generic and transferable human-based study design to evaluate explanations generated by XAI methods from the users’ perspective. The design of the study is informed by insights from the social sciences into how humans construct explanations. We conduct the study with 164 participants, who evaluate contrastive explanations generated by representative XAI methods. Our findings reveal which characteristics of explanations users appreciate in the context of XAI: we find concreteness, coherence, and relevance to be decisive. These findings provide guidance for the design and development of XAI methods.
Förster, Maximilian; Klier, Mathias; Kluge, Kilian; and Sigler, Irina, "Evaluating Explainable Artificial Intelligence – What Users Really Appreciate" (2020). In Proceedings of the 28th European Conference on Information Systems (ECIS), An Online AIS Conference, June 15-17, 2020.