Paper Number

1447

Paper Type

Complete Research Paper

Abstract

Explainable Artificial Intelligence (XAI) aims to automatically generate explanations alongside AI decisions to assist users in interacting with AI systems. In the realm of user-centric XAI, research efforts focus on XAI approaches that specifically address users' needs for interpretability. Ultimately, explanations must be tailored to each individual user's needs. Against this background, we propose a novel approach to personalize automatically generated explanations of AI decisions. For a given context and given explanation objectives, our XAI Personalization Approach automatically adjusts the hyperparameters of an XAI method for individual users. The approach processes individual user interaction data using contextual bandits to increase the effectiveness of the resulting explanations. We demonstrate the practical applicability of our approach and evaluate its efficacy in a user study on an image classification task. Results suggest that explanations personalized by our approach are more effective than one-size-fits-all user-centric explanations.
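To illustrate the kind of mechanism the abstract describes, the following is a minimal sketch, not the paper's implementation, of a LinUCB-style contextual bandit that selects among candidate hyperparameter settings of an XAI method (the "arms") given a user context vector and learns from an observed effectiveness reward. All class names, arm definitions, and feature values below are hypothetical.

import numpy as np

class LinUCB:
    """LinUCB contextual bandit: one linear reward model per arm."""
    def __init__(self, n_arms, context_dim, alpha=1.0):
        self.alpha = alpha  # exploration strength
        self.A = [np.eye(context_dim) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(context_dim) for _ in range(n_arms)]  # per-arm reward vectors

    def select(self, context):
        # Compute an upper-confidence-bound score per arm and pick the highest.
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ context + self.alpha * np.sqrt(context @ A_inv @ context))
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        # Incorporate the observed reward for the chosen arm.
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

# Hypothetical arms: candidate hyperparameter settings of an XAI method,
# e.g. the number of image regions highlighted by a saliency-style explainer.
arms = [{"n_segments": 25}, {"n_segments": 50}, {"n_segments": 100}]
bandit = LinUCB(n_arms=len(arms), context_dim=4)

context = np.array([0.3, 1.0, 0.0, 0.7])  # toy user/interaction features
chosen = bandit.select(context)            # pick a hyperparameter setting for this user
reward = 1.0                               # e.g. the user rated the explanation as helpful
bandit.update(chosen, context, reward)

In such a setup, the reward signal would come from user interaction data (e.g. ratings or task performance), so the bandit gradually favors the hyperparameter setting that yields the most effective explanations for each user context.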


Exploring XAI Users' Needs: A Novel Approach to Personalize Explanations Using Contextual Bandits

