Abstract

The rapid development of Artificial Intelligence (AI) has brought about significant changes in various industries. However, the opacity of AI systems can lead to blind trust, distrust, or even avoidance of these systems altogether. Against this background, counterfactual explanation methods generate human-understandable explanations of how an AI system arrives at a given output. However, the thorough integration of categorical variables into the generation of counterfactual explanations remains a challenging research problem. In this paper, we investigate by means of a user study how a human-like handling of categorical variables in a state-of-the-art counterfactual explanation approach affects users’ perception of the resulting explanations. The results show that a human-like handling of categorical variables leads to explanations that users prefer. Moreover, we find that the handling of categorical variables affects users’ perceptions of the concreteness, coherence, and relevance of the generated counterfactual explanations.

Paper Number

363

Comments

Track 5: Data Science & Business Analytics
