Abstract

Machine learning enables computers to learn from data and equips artificial intelligence systems with the capability to make even super-human decisions. Yet, despite already outperforming preexisting methods and even humans in specific tasks such as gaming or healthcare, machine learning faces challenges concerning the trustworthiness of its results beyond training and validation data. This is because many well-performing algorithms are black boxes to their users, who consequently cannot trace and understand the reasoning behind a model’s prediction when making or executing a decision. In response, explainable AI has emerged as a field of study that aims to turn former black box models into glass boxes. However, current explainable AI research often neglects the human factor. Against this backdrop, we study from a user perspective the trade-off between completeness, understood as the accuracy of a model’s predictions, and interpretability, understood as the degree to which a user can comprehend how those predictions are made. In particular, we evaluate, with a focus on the human recipient, how existing explainable AI model transfers can be used, and we derive recommendations for improvements. As a first step, we have identified eleven types of glass box models and laid the foundations of a well-grounded survey design to better understand the factors that support interpretability and to weigh them against improved yet black-boxed completeness.
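To make the completeness–interpretability trade-off concrete, the following minimal sketch (an illustration, not the study’s procedure) distills a black-box classifier into an interpretable surrogate decision tree and measures how much predictive performance the glass box gives up; the choice of models, dataset, and depth limit are all assumptions for demonstration purposes.

```python
# Illustrative sketch only: distilling a black box into a glass-box
# surrogate to expose the completeness vs. interpretability trade-off.
# Assumes scikit-learn is installed; models and dataset are arbitrary choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black box: accurate, but its reasoning is hard to trace.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Glass box: a shallow tree trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

print("black-box accuracy:", black_box.score(X_test, y_test))
print("surrogate accuracy:", surrogate.score(X_test, y_test))
# Fidelity: how often the surrogate agrees with the black box it mimics.
agreement = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print("fidelity:", agreement)
# Unlike the forest, the surrogate's decision rules are directly readable:
print(export_text(surrogate))
```

The gap between the two accuracy scores is one simple operationalization of the completeness sacrificed for interpretability; user studies such as the one outlined in the abstract would then ask whether the readable rules justify that sacrifice for the human recipient.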
