Description

Explainable Artificial Intelligence (XAI) is currently an important topic for the application of Machine Learning (ML) in high-stakes decision scenarios. Related research focuses on evaluating ML algorithms in terms of interpretability. However, providing a human-understandable explanation of an intelligent system relates not only to the ML algorithm used: the data and features also have a considerable impact on interpretability. In this paper, we develop a taxonomy for describing XAI systems based on aspects of the algorithm and the data. The proposed taxonomy gives researchers and practitioners the means to describe and evaluate current XAI systems with respect to interpretability, and it guides the future development of this class of systems.

Jan 17th, 12:00 AM

Towards a model- and data-focused taxonomy of XAI systems
