Paper Number
2646
Paper Type
Complete
Abstract
Artificial Intelligence (AI) and especially Machine Learning (ML) models are ubiquitous in research, business and society. However, the predictions of many ML models are often not transparent for users due to their black box nature. Therefore, several Explainable AI (XAI) methods aiming to provide local explanations for individual ML model predictions have been proposed. Importantly, existing XAI methods relying on surrogate models still have critical weaknesses regarding fidelity, robustness and sensitivity. Thus, we propose a novel method that avoids building surrogate models and instead represents the actual decision boundaries and class subspaces of ML models in a functional and definite manner. Further, we introduce two well-founded measures for the sensitivity of individual data instances regarding changes of their feature values. We theoretically and empirically evaluate the fidelity and robustness of our method on three real-world datasets, showing that it outperforms existing methods, and demonstrate the validity and meaningfulness of our sensitivity measures.
Recommended Citation
Heinrich, Bernd; Krapf, Thomas; and Miethaner, Paul, "EXPLORE: A Novel Method for Local Explanations" (2024). ICIS 2024 Proceedings. 21.
https://aisel.aisnet.org/icis2024/aiinbus/aiinbus/21
EXPLORE: A Novel Method for Local Explanations
Comments
10-AI