Paper Number

2695

Paper Type

short

Description

Advanced AI models make accurate predictions for complex problems, but they often operate as black boxes. This lack of interpretability poses significant challenges, especially in high-stakes applications such as finance, healthcare, and criminal justice. Explainable AI seeks to address these challenges by developing methods that provide explanations meaningful to humans. Black box models inevitably produce prediction errors, and appropriately explaining these incorrect predictions is important; this problem, however, has not been addressed in the literature. In this study, we propose a novel method for explaining cases misclassified by black box models. The method takes a counterfactual explanation approach: it builds a decision tree to find the best counterfactual examples for explanations, and rectifies incorrect predictions using a trust score measure. We validate the method in an evaluation study using real-world data.
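
The abstract only outlines the approach, so the sketch below is illustrative rather than the authors' implementation. It assumes a binary classification task, a random-forest black box, a shallow decision tree trained as a surrogate of the black box, and a nearest-neighbor trust score in the spirit of Jiang et al. (2018): a prediction whose nearest same-class training point is farther away than the nearest other-class point scores below 1 and becomes a candidate for rectification, and the counterfactual is the closest training example that the surrogate tree assigns to the opposite class. All model choices, names, and thresholds here are assumptions.

# Illustrative sketch only -- the paper's actual method is not given in the
# abstract; the models, the trust-score threshold, and the surrogate tree
# are assumptions made for demonstration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def trust_score(x, X_train, y_train, predicted_class):
    # Ratio of the distance to the nearest training point of any OTHER
    # class to the distance to the nearest point of the PREDICTED class
    # (after Jiang et al., 2018). Values below 1 flag a suspect prediction.
    d_pred = NearestNeighbors(n_neighbors=1).fit(
        X_train[y_train == predicted_class]).kneighbors([x])[0][0, 0]
    d_other = NearestNeighbors(n_neighbors=1).fit(
        X_train[y_train != predicted_class]).kneighbors([x])[0][0, 0]
    return d_other / max(d_pred, 1e-12)

X, y = make_classification(n_samples=600, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Surrogate decision tree trained on the black box's own labels,
# used here only to locate counterfactual examples.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0).fit(
    X_tr, black_box.predict(X_tr))

x = X_te[0]
pred = int(black_box.predict([x])[0])
score = trust_score(x, X_tr, y_tr, pred)
rectified = pred if score >= 1.0 else 1 - pred  # flip low-trust predictions

# Counterfactual: the nearest training example that the surrogate routes
# to the opposite class of the black box's prediction.
mask = surrogate.predict(X_tr) == 1 - pred
counterfactual = X_tr[mask][np.argmin(np.linalg.norm(X_tr[mask] - x, axis=1))]
print(f"prediction={pred}  trust={score:.2f}  rectified={rectified}")
print("feature changes needed:", np.round(counterfactual - x, 2))

In practice the trust-score threshold and the distance metric would be tuned on held-out data rather than fixed at 1.0, which is presumably part of what the paper's evaluation study examines.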

Comments

13-DataAnalytics

Dec 11th, 12:00 AM

Counterfactual Explanations for Incorrect Predictions Made by AI Models
