Abstract

While Explainable Artificial Intelligence (XAI) has advanced, its practical adoption is hindered by a gap between how explanations are generated and how humans reason. This paper proposes and evaluates an iterative framework that integrates expert knowledge elicitation to refine neural network models using the IANN (Importance Aided Neural Network) method and that generates selective explanations for classification tasks on tabular data. We hypothesize that two practices can improve AI systems: (1) incorporating expert knowledge during modeling, which enhances model performance, and (2) applying selective explanations, which, compared to non-selective approaches, better calibrate users' confidence in the explanations. The expected outcome is a more robust, reliable, and transparent AI system, with direct applications in critical sectors such as finance and healthcare.
