Paper Type

ERF (Emerging Research Forum)

Abstract

In Information Systems (IS), the trustworthiness and transparency of AI models are critical for adoption and practical use. Explainable AI (XAI) methods, such as LIME, SHAP, and Integrated Gradients, offer local interpretability by attributing model predictions to input features. However, existing approaches suffer from instability, where small changes in input can lead to significant variations in explanations. In this paper, we introduce Curvature-Informed Local Explanations (CILE), a novel algorithm that integrates second-order derivative (Hessian) information into gradient-based explanations to improve the stability of interpretation. We present the design rationale, the mathematical formulation of CILE, and empirical evaluations showing that it provides more consistent explanations than existing methods without sacrificing fidelity, accuracy, or computational feasibility.
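The paper's actual formulation of CILE is not reproduced on this page. Purely as a hypothetical illustration of the general idea stated in the abstract (combining first-order gradients with second-order, Hessian-based curvature information to stabilize local attributions), the following PyTorch sketch down-weights gradient attributions for features that sit in regions of high local curvature. The function name, the weighting scheme, and all parameters here are assumptions made for illustration; they are not the authors' method.

```python
# Hypothetical sketch of a curvature-weighted gradient attribution.
# NOT the CILE formulation from the paper (which is not given on this page);
# it only illustrates how Hessian information could temper gradient-based
# attributions to make them less sensitive to small input perturbations.

import torch


def curvature_weighted_attribution(model, x, target):
    """Gradient attribution scaled down where estimated local curvature is high.

    model  : callable mapping an input tensor to class scores
    x      : 1-D input tensor (a single example)
    target : index of the class score being explained
    """
    x = x.clone().requires_grad_(True)
    score = model(x)[target]

    # First-order term: d(score) / d(x), kept in the graph so we can
    # differentiate it again for Hessian-vector products.
    grad = torch.autograd.grad(score, x, create_graph=True)[0]

    # Second-order term: diagonal of the Hessian, estimated with one
    # Hessian-vector product per coordinate (acceptable for small inputs).
    hess_diag = torch.zeros_like(x)
    for i in range(x.numel()):
        unit = torch.zeros_like(x)
        unit[i] = 1.0
        hvp = torch.autograd.grad(grad, x, grad_outputs=unit, retain_graph=True)[0]
        hess_diag[i] = hvp[i]

    # Down-weight features whose gradients lie in sharply curved regions,
    # where first-order attributions are least stable.
    weights = 1.0 / (1.0 + hess_diag.abs())
    return (grad * weights).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    net = torch.nn.Sequential(
        torch.nn.Linear(4, 8), torch.nn.Tanh(), torch.nn.Linear(8, 3)
    )
    x0 = torch.randn(4)
    print(curvature_weighted_attribution(net, x0, target=1))
```

In this sketch, attributions shrink where the model's output surface curves sharply, which is one plausible way second-order information could reduce explanation instability; the actual CILE algorithm and its evaluation should be taken from the paper itself.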

Paper Number

2326

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/2326

Comments

SIGDSA

Aug 15th, 12:00 AM

Curvature-Informed Local Explanations (CILE): Improving Stability and Trustworthiness in Explainable AI
