An Integrative 3C Evaluation Framework for Explainable Artificial Intelligence

Abstract

The emergence of machine learning (ML) based artificial intelligence (AI) has brought about fear because of its power and uncontrollability. In response, scientists and engineers are developing explainable AI (XAI) techniques to address this concern. However, the literature lacks a systematic approach for assessing the various XAI techniques in a balanced and comprehensive manner. To address this gap, we survey current XAI techniques and propose an integrated framework with three evaluation criteria (correlation, completeness, and complexity) to evaluate XAI. Applying this framework, we find that rule extraction is the most advanced and promising method among current XAI techniques.
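The abstract does not specify how the three criteria are operationalized, so the Python sketch below is purely illustrative: it assumes correlation is measured as fidelity (agreement between the black-box model's predictions and the explanation's predictions), completeness as the fraction of inputs the explanation covers, and complexity as the size of the extracted rule set. All function names and formulas here are hypothetical assumptions, not the paper's definitions.

    # Illustrative sketch of the 3C criteria (correlation, completeness,
    # complexity). The abstract gives no formulas; the operationalizations
    # below are assumptions chosen for a rule-extraction explanation.
    import numpy as np

    def correlation(bb_preds, xai_preds):
        # Assumed: a fidelity-style measure, i.e. the rate of agreement
        # between black-box predictions and explanation predictions.
        bb_preds = np.asarray(bb_preds)
        xai_preds = np.asarray(xai_preds)
        return float(np.mean(bb_preds == xai_preds))

    def completeness(covered_mask):
        # Assumed: the fraction of inputs the explanation applies to
        # (e.g., at least one extracted rule fires on the input).
        covered_mask = np.asarray(covered_mask, dtype=bool)
        return float(np.mean(covered_mask))

    def complexity(rules):
        # Assumed: explanation size, measured as the total number of
        # conditions across all extracted rules (lower is simpler).
        return sum(len(conditions) for conditions in rules)

    # Toy example with a hypothetical extracted rule set.
    bb = [1, 0, 1, 1, 0]                       # black-box predictions
    sur = [1, 0, 1, 0, 0]                      # rule-set predictions
    covered = [True, True, True, False, True]  # rule coverage per input
    rules = [["x1 > 0.5", "x2 <= 3"], ["x3 == 'a'"]]

    print("correlation:", correlation(bb, sur))    # 0.8
    print("completeness:", completeness(covered))  # 0.8
    print("complexity:", complexity(rules))        # 3

Under these assumptions, a good XAI technique would score high on correlation and completeness while keeping complexity low; the specific trade-off among the three is what the paper's framework is designed to surface.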
