Abstract
The emergence of machine learning (ML) based artificial intelligence (AI) brings about fear because of its power and uncontrollability. In response, scientists and engineers are developing explainable AI (XAI) techniques to address this concern. However, the literature lacks a systematic approach for assessing the various XAI techniques in a balanced and comprehensive manner. To address this gap, we survey current XAI techniques and propose an integrated framework with three evaluation criteria (correlation, completeness, and complexity) to evaluate XAI. Applying this framework, we find that the rule extraction method is the most advanced and promising among current XAI techniques.
Recommended Citation
Cui, Xiaocong; Lee, Jung Min; and Hsieh, J. Po-An, "An Integrative 3C evaluation framework for Explainable Artificial Intelligence" (2019). AMCIS 2019 Proceedings. 10.
https://aisel.aisnet.org/amcis2019/ai_semantic_for_intelligent_info_systems/ai_semantic_for_intelligent_info_systems/10
An Integrative 3C evaluation framework for Explainable Artificial Intelligence