Explainable AI for Constraint-Based Expert Systems

Jan 17th, 12:00 AM

Description

The need to derive explanations from machine learning (ML)-based AI systems has been addressed in recent research due to the opaqueness of their processing. However, a significant number of AI systems in productive use are not based on ML but are expert systems, which can be just as opaque. The resulting lack of understanding causes massive inefficiencies in business processes that involve opaque expert systems. This work builds on the recent research interest in explainable AI (XAI) to generate knowledge for the design of explanations in constraint-based expert systems. Following the Design Science Research paradigm, we develop design requirements and design principles. Subsequently, we design an artifact and evaluate it in two experiments. We observe the following phenomena. First, global explanations in a textual format were well received. Second, abstract local explanations improved comprehensibility. Third, contrastive explanations successfully assisted in the resolution of contradictions. Finally, a local tree-based explanation was perceived as challenging to understand.
