Description
The need to derive explanations from machine learning (ML)-based AI systems has been addressed in recent research due to the opacity of their processing. However, a significant number of productive AI systems are not based on ML but are expert systems that are similarly opaque. The resulting lack of understanding causes massive inefficiencies in business processes that involve opaque expert systems. This work draws on recent research interest in explainable AI (XAI) to generate knowledge for the design of explanations in constraint-based expert systems. Following the Design Science Research paradigm, we develop design requirements and design principles. Subsequently, we design an artifact and evaluate it in two experiments. We observe the following phenomena. First, global explanations in a textual format were well received. Second, abstract local explanations improved comprehensibility. Third, contrastive explanations successfully assisted in the resolution of contradictions. Finally, a local tree-based explanation was perceived as challenging to understand.
Recommended Citation
Bode, Jan; Schemmer, Max; and Balyo, Tomáš, "Explainable AI for Constraint-Based Expert Systems" (2022). Wirtschaftsinformatik 2022 Proceedings. 8.
https://aisel.aisnet.org/wi2022/student_track/student_track/8