


Predictive business process monitoring (PBPM) comprises a set of techniques for performing different prediction tasks on running business processes, such as predicting the next activity, the process outcome, or the remaining time. Nowadays, deep-learning-based techniques provide the most accurate predictive models. However, the explainability of these models has long been neglected. While predictive quality is essential for PBPM-based decision support systems, their explainability for human stakeholders must also be considered. Explainable artificial intelligence (XAI) describes different approaches for making machine-learning-based techniques explainable. To examine the current state of explainable PBPM techniques, we perform a structured and descriptive literature review. We identify explainable PBPM techniques in the domain and classify them along different XAI-related concepts: prediction purpose, intrinsically interpretable or post-hoc, evaluation objective, and evaluation method. Based on our classification, we identify trends in the domain and remaining research gaps.


