Abstract

Clinical decision support systems (CDSSs) based on machine learning (ML) promise to improve medical care. Technically, such CDSSs are already feasible, but physicians have remained skeptical about their application. In particular, their opacity is a major concern, as erroneous outputs from ML-based CDSSs could be overlooked by physicians, potentially causing serious consequences for patients. Research in explainable AI (XAI) offers methods with the potential to increase the explainability of black-box ML systems, which could significantly accelerate the application of ML-based CDSSs in medicine. However, XAI research to date has been mainly technically driven and has neglected the needs of end users. To better engage the users of ML-based CDSSs, we applied a design science approach to develop a design for explainable ML-based CDSSs that incorporates insights from the XAI literature while simultaneously addressing physicians’ needs. This design comprises five design principles that designers of ML-based CDSSs can apply to implement user-centered explanations and that are instantiated in a prototype of an explainable ML-based CDSS for lung nodule classification. We rooted the design principles, and the prototype derived from them, in a body of justificatory knowledge consisting of the XAI literature, the concept of usability, and an online survey study among N1 = 57 physicians. We refined the design principles and their instantiation by conducting walk-throughs with N2 = 6 radiologists. A final experiment with N3 = 45 radiologists demonstrated that our design leads physicians to perceive the ML-based CDSS as more explainable and more usable, in terms of the required cognitive effort, than a system without explanations.

DOI

10.17705/1jais.00820
