Paper Type

Complete

Paper Number

1718

Description

Recent advances in machine learning (ML) algorithms have motivated their use in automated Decision Support Systems (DSS). In the healthcare domain, ML-based DSS enable providers to analyze large amounts of patient data and complex images quickly. However, providers find it difficult to interpret ML predictions because of their ‘black box’ reasoning. To facilitate meaningful interpretation, ML-based DSS should include explanation facilities, as recommended in information systems (IS) research. For example, a wound care DSS should allow providers to understand the reasoning (e.g., the amount and presence of unhealthy tissue) behind referral decisions. We present an ML-based DSS that provides global (reliance on domain knowledge) and local (reasoning for predicting an instance) explanations for wound care decisions. We use Shapley explanations for a logistic regression model (trained on visual wound features) that outperformed other classifiers when predicting referral decisions (F1 = 0.938) and demonstrate its applicability in a wound care use scenario. The findings suggest that a similar approach can be applied to other complex decision problems.
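The pairing of Shapley explanations with a logistic regression classifier described above can be sketched minimally. For a linear model, the (interventional) Shapley value of feature i for an instance has the closed form w_i · (x_i − E[x_i]), which yields the local explanation; averaging the absolute values over the data gives a global importance ranking. The feature names and synthetic data below are illustrative assumptions, not the paper's dataset.

```python
# Minimal sketch of local and global Shapley explanations for a
# logistic regression classifier, assuming a linear-SHAP closed form.
# Feature names and labels are hypothetical, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# Illustrative visual wound features (fractions of tissue types)
X = rng.uniform(0, 1, size=(n, 3))
feature_names = ["necrotic_frac", "slough_frac", "granulation_frac"]
# Synthetic referral label: more unhealthy tissue -> refer
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0.3).astype(int)

clf = LogisticRegression().fit(X, y)
w = clf.coef_[0]
baseline = X.mean(axis=0)

def linear_shap(x):
    """Local Shapley values of one instance under a linear model."""
    return w * (x - baseline)

# Local explanation: per-feature contribution for one wound instance
phi = linear_shap(X[0])
# Global explanation: mean absolute Shapley value per feature
global_importance = np.abs(w * (X - baseline)).mean(axis=0)

# Additivity check: Shapley values sum to f(x) - E[f(x)] in log-odds space
logit = clf.decision_function(X[0:1])[0]
mean_logit = (X @ w).mean() + clf.intercept_[0]
assert np.isclose(phi.sum(), logit - mean_logit)
```

In practice the SHAP library's `LinearExplainer` computes the same quantities; the closed form is shown here to keep the sketch self-contained.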

Top 25 Percent Paper badge
Aug 9th, 12:00 AM

An Explainable Machine Learning Model for Chronic Wound Management Decisions

