Data Science and Analytics for Decision Support (SIG DSA)
Paper Type
Complete
Paper Number
1718
Description
Recent advances in machine learning (ML) algorithms have motivated their use for automated Decision Support Systems (DSS). In the healthcare domain, ML-based DSS enable providers to analyze large amounts of patient data and complex images quickly. However, providers find it difficult to interpret ML predictions because of their 'black box' reasoning. To facilitate meaningful interpretations, ML-based DSS should include explanation facilities, as recommended in information systems (IS) research. For example, a wound care DSS should allow providers to understand the reasoning (e.g., the amount and presence of unhealthy tissue) behind referral decisions. We present an ML-based DSS that provides global (reliance on domain knowledge) and local (reasoning for predicting an instance) explanations for wound care decisions. We use Shapley explanations for a logistic regression classifier (trained on wound visual features), which outperformed other classifiers when predicting referral decisions (F-1 = 0.938), and demonstrate its applicability in a wound care use scenario. Findings suggest a similar approach can be applied to other complex decision problems.
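The abstract describes Shapley explanations layered on a logistic regression classifier trained on wound visual features, yielding both local (per-instance) and global explanations. The sketch below illustrates that general approach, assuming the scikit-learn and shap libraries; the feature names and synthetic data are hypothetical stand-ins, not the authors' actual pipeline or dataset.

```python
# Minimal sketch: Shapley explanations for a logistic regression classifier.
# Assumes scikit-learn and the shap package; feature names and data are
# hypothetical stand-ins for the wound visual features used in the paper.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
feature_names = ["necrotic_fraction", "slough_fraction",
                 "granulation_fraction", "wound_area_cm2"]

# Synthetic stand-in data: larger unhealthy-tissue fractions push toward referral.
X = rng.random((500, len(feature_names)))
y = (1.5 * X[:, 0] + 1.0 * X[:, 1] - 0.8 * X[:, 2]
     + 0.2 * rng.standard_normal(500) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print("F1 on held-out data:", f1_score(y_test, clf.predict(X_test)))

# Local explanation: per-instance Shapley values show why one wound was referred.
explainer = shap.LinearExplainer(clf, X_train)
shap_values = explainer.shap_values(X_test)
print("Local explanation for first test wound:",
      dict(zip(feature_names, np.round(shap_values[0], 3))))

# Global explanation: mean |SHAP| per feature summarizes overall reliance
# on each piece of domain knowledge across the test set.
global_importance = np.abs(shap_values).mean(axis=0)
for name, importance in sorted(zip(feature_names, global_importance),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```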
Recommended Citation
Mombini, Haadi; Tulu, Bengisu; Strong, Diane; Agu, Emmanuel O.; Lindsay, Clifford; Loretz, Lorraine; Pedersen, Peder C.; and Dunn, Raymond, "An Explainable Machine Learning Model for Chronic Wound Management Decisions" (2021). AMCIS 2021 Proceedings. 18.
https://aisel.aisnet.org/amcis2021/data_science_decision_support/data_science_decision_support/18
An Explainable Machine Learning Model for Chronic Wound Management Decisions