Paper Number

1112

Paper Type

Completed

Description

The deployment of machine learning (ML)-based decision support systems (DSSs) in high-risk environments such as radiology is increasing. Although these systems achieve high decision accuracy, they remain prone to errors and are therefore primarily used to assist radiologists in their decision making. However, collaborative decision making poses risks to the decision maker, e.g., automation bias and long-term performance degradation. To address these issues, we propose combining findings from the research streams of explainable artificial intelligence and education to promote human learning through interaction with ML-based DSSs. In a between-subjects experimental study, we provided radiologists with explainable vs. non-explainable decision support that was high- vs. low-performing to support the manual segmentation of 690 brain tumor scans. Our results show that explainable ML-based DSSs improved human learning outcomes and prevented false learning triggered by incorrect decision support. In fact, radiologists were able to learn from errors made by the low-performing explainable ML-based DSS.

Comments

03-Learning

Best Paper Nominee
Dec 11th, 12:00 AM

Promoting Learning Through Explainable Artificial Intelligence: An Experimental Study in Radiology

