Paper Type

Complete

Paper Number

1244

Description

Artificial intelligence (AI) supports various healthcare areas, e.g., patient management and self-diagnostics. Especially in healthcare, where transparency and interpretability of AI applications are crucial, AI’s inherent black-box nature poses a significant adoption challenge. Explainable AI (XAI), i.e., AI supplemented or complemented with explanation techniques, therefore becomes imperative. Global and local explanations inform patients using AI in healthcare applications about the general model or about individual predictions, respectively. Hybrid explanations combine both, potentially increasing understandability, i.e., application explainability and decision transparency, but may come with an increased perceived workload. Leveraging an n=350 between-subjects vignette study, we find evidence that XAI support improves understandability but increases perceived workload compared to conventional AI support. Comparing hybrid to non-hybrid explanations, we find evidence that participants perceive the former as improving understandability while increasing workload. Finally, we find support for technology affinity and AI literacy moderating the effects of hybrid explanations on understandability.
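
To make the global/local distinction concrete, the sketch below illustrates it with the SHAP library; the model, dataset, and plots are our illustrative assumptions, not the study's actual healthcare application or explanation design.

```python
# Minimal sketch of global vs. local explanations with SHAP (illustrative only).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Explain the predicted probability of the positive class.
f = lambda data: model.predict_proba(data)[:, 1]
explainer = shap.Explainer(f, X)          # model-agnostic explainer
shap_values = explainer(X.iloc[:50])      # attributions for 50 cases

# Global explanation: feature influence across many predictions.
shap.plots.beeswarm(shap_values)

# Local explanation: feature contributions to one individual prediction.
shap.plots.waterfall(shap_values[0])

# A hybrid explanation, as studied in the paper, would present both views together.
```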

Comments

Healthcare

Jul 2nd, 12:00 AM

Understanding (X)AI in Healthcare: Patients’ Perspective on Global and Local Explanations
