Paper Type: Complete
Paper Number: 1244
Description
Artificial intelligence (AI) supports various areas of healthcare, e.g., patient management and self-diagnostics. Especially in healthcare, where transparency and interpretability of AI applications are crucial, AI's inherent black-box character poses a significant adoption challenge. Explainable AI (XAI), i.e., AI supplemented or complemented with explanation techniques, therefore becomes imperative. Global and local explanations provide patients using AI-based healthcare applications with explanations of the general model or of individual predictions, respectively. Hybrid explanations combine both, potentially increasing understandability, i.e., application explainability and decision transparency, but may come with an increased perceived workload. Leveraging an n=350 between-subjects vignette study, we find evidence that XAI support improves understandability and increases perceived workload compared to conventional AI support. Comparing hybrid to non-hybrid explanations, we find evidence that participants perceive the former as improving understandability while increasing workload. Finally, we find support for technology affinity and AI literacy moderating the effects of hybrid explanations on understandability.
Recommended Citation
Weber, Patrick; Heigl, Rebecca Maria; and Baum, Lorenz, "Understanding (X)AI in Healthcare: Patients’ Perspective on Global and Local Explanations" (2024). PACIS 2024 Proceedings. 10.
https://aisel.aisnet.org/pacis2024/track11_healthit/track11_healthit/10
Understanding (X)AI in Healthcare: Patients’ Perspective on Global and Local Explanations
Comments: Healthcare