Paper Type
Short
Paper Number
1582
Description
AI has become a popular research topic and is developing rapidly in the medical field. As AI grows increasingly powerful and multifunctional, it has given rise to the 'black box' problem, creating a trust gap between AI and its users. This gap could hinder the adoption of AI methods and slow the development of AI in the medical sector. Explainable AI (XAI) offers a potential solution by explaining models' predictive results, which may increase users' trust in those models. This study integrates the XAI algorithms LIME and SHAP into an AI-based fall prevention system. We designed an experiment to observe whether integrating XAI into the fall prevention system affects three metrics (user trust, satisfaction with explanations, and comprehension of explanations) and to assess how these effects might differ across models with varying accuracy levels.
Recommended Citation
Hou, Yun Ti and Chang, Hsin-Lu, "Building XAI for Fall Prevention System: An Evaluation of the Potential of LIME and SHAP in Bridging Trust Gap" (2024). PACIS 2024 Proceedings. 9.
https://aisel.aisnet.org/pacis2024/track11_healthit/track11_healthit/9
Building XAI for Fall Prevention System: An Evaluation of the Potential of LIME and SHAP in Bridging Trust Gap
Comments
Healthcare