Explainable AI in healthcare: Factors influencing medical practitioners’ trust calibration in collaborative tasks
Location
Hilton Hawaiian Village, Honolulu, Hawaii
Event Website
https://hicss.hawaii.edu/
Start Date
January 3, 2024
End Date
January 6, 2024
Description
Artificial intelligence is transforming clinical decision-making by using patient data for improved diagnosis and treatment. However, the increasingly black-box nature of AI systems presents comprehension challenges for users. To ensure the safe and efficient utilisation of these systems, it is essential to establish appropriate levels of trust. Accordingly, this study aims to answer the following research question: What factors influence medical practitioners’ trust calibration in their interactions with AI-based clinical decision support systems (CDSSs)? Applying an exploratory approach, we collect data through semi-structured interviews with medical and AI experts and examine them through qualitative content analysis. The results indicate that the perceived understandability, technical competence, and reliability of the system, along with other user- and context-related factors, impact physicians’ trust calibration in AI-based CDSSs. As there is limited literature on this specific topic, our findings provide a foundation for future studies aiming to delve deeper into this field.
Recommended Citation
Darvish, Mahdieh; Holst, Jan-Hendrik; and Bick, Markus, "Explainable AI in healthcare: Factors influencing medical practitioners’ trust calibration in collaborative tasks" (2024). Hawaii International Conference on System Sciences 2024 (HICSS-57). 7.
https://aisel.aisnet.org/hicss-57/hc/process/7