Location

Hilton Hawaiian Village, Honolulu, Hawaii

Event Website

https://hicss.hawaii.edu/

Start Date

January 3, 2024, 12:00 AM

End Date

January 6, 2024, 12:00 AM

Description

Artificial intelligence is transforming clinical decision-making by using patient data to improve diagnosis and treatment. However, the increasingly black-box nature of AI systems makes them difficult for users to comprehend. To ensure the safe and efficient utilisation of these systems, it is essential to establish appropriate levels of trust. Accordingly, this study aims to answer the following research question: What factors influence medical practitioners' trust calibration in their interactions with AI-based clinical decision support systems (CDSSs)? Applying an exploratory approach, we collect data through semi-structured interviews with medical and AI experts and examine it through qualitative content analysis. The results indicate that the perceived understandability, technical competence, and reliability of the system, along with other user- and context-related factors, impact physicians' trust calibration in AI-based CDSSs. As there is limited literature on this specific topic, our findings provide a foundation for future studies aiming to delve deeper into this field.


Title

Explainable AI in healthcare: Factors influencing medical practitioners' trust calibration in collaborative tasks


https://aisel.aisnet.org/hicss-57/hc/process/7