Paper Type
Short
Paper Number
PACIS2025-1811
Description
This study employs the Stimulus-Organism-Response (S-O-R) framework to investigate how risk prompts in medical LLMs influence patients’ offline medical willingness via perceived risk, moderated by disease severity. We propose that risk prompts placed at the beginning of interactions will more readily enhance patients’ perceived risk, thereby increasing their willingness to seek offline medical care. Both identity prompts and prescription prompts can elevate perceived risk to varying degrees, thereby boosting offline medical willingness. Additionally, disease severity moderates the relationship between different risk prompts and patients’ perceived risk. Theoretically, this study extends S-O-R applications to AI-driven healthcare by elucidating risk communication mechanisms. Practically, it offers actionable strategies for designing LLM interfaces that balance efficiency and safety, addressing behavioral and operational challenges in AI-integrated healthcare systems. This work advances human-AI interaction research, informing policies to optimize resource allocation and patient outcomes in critical medical contexts.
Recommended Citation
Tang, Xiaofan; Yin, Jinmei; and Liu, Hua, "The role of risk prompts of the medical large language model on patients’ offline medical willingness: Based on Stimulus-Organism-Response theory" (2025). PACIS 2025 Proceedings. 21.
https://aisel.aisnet.org/pacis2025/ishealthcare/ishealthcare/21
The role of risk prompts of the medical large language model on patients’ offline medical willingness: Based on Stimulus-Organism-Response theory
Comments
Healthcare