Paper Type
ERF
Abstract
This study focuses on large language model-based healthcare conversational agents (LLMs-based HCAs). Using a three-stage mixed-methods design and integrating Moral Foundations Theory with the Coping Model of User Adaptation, it explores users' perceptions of moral risks, their adaptation mechanisms, and the relative importance of risk factors. The research aims to fill gaps in existing studies, provide a theoretical and empirical basis for designing more ethically adaptable LLMs-based HCAs, and promote the safe and effective application of this technology in the healthcare field.
Paper Number
1541
Recommended Citation
Yang, Yi, "LLMs-based Healthcare Conversational Agents: How do Users Understand and Adapt to the Moral Risks?" (2025). AMCIS 2025 Proceedings. 2.
https://aisel.aisnet.org/amcis2025/sig_culture/sig_culture/2
LLMs-based Healthcare Conversational Agents: How do Users Understand and Adapt to the Moral Risks?
Comments
SIGCULTURE