Paper Type

ERF

Abstract

This study focuses on healthcare conversational agents based on large language models (LLMs-based HCAs). Using a three-stage mixed-methods design that integrates Moral Foundations Theory and the Coping Model of User Adaptation, it explores users' perceptions of moral risks, their adaptation mechanisms, and the importance ranking of risk factors. The research aims to fill gaps in existing studies, provide a theoretical and empirical basis for designing more ethically adaptable LLMs-based HCAs, and promote the safe and effective application of this technology in healthcare.

Paper Number

1541

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1541

Comments

SIGCULTURE

Aug 15th, 12:00 AM

LLMs-based Healthcare Conversational Agents: How do Users Understand and Adapt to the Moral Risks?
