Paper Type
ERF
Abstract
Balancing human-likeness in healthcare AI agents is critical to ensuring trust, engagement, and safe adoption. Guided by social response theory and uncanny valley research, this study explores how varying degrees of anthropomorphic design influence user trust and comfort. We conducted a literature review on generative AI systems to identify key anthropomorphic features. Seven domain experts independently evaluated these features and clustered the selected ones into three progressive bundles, ranging from zero to medium risk of triggering uncanny valley effects, which informed three prototype designs for an AI patient-intake agent.
The prototypes will be assessed in a vignette study using an adapted Primary Care Assessment Survey to measure trust and patient-centeredness. Early findings show that some technically advanced features (e.g., adaptive context responses) are less likely to trigger uncanny valley effects, whereas others (e.g., human-like avatars, humour) heighten discomfort. Results provide design guidance for healthcare AI agents that optimize trust while avoiding over-humanization.
Recommended Citation
Nabavian, Sanaz; Grabis, Janis; and Dohan, Michael, "Balancing Anthropomorphic Design in Healthcare AI Agents" (2025). AMCIS 2025 Proceedings. 1.
https://aisel.aisnet.org/amcis2025/paperathon/paperathon/1
Balancing Anthropomorphic Design in Healthcare AI Agents