Paper Type
Short
Paper Number
PACIS2025-1563
Description
This study investigates how the memorization capability of LLM-based Healthcare Conversational Agents (HCAs) influences users' willingness to disclose private information. Drawing on privacy calculus theory, we propose that users weigh perceived risks against perceived benefits. A quasi-experiment (N = 84) compared HCAs with and without memorization capability during the experimental task. Results show that memorization capability significantly reduced users' willingness to disclose private data. Mediation analysis revealed that this effect was driven primarily by heightened perceived risks rather than by diminished perceived benefits, underscoring the dominance of risk perception over benefit considerations in privacy decisions about LLM-based HCAs. The findings highlight the need for transparency mechanisms (e.g., memory visualization tools) to address the privacy risks amplified by LLM memorization in healthcare interactions.
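For readers unfamiliar with the analysis, the sketch below illustrates a regression-based mediation test of the general kind the abstract describes (memorization condition → perceived risk → disclosure willingness). It is a minimal, hypothetical illustration on simulated data: the variable names, effect sizes, and approach are assumptions for exposition, not the paper's materials, procedure, or results.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical sketch of a regression-based mediation test; the simulated
# data and coefficients below are illustrative only, not the study's data.
rng = np.random.default_rng(0)
n = 84
memorization = rng.integers(0, 2, n)                      # 0 = no memory, 1 = memory
perceived_risk = 0.8 * memorization + rng.normal(size=n)  # assumed mediator
disclosure = -0.6 * perceived_risk + rng.normal(size=n)   # assumed outcome

# Path a: condition -> mediator (perceived risk)
a_model = sm.OLS(perceived_risk, sm.add_constant(memorization)).fit()

# Paths b and c': condition and mediator -> outcome (disclosure willingness)
X = sm.add_constant(np.column_stack([memorization, perceived_risk]))
b_model = sm.OLS(disclosure, X).fit()

a = a_model.params[1]   # condition -> risk
b = b_model.params[2]   # risk -> disclosure, controlling for condition
print(f"indirect effect via perceived risk: {a * b:.3f}")
print(f"direct effect of condition (c'):    {b_model.params[1]:.3f}")
```

A negative indirect effect (a × b) with a small residual direct effect is the pattern consistent with risk perception carrying the influence of memorization on disclosure; in practice the indirect effect would be tested with bootstrapped confidence intervals.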
Recommended Citation
Yang, Yi, "The Impact of Memorization Capability on User’s Privacy Disclosure Behavior in LLM-based Healthcare Conversational Agents" (2025). PACIS 2025 Proceedings. 4.
https://aisel.aisnet.org/pacis2025/security/security/4
The Impact of Memorization Capability on User’s Privacy Disclosure Behavior in LLM-based Healthcare Conversational Agents
Comments
Security