AIS Transactions on Human-Computer Interaction
Abstract
Designing AI chatbots with human-like features is a key way to promote user engagement, including self-disclosure. Prior research has shown that anthropomorphism can foster self-disclosure intentions by systematically enhancing trust and reducing privacy concerns, a mental process captured by the privacy calculus lens. Building on this prior work, we put forth a contextual privacy calculus approach to actual disclosure behavior. We identify two salient context factors in human-chatbot interactions, psychological social distance and information sensitivity, and theorize their distinct roles in shaping the privacy calculus. In an online experiment with 222 participants, we manipulated chatbot design to induce anthropomorphism and observed participants’ actual disclosure behavior. An ANOVA test together with Hayes’s PROCESS macro analysis showed that: 1) anthropomorphism can reduce psychological social distance but may trigger the “uncanny valley” effect, 2) privacy concerns can reduce actual disclosure, although this tendency weakens under high-sensitivity conditions, and 3) trust in AI chatbots does not necessarily lead to actual disclosure. These findings highlight the need for careful anthropomorphic design that avoids its downsides. We also show that actual sharing behavior follows different mechanisms than sharing intentions. We encourage future research to explore the interplay between anthropomorphic design, context factors, and actual behavior in human-chatbot interactions.
DOI
10.17705/1thci.00237
Recommended Citation
Zhang, M., & Zhu, H. (2026). Actual self-disclosure to anthropomorphic AI chatbots: A contextual privacy calculus approach. AIS Transactions on Human-Computer Interaction, 18(1), 31-60. https://doi.org/10.17705/1thci.00237