Paper Type
Complete
Abstract
As customer service chatbots become more prevalent, privacy concerns increasingly influence user trust and disclosure behavior. Guided by Communication Privacy Management (CPM) theory, this study explores how users regulate privacy in chatbot interactions. Thematic analysis of semi-structured interviews reveals five themes: privacy boundaries, collective boundaries, permeability rules, ownership and control, and linkage rules and privacy turbulence. Users manage privacy based on brand reputation, past experiences, data sensitivity, and awareness of data access. Design features like anonymity, data deletion, and customizable settings foster control, while unclear data practices and third-party sharing trigger disengagement. This study extends CPM theory to AI-mediated contexts by showing how system-level cues shape privacy management. It also introduces perceived system control and system-assumed co-ownership as key considerations in chatbot-based privacy regulation. The study offers design recommendations for developing privacy-sensitive chatbot systems and contributes to the growing body of research on privacy in AI-driven customer service.
Paper Number
1593
Recommended Citation
Javadi, Shirin; Kordzadeh, Nima; and Garcia, Rosanna, "Negotiating Privacy in Chatbots: Managing Boundaries and User Control Through a Communication Privacy Management Perspective" (2025). AMCIS 2025 Proceedings. 22.
https://aisel.aisnet.org/amcis2025/sigadit/sigadit/22
Negotiating Privacy in Chatbots: Managing Boundaries and User Control Through a Communication Privacy Management Perspective
Comments
SIGADIT