Abstract
Autonomous AI agents now execute tasks on behalf of users, often requiring access to personal information and bringing privacy concerns to the forefront. Grounded in privacy calculus theory, this exploratory study examines when and why people are willing to disclose private information to such agents. We propose a two-phase mixed-methods design. The first phase employs think-aloud interactions and semi-structured interviews to explore user vocabularies, boundary rules, and perceptions of benefits, risks, and control. The second phase combines an initial survey with a real-world agent-mediated task and a brief post-survey to quantify users’ willingness to disclose in low- and high-sensitivity domains. Together, these phases yield a taxonomy to guide measurement, an instrument calibrated to user language, and a behavioral task linking stated intentions to observed choices, providing an empirical foundation for agent designs and strategies that enhance trust without compromising utility.
Recommended Citation
Li, Haisen and Menard, Philip, "Balancing Value and Control: Understanding Privacy Disclosure to Autonomous AI Agents through an Integrated Theoretical Lens" (2025). WISP 2025 Proceedings. 29.
https://aisel.aisnet.org/wisp2025/29