Paper Type
ERF
Abstract
As generative artificial intelligence (GAI) becomes increasingly prevalent, concerns regarding information disclosure have garnered considerable attention. In this research-in-progress paper, we expand on existing theories of privacy disclosure by introducing eye tracking measures and an experiment that will investigate the effectiveness of an informational privacy nudging technique. We hypothesize that privacy nudges will moderate the relationship between privacy calculus and varieties of privacy fatigue. We introduce an expanded conceptual model and describe an eye tracking experiment that can validate it. This research will also offer insights into whether past models of privacy disclosure generalize to the new context of effective and safer GAI design.
Paper Number
1740
Recommended Citation
Fereidoonian, Shirin and Conrad, Colin, "How Privacy Calculus Drives Fatigue-Induced Disclosure in Generative AI" (2025). AMCIS 2025 Proceedings. 42.
https://aisel.aisnet.org/amcis2025/sigadit/sigadit/42
How Privacy Calculus Drives Fatigue-Induced Disclosure in Generative AI
Comments
SIGADIT