Paper Type

ERF

Abstract

As generative artificial intelligence (GAI) becomes increasingly prevalent, concerns regarding information disclosure have garnered considerable attention. In this research-in-progress paper, we expand on existing theories of privacy disclosure by introducing eye-tracking measures and an experiment that will investigate the effectiveness of an informational privacy nudging technique. We hypothesize that privacy nudges will moderate the relationship between privacy calculus and varieties of privacy fatigue. We introduce an expanded conceptual model and describe an eye-tracking experiment that can validate it. This research will also offer insights into whether past models of privacy disclosure generalize to the new context of effective and safer GAI design.

Paper Number

1740

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1740

Comments

SIGADIT

Aug 15th, 12:00 AM

How Privacy Calculus Drives Fatigue-Induced Disclosure in Generative AI
