Abstract

This study examines the consequences of AI proactivity for user trust and resistance to AI adoption, focusing on the moderating role of data privacy concerns. Drawing on Reactance Theory, we propose that although AI proactivity is designed to enhance efficiency and convenience, it can inadvertently erode user trust through perceptions of intrusiveness and threats to autonomy. Trust, in turn, mediates the relationship between AI proactivity and resistance: diminished trust leads to greater skepticism and reluctance to adopt AI systems. We further hypothesize that data privacy concerns strengthen the negative relationship between AI proactivity and trust, such that the adverse effects are more pronounced under higher perceived privacy risk. To test these relationships, we will use an experimental design. The study contributes to the literature on human-AI interaction by highlighting the importance of trust in mitigating resistance to AI adoption and offers practical insights for designing AI systems that balance proactivity with user autonomy and privacy expectations. It also connects to big data discussions by exploring how data usage influences user perceptions and behavior.
