Paper Type
ERF (Emerging Research Forum)
Abstract
As AI-driven recommender systems become more persuasive, users are increasingly confronted with a trade-off between personalization and privacy. This study investigates how different explanation types in recommender systems influence users’ privacy concerns and trust, using both self-reported data and eye-tracking metrics. Drawing on the privacy calculus framework and trust theory, we employ a 2×5 factorial experiment involving 120 university students who interact with a course recommender system. The study introduces eye-tracking as a novel lens to assess cognitive responses to privacy-sensitive content within explanations. Our findings are expected to reveal how visual attention to personalized details correlates with privacy concerns and trust, thereby informing the design of transparent yet privacy-aware recommender systems. This research contributes to theory by extending the privacy calculus with cognitive measures and provides practical guidelines for explanation design in AI systems.
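To make the stated design concrete, the sketch below shows one way data from a 2×5 between-subjects experiment of this kind might be analyzed: a two-way ANOVA across the two factors, plus a correlation between dwell time on privacy-sensitive explanation content and self-reported privacy concern. This is an illustrative assumption, not the authors' actual pipeline; the factor levels, column names, and simulated values are all hypothetical.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Simulated stand-in data for 120 participants; the factor names and
# measures below are illustrative, not the paper's instrument.
n = 120
df = pd.DataFrame({
    "sensitivity": rng.choice(["low", "high"], size=n),        # assumed 2-level factor
    "explanation": rng.choice(list("ABCDE"), size=n),          # assumed 5-level factor
    "fixation_ms": rng.gamma(4.0, 150.0, size=n),              # dwell time on explanation AOI
    "privacy_concern": rng.uniform(1, 7, size=n),              # 7-point Likert composite
    "trust": rng.uniform(1, 7, size=n),
})

# Two-way ANOVA for the 2x5 between-subjects design.
model = smf.ols("privacy_concern ~ C(sensitivity) * C(explanation)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Does visual attention to privacy-sensitive content track self-reported concern?
r, p = pearsonr(df["fixation_ms"], df["privacy_concern"])
print(f"fixation vs. privacy concern: r = {r:.2f}, p = {p:.3f}")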
Paper Number
1333
Recommended Citation
Kim, Zisu and Bauman, Konstantin, "When Recommendations Know You Too Well: Explanation Types, Privacy Concerns, and Eye-tracking Evidence in Personalized Systems" (2025). AMCIS 2025 Proceedings. 17.
https://aisel.aisnet.org/amcis2025/sig_hci/sig_hci/17
When Recommendations Know You Too Well: Explanation Types, Privacy Concerns, and Eye-tracking Evidence in Personalized Systems
Comments
SIGHCI