Paper Type

ERF

Abstract

As AI-driven recommender systems become more persuasive, users are increasingly confronted with a trade-off between personalization and privacy. This study investigates how different explanation types in recommender systems influence users’ privacy concerns and trust, using both self-reported data and eye-tracking metrics. Drawing on the privacy calculus framework and trust theory, we employ a 2×5 factorial experiment involving 120 university students who interact with a course recommender system. The study introduces eye-tracking as a novel lens to assess cognitive responses to privacy-sensitive content within explanations. Our findings are expected to reveal how visual attention to personalized details correlates with privacy concerns and trust, thereby informing the design of transparent yet privacy-aware recommender systems. This research contributes to theory by extending the privacy calculus with cognitive measures and provides practical guidelines for explanation design in AI systems.

Paper Number

1333

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1333

Comments

SIGHCI

Aug 15th, 12:00 AM

When Recommendations Know You Too Well: Explanation Types, Privacy Concerns, and Eye-tracking Evidence in Personalized Systems
