Abstract

Recent data breaches at online content and service providers (CSPs) such as Facebook and Uber illustrate the privacy risks associated with the disclosure of personal data. Yet, asymmetric information between users and CSPs makes it difficult for users to assess their privacy risks. Thus, to reduce uncertainty and assist users with increasingly complex privacy trade-offs, regulators and consumer protection agencies advise CSPs to be more transparent about their data collection, storage, and use. In this context, Information Systems research has largely focused on the effectiveness of transparency measures in specific application scenarios (e.g., recommender systems, targeted advertising) by exogenously assigning subjects to scenarios with or without transparency. However, it remains unclear whether users would actively choose a more transparent over a less transparent CSP, as they may prefer ambiguity regarding privacy risks and engage in information avoidance. To advance research in this area, this paper presents an experimental design for studying subjects’ preferences for transparency in a controlled laboratory environment. Drawing on the field of decision analysis and established theories of uncertainty and ambiguity attitudes, the present study contributes to a better understanding of human privacy decision-making.
