Abstract

Recommendation systems automate most of our decision process to facilitate a final decision: they learn from our past behavior, filter our choices, and present a subset of alternatives to us. Consequently, organizations have paid much attention to refining the accuracy of recommendations to match users’ needs. However, increasing evidence and calls for research warn against focusing unilaterally on the system without considering users’ trade-offs. Simply choosing from a curated set of options may deprive users of a thorough understanding of their own preferences, or even deny them the unexpected discoveries that result from their own decision efforts. We aim to learn how users’ understanding of how the recommendation system generates its recommendations (personalization transparency) and their decision-making orientation (regulatory focus) affect their choice of unfamiliar recommendations. We propose two studies to fill these gaps. First, we will explore additional factors affecting users’ perceptions of the recommendation process by interviewing and observing people using Netflix. Second, using a confirmatory controlled experiment, we will validate the resulting model, which currently hypothesizes that the interaction of these constructs enhances users’ adherence to recommendations. The spirit of this research is our strong expectation that recommendation systems will enjoy stronger acceptance if they are designed to reciprocate the faith users place in them by compensating users for this loss of decision-making. More generally, we hope to contribute to an initial understanding of why we are willing to delegate daily decision-making tasks to intelligent services and to allow them to take greater control of our decisions.
