With the increasing use of algorithms for decision making in different spheres of life, the need for “fairness” in these machine learning (ML) algorithms has become a progressively important concern. However, any remedy for bias must begin with awareness that the bias exists, as this awareness guides the path toward a solution. This work is therefore an empirical study that aims to uncover data bias and the subsequent algorithmic bias in one such ML application. Specifically, we consider recommender systems (RS), one of the most successful applications of ML technology in practice today, and answer three main questions: (1) Does bias exist in RS input data? (2) Does the use of RS promote diversity in the recommendation list or reinforce existing data bias? and (3) Does the application of RS over the years improve recommendation diversity and reduce bias? We use the Cell Phones and Accessories purchase dataset from Amazon for our empirical study. We employ the Long Tail phenomenon and quantify the shape of the purchase distribution by calculating the Gini coefficient from the Lorenz curve to determine the presence of bias. Our experiments show the presence of bias in both the data and the RS algorithm, year over year.
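The Gini-from-Lorenz measurement mentioned above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the closed-form expression below is the standard Gini coefficient derived from the area under the Lorenz curve of sorted counts, and the purchase-count lists are hypothetical examples of a long-tail versus a uniform distribution.

```python
def gini(counts):
    """Gini coefficient of non-negative per-item purchase counts.

    0 means perfect equality (every item purchased equally often);
    values approaching 1 mean purchases concentrate on a few items,
    i.e., a pronounced Long Tail / head-heavy distribution.
    """
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Closed form from the Lorenz curve for sorted values x_1 <= ... <= x_n:
    #   G = (2 * sum_i i * x_i) / (n * sum_i x_i) - (n + 1) / n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical purchase counts: a long-tail shape (a few popular items
# dominate) versus a uniform shape over the same number of items.
head_heavy = [100, 50, 5, 3, 2, 1, 1, 1]
uniform = [20] * 8
print(gini(head_heavy) > gini(uniform))  # higher Gini indicates more bias
```

In this framing, a year-over-year increase in the Gini coefficient of recommended or purchased items would indicate that concentration on popular items is being reinforced rather than reduced.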