Appropriate evaluation of information systems research papers keeps our institutions and review processes viable. In the short run, research value is typically assessed through research awards; in the longer term, it is assessed by how the research community perceives and draws on particular published papers. In this study, we examine the consistency between two metrics for assessing research value: research awards and citations. To do so, we focus on a premier journal, MIS Quarterly. We find that the “papers of the year” are rarely the ones cited most. We offer possible explanations for this discrepancy based on an assessment of papers’ originality and utility and their citation patterns.