More than ten years ago, the issue of whether IS researchers were rigorously validating their quantitative, positivist instruments was raised (Straub 1989). In the years since, the profession has undergone many changes. Novel technologies and management trends have come and gone. New professional societies have been formed and grown in prominence, and new demands have been placed on the field’s research and teaching obligations. But the issue of rigor in IS research has persisted throughout all of these changes. Without solid validation of the instruments used to gather the data upon which findings and interpretations are based, the very scientific basis of positivist, quantitative research is threatened. As a retrospective on the Straub article, this research seeks to determine if and how the field has advanced in instrument validation. As evidence of this change, we coded positivist, quantitative research articles in five major journals over a recent three-year period for their use of validation techniques. Findings suggest that the field has advanced in many areas, but, overall, it appears that a majority of published studies are still not sufficiently validating their instruments. Based on these findings, approaches are suggested for reinvigorating the quest for validation in IS research via content/construct validity, reliability, and manipulation validity.