Communications of the Association for Information Systems

Abstract

Empirical research in information systems relies heavily on developing and validating survey instruments. However, researchers’ efforts to evaluate the content validity of survey scales are often inconsistent, incomplete, or unreported. This paper defines and describes the most significant facets of content validity and illustrates the mechanisms through which multi-item psychometric scales capture a latent construct’s content. We discuss competing approaches and propose new methods, assembling a comprehensive set of metrics and procedures for evaluating content validity. The resulting recommendations for researchers emphasize an iterative pre-study process (wash, rinse, and repeat until clean) to objectively establish “fit for purpose” when developing and adapting survey scales. A sample pre-study demonstrates suitable methods for building confidence that scales reliably capture the theoretical essence of latent constructs. We demonstrate the efficacy of these methods using a randomized field experiment.

DOI

10.17705/1CAIS.04736

