Journal of the Association for Information Systems

Abstract

In a recent issue of the Journal of the Association for Information Systems, Marakas, Johnson, and Clay (2007) presented an interesting and important discussion of formative versus reflective measurement, specifically related to the measurement of the computer self-efficacy (CSE) construct. However, we believe their recommendation to measure CSE constructs using formative indicators merits additional dialogue before being adopted by researchers. In the current study, we discuss why the substantive theory underlying the CSE construct suggests that it is best measured using reflective indicators. We then provide empirical evidence demonstrating how the misspecification of existing CSE measures as formative can result in unstable estimates across varying endogenous variables and research contexts. Specifically, we demonstrate how formative indicator weights depend on the endogenous variable used to estimate them. Given that the strength of formative indicator weights is one metric used to determine indicator retention, and that adding or dropping formative indicators can change the conceptual meaning of a construct, the use of formative measurement can lead to the retention of different indicators and, ultimately, the measurement of different concepts across studies. As a result, comparing findings across studies over time becomes conceptually problematic and compromises our ability to replicate and extend research in a particular domain. We discuss not only the consequences of using formative versus reflective measures in CSE research but also the broader implications this choice has for research in other domains.
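The core empirical point, that formative indicator weights shift with the endogenous variable used to estimate them, can be illustrated with a minimal simulation. The sketch below is not drawn from the paper's data or analysis; the indicator and outcome names (x, y_perf, y_intent) are hypothetical, and ordinary least squares stands in for how formative (Mode B) weights are estimated relative to a chosen endogenous variable.

```python
# Illustrative sketch (not the authors' analysis): the same set of indicators
# receives different "formative" weights depending on which endogenous
# variable the weights are estimated against.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Three hypothetical CSE-style indicators with moderate intercorrelation
latent = rng.normal(size=n)
x = np.column_stack([latent + rng.normal(scale=1.0, size=n) for _ in range(3)])

# Two different hypothetical endogenous variables, each drawing on the
# indicators to a different degree
y_perf   = 0.7 * x[:, 0] + 0.1 * x[:, 2] + rng.normal(scale=1.0, size=n)
y_intent = 0.1 * x[:, 0] + 0.7 * x[:, 2] + rng.normal(scale=1.0, size=n)

def formative_weights(X, y):
    """OLS weights of y on the indicators -- a stand-in for estimating
    formative (Mode B) weights relative to a chosen endogenous variable."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    return beta[1:]

print("weights vs. performance:", np.round(formative_weights(x, y_perf), 2))
print("weights vs. intention:  ", np.round(formative_weights(x, y_intent), 2))
# The weights (and therefore any weight-based indicator-retention decisions)
# differ across the two endogenous variables, mirroring the instability
# the abstract describes.
```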

DOI

10.17705/1jais.00170

