Journal of the Association for Information Systems

Abstract

This paper reports an empirical study that provides detailed comparisons among the available measures of computer self-efficacy (CSE). Our purpose is to ascertain their relative abilities to isolate the CSE construct from related constructs and to capture variance in performance attributable to changes in CSE level. In addition, we investigate the importance of ensuring that the measure being used is sufficiently aligned with the task domain of interest. Finally, we explore the stability of CSE measures as the computing domain continues to evolve. Marakas, Yi, and Johnson (1998) proposed a framework for constructing instruments to measure the CSE construct, which we have adopted as the basis for this series of investigations. To that end, we advance and test a set of hypotheses derived from the Marakas et al. (1998) framework. The results of the analyses support the need for adherence to the tenets of the proposed framework and provide evidence that CSE measures suffer degradation of their explanatory power over time. Further, this study highlights the importance of validating measures of CSE using approaches intended for a formative rather than a reflective construct. These results suggest that the common practices of instrument validation and the reuse of long-standing instruments to measure CSE may not be the most effective approach to studying the construct. Implications for future research are discussed.

DOI

10.17705/1jais.00112
