Communications of the Association for Information Systems

Abstract

Online survey applications offer various options for administering items, such as approaches that completely or partially randomize the order in which items are presented to each subject. Vendors claim that individual randomization eliminates key sources of method bias that can impact reproducibility. However, little empirical evidence directly supports this claim, and it is difficult to evaluate from existing research because item-ordering methodologies are underreported and, where they are reported, the descriptions frequently lack clarity. In this paper, we investigate the effect that item ordering has on reproducibility in IS online survey research by comprehensively comparing five prominent item-ordering approaches: 1) individually randomized, 2) static grouped by construct, 3) static intermixed, 4) individually randomized grouped-by-construct blocks containing static items, and 5) static grouped-by-construct blocks containing individually randomized items. We found significant, overarching differences among these approaches that can threaten the reproducibility of research findings. These differences appear across the measures we studied, which included item and construct means, reliability and construct validity statistics, serial effects, and the fatigue and frustration that subjects experienced during the survey-taking process. Our findings support a call for several key changes, particular to IS online survey research, in how researchers report and use item-ordering approaches.
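
The distinctions among the five approaches, especially between approaches 4 and 5, can be hard to parse from prose alone. The following is a minimal illustrative sketch, not the authors' instrumentation; the construct names, item labels, and function names are hypothetical, and it simply shows how each ordering might be generated for one respondent.

import random

# Hypothetical item pool: three constructs, each measured by three items.
CONSTRUCTS = {
    "Perceived Usefulness": ["PU1", "PU2", "PU3"],
    "Perceived Ease of Use": ["PEOU1", "PEOU2", "PEOU3"],
    "Intention to Use": ["ITU1", "ITU2", "ITU3"],
}

def individually_randomized():
    # 1) All items pooled and fully shuffled for each respondent.
    items = [i for block in CONSTRUCTS.values() for i in block]
    random.shuffle(items)
    return items

def static_grouped_by_construct():
    # 2) Fixed order: constructs in a set sequence, items in a set sequence within each.
    return [i for block in CONSTRUCTS.values() for i in block]

def static_intermixed():
    # 3) Fixed order, but items from different constructs are interleaved
    #    (here: one item from each construct in turn).
    blocks = list(CONSTRUCTS.values())
    return [block[k] for k in range(3) for block in blocks]

def randomized_blocks_static_items():
    # 4) Construct blocks shuffled per respondent; item order inside each block stays fixed.
    blocks = list(CONSTRUCTS.values())
    random.shuffle(blocks)
    return [i for block in blocks for i in block]

def static_blocks_randomized_items():
    # 5) Construct blocks in a fixed order; items shuffled within each block per respondent.
    ordering = []
    for block in CONSTRUCTS.values():
        shuffled = block[:]
        random.shuffle(shuffled)
        ordering.extend(shuffled)
    return ordering

if __name__ == "__main__":
    print("1)", individually_randomized())
    print("2)", static_grouped_by_construct())
    print("3)", static_intermixed())
    print("4)", randomized_blocks_static_items())
    print("5)", static_blocks_randomized_items())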

DOI

10.17705/1CAIS.04940
