AIS Transactions on Human-Computer Interaction

Abstract

Human-computer interaction (HCI) researchers have long used survey methodologies. However, debate remains about whether participants provide biased responses to later items based on items they have previously viewed. In this research, we investigate effects of survey item ordering that researchers have not previously studied. Building on earlier exploratory item-ordering studies that used an HCI online survey, we examine this bias in more detail. In addition, we use an adult sample population so that our results generalize more broadly than previous research. We employed two distinct survey-randomization approaches: 1) complete item randomization (random), which presents items to each respondent in a completely randomized order; and 2) partially individualized item randomization (grouped), which presents constructs in the same order for every respondent but randomizes the items within each construct for each respondent. Our results suggest that researchers should use fully randomized survey instruments in HCI research whenever possible because grouped ordering of any kind increases bias and statistical inflation, which can affect the veracity of results. Additionally, the random treatment did not appear to significantly increase participants' frustration or fatigue.
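As a rough illustration of the two ordering schemes described above (not the authors' actual survey software), the following Python sketch contrasts complete item randomization with grouped, within-construct randomization; the construct names and item labels are hypothetical.

```python
import random

def fully_randomized(construct_items):
    """Complete item randomization (random treatment): pool all items
    across constructs and shuffle them for each respondent."""
    items = [item for items in construct_items.values() for item in items]
    random.shuffle(items)
    return items

def grouped_randomized(construct_items, construct_order):
    """Partially individualized randomization (grouped treatment): keep
    constructs in a fixed order, but shuffle the items within each
    construct for each respondent."""
    ordered = []
    for construct in construct_order:
        items = list(construct_items[construct])
        random.shuffle(items)
        ordered.extend(items)
    return ordered

# Hypothetical item bank for illustration only.
survey = {
    "perceived_usefulness": ["PU1", "PU2", "PU3"],
    "perceived_ease_of_use": ["PEOU1", "PEOU2", "PEOU3"],
}

print(fully_randomized(survey))
print(grouped_randomized(survey, ["perceived_usefulness", "perceived_ease_of_use"]))
```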

DOI

10.17705/1thci.00128

