Computer-Adaptive Surveys (CAS) are multi-dimensional instruments in which the questions asked of respondents depend on the previous questions asked. Due to the complexity of CAS, little work has been done on developing methods for assessing their content and construct validity. We have created a new q-sorting technique in which the hierarchies that independent raters develop are transformed into a quantitative form, and that quantitative form is tested to determine the inter-rater reliability of the individual branches in the hierarchy. The hierarchies are then successively transformed to test whether they branch in the same way. The objective of this paper is to identify suitable measures and a "good enough" threshold for demonstrating the similarity of two CAS trees. To find suitable measures, we perform a set of bootstrap simulations to measure how various statistics change as a hypothetical CAS deviates from a "true" version. We find that the three measures of association, Goodman and Kruskal's lambda, Cohen's kappa, and Goodman and Kruskal's gamma, together provide information useful for assessing construct validity in CAS. In future work we are interested both in finding a "good enough" threshold(s) for assessing the overall similarity between tree hierarchies and in diagnosing causes of disagreements between the tree hierarchies.
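To make the three association measures named above concrete, the following is a minimal illustrative sketch (not the authors' implementation) of Cohen's kappa, Goodman and Kruskal's lambda, and Goodman and Kruskal's gamma, computed from two hypothetical raters' category assignments; the data and variable names are invented for illustration.

```python
# Illustrative sketch of the three association measures named in the
# abstract. The rater data below are hypothetical, not from the paper.
from collections import Counter
from itertools import combinations

def cohens_kappa(x, y):
    """Chance-corrected agreement between two nominal ratings."""
    n = len(x)
    p_o = sum(a == b for a, b in zip(x, y)) / n          # observed agreement
    fx, fy = Counter(x), Counter(y)
    p_e = sum(fx[c] * fy[c] for c in set(x) | set(y)) / (n * n)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

def gk_lambda(x, y):
    """Goodman-Kruskal lambda: proportional reduction in error
    when predicting y from x."""
    n = len(y)
    modal_y = max(Counter(y).values())          # errors with no predictor
    cell = Counter(zip(x, y))
    best = {}                                   # modal y-count within each x
    for (a, _), c in cell.items():
        best[a] = max(best.get(a, 0), c)
    return (sum(best.values()) - modal_y) / (n - modal_y)

def gk_gamma(x, y):
    """Goodman-Kruskal gamma over concordant/discordant pairs (ordinal)."""
    conc = disc = 0
    for (a1, b1), (a2, b2) in combinations(zip(x, y), 2):
        s = (a1 - a2) * (b1 - b2)
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / (conc + disc)

# Hypothetical ordinal codes assigned by two independent raters
rater1 = [1, 1, 2, 2, 3, 3, 1, 2, 3, 3]
rater2 = [1, 2, 2, 2, 3, 3, 1, 1, 3, 2]
print(round(cohens_kappa(rater1, rater2), 3))  # 0.552
print(round(gk_lambda(rater1, rater2), 3))     # 0.5
print(round(gk_gamma(rater1, rater2), 3))      # 0.923
```

The three statistics answer complementary questions: kappa measures chance-corrected agreement, lambda measures how well one rater's categories predict the other's, and gamma captures ordinal concordance, which is presumably why the paper uses them together.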
Sabbaghan, Sahar; Gardner, Lesley; and Chua, Cecil Eng Huang (2017). "A THRESHOLD FOR A Q-SORTING METHODOLOGY FOR COMPUTER-ADAPTIVE SURVEYS". In Proceedings of the 25th European Conference on Information Systems (ECIS), Guimarães, Portugal, June 5-10, 2017 (pp. 2896-2906). ISBN 978-0-9915567-0-0. Research-in-Progress Papers.