Track: Virtual Communities and Collaborations
Abstract
Crowds can be used to generate and evaluate design solutions. To increase a crowdsourcing system's effectiveness, we propose and compare two evaluation methods: one using five-point Likert scale ratings and the other using prediction voting. Our results indicate that although the two evaluation methods correlate, they serve different goals: whereas prediction voting focuses evaluators on identifying the very best solutions, rating focuses evaluators on the entire range of solutions. Thus, prediction voting is appropriate when many poor-quality solutions need to be filtered out, and rating is better suited when all ideas are reasonable and distinctions need to be made across all solutions. The crowd prefers participating in prediction voting. These results have pragmatic implications, suggesting that evaluation methods should be matched to the distribution of solution quality present at each stage of crowdsourcing.
Recommended Citation
Bao, Jin; Sakamoto, Yasuaki; and Nickerson, Jeffrey V., "Evaluating Design Solutions Using Crowds" (2011). AMCIS 2011 Proceedings - All Submissions. Paper 446.
https://aisel.aisnet.org/amcis2011_submissions/446