Social networks – whether public or in enterprises – regularly ask users to rate their peers’ content using different voting techniques. When employed in innovation challenges, these rating procedures are part of an open, interactive, and continuous engagement among customers, employees, or citizens. In this regard, assessment accuracy (i.e., correctly identifying good and bad ideas) in crowdsourced evaluation processes may be influenced by the display of peer ratings. While it can sometimes be useful for users to follow their peers, it is not entirely clear under which circumstances this actually holds true. Thus, in this research-in-progress article, we propose a study design to systematically investigate the effect of peer ratings on assessment accuracy in crowdsourced idea evaluation processes. Based on the elaboration likelihood model and social psychology, we develop a research model that incorporates the moderating factors extraversion, locus of control, and peer rating quality (i.e., the ratings’ correlation with the evaluated content’s actual quality). We suggest that the availability of peer ratings decreases assessment accuracy and that rating quality, extraversion, and an internal locus of control mitigate this effect.
Wagenknecht, Thomas; Teubner, Timm; and Weinhardt, Christof (2017). "Peer Ratings and Assessment Quality in Crowd-Based Innovation Processes". In Proceedings of the 25th European Conference on Information Systems (ECIS), Guimarães, Portugal, June 5-10, 2017 (pp. 3144-3154). Research-in-Progress Papers. ISBN 978-0-9915567-0-0.