Abstract
Crowdsourcing has experienced increasing popularity in recent years. While performance-based issues, such as the quantity or quality of the output produced by the crowd, have been the focus of research, users’ experience, which unfolds through interaction with the crowdsourcing platform and ultimately creates engagement, has been largely neglected. User engagement, however, not only determines the effort users put into a crowdsourcing task but is also considered a determinant of future participation. This paper focuses on the role of task representation, manifested in mechanisms for crowd-based idea evaluation, as a potential stimulus for user engagement. To this end, we conduct a web-based experiment with 198 participants to investigate how different task representations translate into differences in users’ experience and engagement. In particular, we analyze two distinct task representations: sequential judgement tasks in the form of multi-criteria rating scales and simultaneous choice tasks in the form of enterprise crowdfunding. We find that differences in task representation influence user engagement, mediated by users’ perceived cognitive load. Moreover, our findings indicate that user engagement is determined by the perceived meaningfulness of the task. These results enhance our understanding of user engagement in crowdsourcing and contribute to theory building in this emerging field.
Recommended Citation
Benz, Carina; Zierau, Naim; and Satzger, Gerhard (2019). "NOT ALL TASKS ARE ALIKE: EXPLORING THE EFFECT OF TASK REPRESENTATION ON USER ENGAGEMENT IN CROWD-BASED IDEA EVALUATION". In Proceedings of the 27th European Conference on Information Systems (ECIS), Stockholm & Uppsala, Sweden, June 8-14, 2019. Research Papers. ISBN 978-1-7336325-0-8.
https://aisel.aisnet.org/ecis2019_rp/59