Abstract

Crowdsourcing has experienced increasing popularity in recent years. While performance-related issues, such as the quantity or quality of output produced by the crowd, have been the focus of research, users' experience, which unfolds through interaction with the crowdsourcing platform and ultimately creates engagement, has been largely neglected. However, user engagement not only determines the scope of effort users put into the crowdsourcing task, but is also considered a determinant of future participation. This paper focuses on the role of task representation, manifested in mechanisms for crowd-based idea evaluation, as a potential stimulus for user engagement. Therefore, we conduct a web-based experiment with 198 participants to investigate how different task representations translate into differences in users' experience and their engagement. In particular, we analyze two distinctive task representations: sequential judgment tasks in the form of multi-criteria rating scales and simultaneous choice tasks in the form of enterprise crowdfunding. We find that differences in task representation influence user engagement, mediated by a user's perceived cognitive load. Moreover, our findings indicate that user engagement is determined by a user's perceived meaningfulness of a task. These results enhance our understanding of user engagement in crowdsourcing and contribute to theory building in this emerging field.
