Management Information Systems Quarterly
Abstract
An outcome of the rising popularity of crowdsourcing contest platforms has been an increase in the number of tasks available on a platform at the same time. These concurrent tasks compete for the attention of the same pool of solvers. There is some evidence that an increase in the number of competing tasks can reduce the number of solvers participating in a focal task. However, it is not clear exactly how competing tasks influence the quality of a focal task, as represented by the best solution it receives. Using data collected from a popular crowdsourcing platform, we investigate the effects of competing tasks and uncover important underlying mechanisms governing the relationship between competing tasks and task quality. Through analyses at the task, solver, and submission levels, we establish that competing tasks can influence task quality in two ways. First, the number of solvers in each task decreases when there are more competing tasks, and this reduction negatively affects task quality. Second, the availability of a greater number of competing tasks leads solvers to participate in more tasks simultaneously. This multitasking in turn allows solvers to learn from other tasks, generate better solutions, and improve task quality. Interestingly, we find that these two opposing forces offset each other, so that the overall solution quality of the focal task is unaffected even though an increase in the number of competing tasks reduces the number of solvers participating in it.