Paper Type
Complete
Paper Number
1087
Recommended Citation
Carvalho, Arthur and Karimi, Majid, "Selecting the Optimal Number of Crowd Workers for Forecasting Tasks" (2023). AMCIS 2023 Proceedings. 1.
https://aisel.aisnet.org/amcis2023/sig_sourcing/sig_sourcing/1
Selecting the Optimal Number of Crowd Workers for Forecasting Tasks
Crowdsourcing platforms enable requesters to elicit information from thousands of workers worldwide. A question that arises when outsourcing a task to a crowd is how many crowd workers a requester should hire. Focusing on forecasting tasks, we provide a methodological way of answering that question by formally describing how the number of crowd workers relates to the expected forecast error. Specifically, we discuss how an error curve associates the number of hired crowd workers with a fixed and a variable error. We then suggest an optimality concept that enables requesters to select the optimal number of crowd workers based on the tolerable variable error. In an illustrative study on Amazon Mechanical Turk, we demonstrate the prescriptive value of our approach. In particular, we show how requesters can perform ex-ante and ex-post analyses: determining the optimal number of workers before running a task, and determining the fixed error after running the task and collecting forecasts.
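As a rough illustration of the idea (not the paper's exact error curve), suppose the variable error of the averaged forecast shrinks as sigma^2 / n while the fixed error does not depend on n; then the optimal number of workers for a tolerable variable error epsilon is the smallest n with sigma^2 / n <= epsilon. The function names and parameter values in the sketch below are hypothetical.

```python
import math


def optimal_workers(sigma_sq: float, tolerable_variable_error: float) -> int:
    """Smallest number of workers whose variable error is at most the tolerance.

    Assumes the variable error of the averaged forecast decays as sigma_sq / n,
    the standard behaviour when averaging independent forecasts with per-worker
    error variance sigma_sq. The paper's actual error curve may differ; this is
    only an illustrative stand-in.
    """
    if tolerable_variable_error <= 0:
        raise ValueError("tolerance must be positive")
    return max(1, math.ceil(sigma_sq / tolerable_variable_error))


def expected_error(fixed_error: float, sigma_sq: float, n: int) -> float:
    """Expected forecast error with n workers: fixed part plus variable part."""
    return fixed_error + sigma_sq / n


# Ex-ante analysis: choose n before running the task from an assumed worker variance.
n_star = optimal_workers(sigma_sq=0.04, tolerable_variable_error=0.005)  # -> 8

# Ex-post analysis: once forecasts are collected, subtracting the variable part
# sigma_sq / n_star from the observed error gives an estimate of the fixed error.
```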
SIG Sourcing