Sharing Economy, Platforms and Crowds
Paper Number
1725
Paper Type
Completed
Description
Firms are increasingly adopting crowdsourcing contests to acquire innovative solutions to challenging problems. As problems become increasingly complex, no single individual may possess the full range of knowledge required to develop an effective solution. However, there is a paucity of theory on the process of combining contestants’ diverse expertise via teaming. In this paper, we systematically explore: a) with whom contestants should team up; b) when and how they should form teams; and c) the outcomes of strategic teaming, in order to develop a comprehensive theory from a (re)combination perspective. Using simulation experiments and empirical validation, we find that collaboration among contestants with different expertise increases team performance, albeit conditionally, depending on the extent of knowledge overlap between contestants and the timing of team formation. More interestingly, there is a misalignment between contestant-level and platform-level outcomes. These findings provide new insights into contestant performance and crowdsourcing quality, and have implications for the design of crowdsourcing platforms.
Recommended Citation
Zhou, Junjie and Hahn, Jungpil, "Making the Crowd Wiser: (Re)combination through Teaming in Crowdsourcing" (2021). ICIS 2021 Proceedings. 8.
https://aisel.aisnet.org/icis2021/sharing_econ/sharing_econ/8
Making the Crowd Wiser: (Re)combination through Teaming in Crowdsourcing
Comments
09-Crowds