Paper Type
Complete
Paper Number
1199
Description
This study investigates the influence of toolkit support on algorithm innovation in innovation contest platforms. Amid the rapid growth of AI and the escalating demand for innovative algorithms, toolkits, which comprise pre-trained models, modular deployment, and collaboration features, have been embraced by innovation platforms in the expectation of augmenting algorithm innovation. However, the impact of such support on algorithm innovation performance remains ambiguous and calls for detailed examination. Leveraging a natural experiment on Kaggle.com, this research employs a regression discontinuity in time design to assess the causal effects of toolkit support on innovator performance. Results reveal that toolkit support significantly enhances innovation performance, but the benefits vary across user groups and contest conditions. Specifically, more experienced innovators gain significantly more, and the effects amplify in highly competitive environments while diminishing in contests with greater task complexity. This study contributes to the understanding of platform-based features in innovation, emphasizing the complexity of toolkit support within algorithm contests.
Recommended Citation
Qiu, Shouxiang and Zhao, Ling, "Can Toolkit Support Truly Elevate Algorithm Innovation? Unpacking the Impact through a Natural Experiment in an Innovation Contest Platform" (2024). PACIS 2024 Proceedings. 5.
https://aisel.aisnet.org/pacis2024/track06_dpe/track06_dpe/5
Can Toolkit Support Truly Elevate Algorithm Innovation? Unpacking the Impact through a Natural Experiment in an Innovation Contest Platform