Black Box or Open Science? Assessing Reproducibility-Related Documentation in AI Research
Location
Hilton Hawaiian Village, Honolulu, Hawaii
Event Website
https://hicss.hawaii.edu/
Start Date
January 3, 2024
End Date
January 6, 2024
Description
The surge in Artificial Intelligence (AI) research has spurred significant breakthroughs across various fields. However, AI is known for its Black Box character, which makes reproducing AI outcomes challenging. Open Science, emphasizing transparency, reproducibility, and accessibility, is crucial in this context, ensuring research validity and facilitating practical AI adoption. We propose a framework for assessing the quality of AI documentation and apply it to 51 papers. We conclude that, despite existing guidelines, many AI papers fall short on reproducibility due to insufficient documentation. It is crucial that authors provide comprehensive details on training data, source code, and AI models, and that reviewers and editors strictly enforce reproducibility guidelines. A dearth of detailed methods or inaccessible source code and models can raise questions about the authenticity of certain AI innovations, potentially impeding both their scientific value and their adoption. Although our sample size inhibits broad generalization, the study nonetheless offers key insights into enhancing the reproducibility of AI research.
Recommended Citation
Koenigstorfer, Florian; Haberl, Armin; Kowald, Dominik; Ross-Hellauer, Tony; and Thalmann, Stefan, "Black Box or Open Science? Assessing Reproducibility-Related Documentation in AI Research" (2024). Hawaii International Conference on System Sciences 2024 (HICSS-57). 3.
https://aisel.aisnet.org/hicss-57/cl/open_science_practices/3