Abstract
Technical advances in machine learning (ML) and artificial intelligence (AI) are shaping transformation in organisations, society and research. Yet, adoption lags behind, as implementation is costly and requires experts, who are scarce on the market. Automated ML (autoML) promises to overcome these barriers and help democratize ML by empowering domain specialists to develop ML models easily and cheaply. However, the use of autoML by non-experts in science raises concerns regarding reproducibility, undermining research credibility. This paper examines the extent to which users without in-depth ML knowledge are supported by no-code autoML tools in ensuring research reproducibility. The results of this study uncover human-related and tool-related opportunities and challenges. Addressing these requires a multifaceted, design-oriented approach that incorporates open science principles. In this way, the full potential of no-code autoML tools can be realized while ensuring reproducibility and, ultimately, the credibility of research.
Recommended Citation
Pletzl, Sabrina; Haberl, Armin; Ross-Hellauer, Tony; and Thalmann, Stefan, "Reproducible AutoML: An Assessment of Research Reproducibility of No-Code AutoML Tools" (2024). Wirtschaftsinformatik 2024 Proceedings. 95.
https://aisel.aisnet.org/wi2024/95