Abstract
The anonymized peer review process, fundamental to maintaining the quality of scientific publications, has been both praised and criticized. Concerns over various biases, including those related to authors' affiliations, gender, and groundbreaking ideas, have led to calls for critical reflection. In addition, scientific output, and hence the number of necessary reviews, has increased tremendously. In this context, machine learning models, and more specifically large language models such as ChatGPT, have been explored as potential means to enhance the reviewing process. Yet AI can itself be biased, and a systematic approach to designing AI systems that mitigate bias in peer review is still lacking. Our study aims to address this gap by formulating design principles for (gen)AI-augmented review systems. Using an echeloned design science research (DSR) methodology, the project seeks to develop new design knowledge and create a prototype system incorporating these principles.
Recommended Citation
Meske, Christian; Eisenhardt, Daniel; Šešelja, Dunja; Straßer, Christian; and Schneider, Johannes, "Mitigating Bias In Academic Publishing: Towards Responsible (Gen)AI-Augmentation In Peer-Review Processes" (2024). MCIS 2024 Proceedings. 30.
https://aisel.aisnet.org/mcis2024/30