Paper Type
Complete
Abstract
This study examines the governance challenges of Large Language Models (LLMs) in healthcare through the lens of human-machine interaction configurations. By systematically analyzing "Human-in-the-Loop," "Human-on-the-Loop," and "Human-out-of-the-Loop" paradigms, we identify distinct risk profiles and their generative mechanisms across these configurations. Contrary to conventional assumptions, we demonstrate that increased human involvement does not linearly correlate with enhanced system safety. Our findings reveal an "impossible trinity" in healthcare LLM governance—no single configuration can simultaneously optimize for efficiency, safety, and ethical alignment. Drawing upon socio-technical systems theory and empirical case analyses, we propose a multi-dimensional governance framework encompassing technological adaptations, institutional safeguards, and distributed accountability mechanisms. This research contributes to information systems literature by shifting from technocentric approaches toward a dynamic understanding of how risks emerge from specific human-machine relational structures, ultimately providing pathways to transform healthcare LLMs from potential "silent accomplices" in medical errors to "intelligent colleagues" in clinical decision-making.
Paper Number
2106
Recommended Citation
Zhang, Yuxin; Yan, Haiyan; and Qi, Jiayin, "From Silent Accomplices to Intelligent Colleagues: Governing Risks in Healthcare LLMs Through Dynamic Human-Machine Loop Configurations" (2025). AMCIS 2025 Proceedings. 2.
https://aisel.aisnet.org/amcis2025/intelfuture/intelfuture/2
From Silent Accomplices to Intelligent Colleagues: Governing Risks in Healthcare LLMs Through Dynamic Human-Machine Loop Configurations
Comments
IntelFuture