Paper Number
ICIS2025-1043
Paper Type
Short
Abstract
Generative Artificial Intelligence (GenAI) is rapidly becoming a fixture in public policymaking, yet contemporary research often emphasizes its technical benefits while overlooking how institutional actors manipulate outputs to align with political objectives. That omission leaves unexamined how AI-generated insights are reframed, softened, or delayed within bureaucratic processes. The present study addresses this gap by examining how public institutions adapt to, contest, and reconfigure GenAI in practice. Rather than streamlining decision-making, GenAI redistributes discretion: upward to political elites, who filter outputs for strategic alignment, and downward to technical staff, who recalibrate conclusions under pressure. Findings reveal a form of algorithmic pliability, in which the credibility of AI depends less on computational logic than on how it is negotiated within institutional hierarchies. This research advances theory by repositioning AI as a politically malleable actor and informs practice by identifying structural conditions that either constrain or enable the ethical use of AI in governance.
Recommended Citation
Kuika Watat, Josue; Jonathan, Gideon Mekonnen; and Zhang, Lidan, "Why Generative Artificial Intelligence Does Not Survive First Contact with Bureaucracy" (2025). ICIS 2025 Proceedings. 1.
https://aisel.aisnet.org/icis2025/public_is/public_is/1
Why Generative Artificial Intelligence Does Not Survive First Contact with Bureaucracy