
Author ORCID Identifier
David Horneber: https://orcid.org/0000-0001-6866-5146
Abstract
The rapid development of artificial intelligence (AI) systems has raised concerns about their ethical, legal, and social risks. Despite notable progress in the development of responsible AI frameworks, methods, and tools, research shows that many organizations struggle to implement responsible AI effectively. I review prior research on responsible AI to explain its insufficient implementation in organizations. Drawing on neo-institutional theory, I find that this ineffective implementation can be explained by policy-practice decoupling (i.e., organizational responsible AI policies are adopted but not implemented in practice) and means-end decoupling (i.e., organizational responsible AI policies are implemented in practice but do not achieve their intended goals). AI practitioners play a key role as institutional entrepreneurs or custodians in driving or inhibiting the implementation of responsible AI. I contribute to the literature on responsible AI by exploring the institutional pressures that drive or inhibit its implementation, synthesizing the challenges to its implementation, and providing an overview of the roles and strategies AI practitioners use when implementing responsible AI. I propose several avenues for future research and discuss implications for research and practice.
Recommended Citation
Horneber, D. (In press). Understanding the Implementation of Responsible Artificial Intelligence in Organizations: A Neo-Institutional Theory Perspective. Communications of the Association for Information Systems, 57, pp-pp. Retrieved from https://aisel.aisnet.org/cais/vol57/iss1/8