Abstract

Recent trends in AI development, exemplified by innovations such as automated machine learning and generative AI, have significantly increased the bottom-up organizational deployment of AI. No- and low-code AI tools empower domain experts to develop AI and thus foster organizational innovation. At the same time, the inherent opaqueness of AI, compounded by the abandonment of the requirement to follow rigorous IS development and implementation methods, implies a loss of oversight over the IT for individual domain experts and their organizations, and an inability to account for the regulatory requirements on AI use. We build on expert knowledge of no- and low-code AI deployment in different types of organizations, and on the emerging theorizing on weakly structured systems (WSS), to argue that conventional methods of software engineering and IS deployment cannot help organizations manage the risks of innovation-fostering bottom-up development of ML tools by domain experts. In this research-in-progress paper, we review the inherent risks and limitations of AI - opacity, explainability, bias, and controllability - in the context of ethical and regulatory requirements. We argue that maintaining human oversight is pivotal for bottom-up ML developments to remain “under control” and suggest directions for future research on how to balance innovation potential and risk in bottom-up ML development projects.
