Abstract

The development and deployment of Artificial Intelligence (AI) in organizations is of growing interest to the Information Systems (IS) discipline. It can be approached from a sociotechnical perspective, which helps manage the unintended outcomes of AI while extending the boundaries of AI use. This paper presents the findings of a systematic literature review on organizational maturity and readiness for AI development and deployment. A key result is that extant research has lost sight of the humanistic and ethical aspects of AI systems, and that principles related to responsible AI are not sufficiently defined. This is a hurdle because principles for responsible AI are fundamental to AI development and deployment that ensures long-term benefits. Drawing on the findings of the literature review, we provide a conceptual maturity model with two main dimensions (responsible and instrumental), twelve conditions, and thirty factors. The maturity and readiness factors for responsible AI are derived from a synthesis of 35 articles in the related literature. Specifically, the paper identifies six capabilities for responsible AI: AI model, cooperative AI, ethical awareness, laws and regulations, data governance, and continuous improvement. Six instrumental capabilities are also identified: strategic alignment, technology, culture, data management, financial, and human resource management.
