Abstract

Artificial Intelligence (AI) ethics research is a multifaceted field, requiring different theoretical justifications in which researchers can ground their underlying perspectives on ethics. We provide an overview of the major normative ethical theories used in Information Systems research on AI ethics. Through a systematic scoping review, we assess the prevailing theories, their progress, and areas needing further study. Our findings reveal a dominance of deontological ethics, which frames ethics mainly from the AI's perspective through ethical design principles, rather than from a virtue ethics perspective that considers how a human user's moral character guides their behavior when collaborating with AI as an equal partner. We suggest that researchers recognize how normative ethical theories might bind their work, shaping their understanding of moral agency and responsibility and guiding Corporate Digital Responsibility practices for organizations striving for responsible AI design, deployment, and usage.
