Abstract
As artificial intelligence (AI) continues its rapid evolution, ethical considerations become increasingly critical. This study presents an analytical approach to assessing the perceived importance, alignment, and implementation of Responsible AI (RAI) principles within organizations. An extensive survey collected insights from 82 AI experts across industries. The findings reveal clear patterns in how RAI principles are prioritized. Principles like privacy, security, reliability, and safety received the highest importance ratings, reflecting their status as foundational elements. Principles such as benevolence and non-maleficence were viewed as moderately important, while transparency, fairness, and inclusiveness were rated as relatively lower priorities. This prioritization is also reflected in perceptions of alignment and implementation, with the higher-rated principles demonstrating stronger organizational alignment and operationalization. The results suggest that organizations may face challenges in effectively addressing certain RAI principles, potentially due to factors like varying expertise, resource constraints, and the complexity of translating text-based principles into concrete algorithmic implementations.
Recommended Citation
Akbarighatar, Pouria; Pappas, Ilias O.; Vassilakopoulou, Polyxeni; and Purao, Sandeep, "Responsible AI Principles: Findings from an Empirical Study on Practitioners' Perceptions" (2024). UK Academy for Information Systems Conference Proceedings 2024. 1.
https://aisel.aisnet.org/ukais2024/1