Abstract
Artificial Intelligence (AI) is transforming organizational decision-making, service delivery, and workforce structures, but unchecked adoption risks undermining employee trust and organizational legitimacy (Belanche, Belk, Casaló, & Flavián, 2024; Papagiannidis, Mikalef, Conboy, & Van de Wetering, 2023). This study examines the dark side of AI by integrating Socio-Technical Systems Theory (Bostrom, Gupta, & Thomas, 2009) and Trust Calibration Theory (Kuipers, 2022) to propose a model linking job displacement risk, privacy concerns, perceived bias, and governance opacity to key workforce outcomes. Employee trust and perceived fairness are positioned as mediators influencing job satisfaction and organizational legitimacy. We draw on recent empirical findings to develop a set of testable hypotheses and propose a research design employing surveys, fairness audits, and trust calibration measures to validate the model. The paper contributes to the AI governance literature by providing (1) a categorization of AI risk factors grounded in recent work on ethical AI, (2) a theory-driven model that explains how these risks shape workforce outcomes, and (3) actionable managerial implications for executives seeking to design responsible AI governance frameworks. By addressing psychological, ethical, and organizational dimensions, this research offers a roadmap for aligning AI deployment with both performance objectives and stakeholder trust.
Recommended Citation
Abayomi, Olushola Bunmi and Noordeen, Abdul Rahman, "Trusting the Black Box - Dark Side of AI Risks and Ethical Concerns" (2025). Proceedings of the 2025 Pre-ICIS SIGDSA Symposium. 73.
https://aisel.aisnet.org/sigdsa2025/73