Artificial Intelligence (AI) systems are rapidly reshaping industries, organizational structures, and decision-making processes. Despite their transformative potential, the pathway toward responsible, ethical, and trustworthy AI adoption remains fraught with assumptions, blind spots, and systemic risks. Recent policy initiatives such as the European Union’s proposed AI Act highlight the critical importance of responsibly managing AI applications in the workplace and underscore the growing global urgency around mitigating AI’s adverse effects on employee autonomy, fairness, transparency in decision-making, and inclusivity. Moreover, the widespread diffusion of Responsible AI guidelines and ethical principles demonstrates a commitment to proactively addressing and preventing the unintended ethical consequences of AI deployments (Jobin et al., 2019). Numerous high-level frameworks – ranging from government policy guidelines to industry-driven codes of conduct – have attempted to define what “responsible” or “trustworthy” AI means in practice (Klenk, 2024; Dwivedi & Kshetri, 2023). Yet despite this normative clarity, researchers and practitioners alike often struggle to operationalize these demands in real-world settings (Mittelstadt, 2019; Resseguier & Rodrigues, 2020).
Track Co-Chairs:
Aizhan Tursunbayeva, University of Naples Parthenope
Ward van Zoonen, Vrije Universiteit Amsterdam & University of Jyvaskyla
Ksenia Keplinger, Max Planck Institute for Intelligent Systems