AI in Business and Society
Paper Number: 1813
Paper Type: Short
Description
The opaque and incomprehensible nature of artificial intelligence (AI) raises questions about who can and will take responsibility for AI in organizations. Examining the relation between explainability and responsibility, we explore how AI responsibility attributions unfold when 1) AI shifts the tasks and roles of individuals and 2) these individuals lack comprehension of the task elements they are expected to take responsibility for. Through an in-depth qualitative field study in a large organization, we identify three types of responsibility attributions in decision-making with AI: shared responsibility, data science-centered responsibility, and business domain expert-centered responsibility. We explain these three prevalent types of responsibility attributions through the interaction of different shifts in AI-related tasks and the corresponding AI explainability needs and actions in the organization. Our study contributes to the existing literature by demonstrating AI's impact on traditional responsibility assignment in day-to-day organizational practices.
Recommended Citation
Thuis, Tamara; Li, Ting; and van Heck, Eric, "Who Takes Responsibility for AI? A Field Study on AI-Related Task Shifts, Explainability, and Responsibility Attributions" (2023). ICIS 2023 Proceedings. 3.
https://aisel.aisnet.org/icis2023/aiinbus/aiinbus/3
Who Takes Responsibility for AI? A Field Study on AI-Related Task Shifts, Explainability, and Responsibility Attributions