AI in Business and Society
Paper Number: 1930
Paper Type: short
Description
As artificial intelligence (AI) systems increasingly make impactful decisions in the workplace, issues of explainability have gained prominence. However, current debates around the explainability of AI either take a technical perspective or focus on the use of AI for augmentation, in which professionals can choose to ignore or override AI outputs when hindered by opacity. Given that current AI tools are increasingly able to act on their own, a deeper understanding is needed of how professionals manage explainability in cases of AI automation. Building on a comparative field study, we identify different practices that professionals enacted to produce post hoc explanations of AI-made decisions to clients. These practices varied depending on whether professionals relied on their own expertise or on AI techniques, and on whether they engaged deeply with the AI tool in constructing explanations. Our preliminary findings yield important implications for the literature on AI and professions.
Recommended Citation
Mayer, Anne; van den Broek, Elmira; Karacic, Tomislav; and Huysman, Marleen, "Navigating explainability: A comparative field study of how professionals explain AI-made decisions to clients" (2023). ICIS 2023 Proceedings. 12.
https://aisel.aisnet.org/icis2023/aiinbus/aiinbus/12
Navigating explainability: A comparative field study of how professionals explain AI-made decisions to clients
Comments: 10-AI