SIG DSA - Data Science and Analytics for Decision Support
Paper Type
Complete
Paper Number
1383
Description
Artificial Intelligence (AI) is becoming increasingly common, but adoption in sensitive use cases lags because the black-box character of AI hinders auditing and trust-building. Explainable AI (XAI) promises to make AI transparent, allowing for auditing and increasing user trust. However, in sensitive use cases the goal is not to maximize trust, but rather to balance caution and trust and so reach an appropriate level of trust. Studies on user perception of XAI in professional contexts, and especially in sensitive use cases, are scarce. We present the results of a case study involving domain experts as users of a prototype XAI-based IS for decision support in quality assurance in pharmaceutical manufacturing. We find that for this sensitive use case, simply delivering an explanation falls short if it does not match the experts' beliefs about what information is critical for a certain decision. Unsuitable explanations override all other quality criteria. Suitable explanations can, together with other quality criteria, lead to a suitable balance of trust and caution in the system. Based on our case study, we discuss design options in this regard.
Recommended Citation
Kloker, Anika; Fleiß, Jürgen; Koeth, Christoph; Kloiber, Thomas; Ratheiser, Patrick; and Thalmann, Stefan, "Caution or Trust in AI? How to design XAI in sensitive Use Cases?" (2022). AMCIS 2022 Proceedings. 16.
https://aisel.aisnet.org/amcis2022/sig_dsa/sig_dsa/16
Caution or Trust in AI? How to design XAI in sensitive Use Cases?