SIG DSA - Data Science and Analytics for Decision Support


Paper Type

Complete

Paper Number

1383

Description

Artificial Intelligence (AI) is becoming increasingly common, but adoption in sensitive use cases lags because the black-box character of AI hinders auditing and trust-building. Explainable AI (XAI) promises to make AI transparent, allowing for auditing and increasing user trust. However, in sensitive use cases the goal is not to maximize trust but to balance caution and trust and reach an appropriate level of trust. Studies on user perception of XAI in professional contexts, and especially in sensitive use cases, are scarce. We present the results of a case study involving domain experts as users of a prototype XAI-based IS for decision support in quality assurance in pharmaceutical manufacturing. We find that for this sensitive use case, simply delivering an explanation falls short if it does not match experts' beliefs about which information is critical for reaching a given decision. Unsuitable explanations override all other quality criteria, whereas suitable explanations can, together with the other quality criteria, lead to an appropriate balance of trust and caution in the system. Based on our case study, we discuss design options in this regard.

Comments

SIG DSA

Aug 10th, 12:00 AM

Caution or Trust in AI? How to design XAI in sensitive Use Cases?
