Paper Number
1349
Paper Type
Complete Research Paper
Abstract
Explainable Artificial Intelligence (AI) aims to provide insight into the inner workings of black-box AI systems and thereby increase trust through the provision of local and global explanations. Nonetheless, the precise effects of explanations on AI trust remain ambiguous. We investigate (1) the effect of known trust antecedents on trust over the course of an interaction with an AI-based system, (2) how a global explanation influences these antecedents, (3) how usage of a system with or without experiencing an expectation violation influences these antecedents, and (4) how the provision of a local explanation influences these antecedents, differentiated by whether an expectation violation had previously been experienced. We found all but one of the investigated antecedents to be significant predictors of trust. Additionally, we demonstrate the precise effects of global explanations, system usage with and without experiencing an expectation violation, and local explanations on trust antecedents.
Recommended Citation
de Zoeten, Marc Christoph; Ernst, Claus-Peter H.; and Rothlauf, Franz, "The Effect of Explainable AI on AI-Trust and Its Antecedents over the Course of an Interaction" (2024). ECIS 2024 Proceedings. 10.
https://aisel.aisnet.org/ecis2024/track03_ai/track03_ai/10
The Effect of Explainable AI on AI-Trust and Its Antecedents over the Course of an Interaction