Abstract

Artificial intelligence (AI) has been criticized for its black-box nature, which obscures how outputs are derived. Some have proposed that explainable artificial intelligence (XAI) can address this issue and enhance users' trust in AI. Drawing on persuasion theory, we develop a research model that depicts how explanation vividness and user characteristics independently and jointly shape trust in AI. To test the model and its associated hypotheses, we conducted an online experiment. The results suggest that individual characteristics not only directly affect trust but also moderate the relationship between explanation vividness and trust.
