Paper Number
1117
Abstract
Conversational agents (CAs) are frequently used in customer service. However, state-of-the-art CAs often fail to understand user input. Such failures represent trust violations and can thus stimulate negative word-of-mouth. Nevertheless, little attention has been paid to these trust violations or to strategies to mitigate the associated trust loss. Initial evidence exists that anthropomorphic design may mitigate trust loss following a failure in human–computer interactions. From a provider’s perspective, protecting trust is important to keep users engaged and satisfied. From a user’s perspective, this creates ethical concerns, as anthropomorphic design may manipulate users’ assessment of the CA. Therefore, we investigated the role of anthropomorphic design as a trust shield. We developed a research model by integrating literature on trust in technology, trust repair, and anthropomorphic design. The results of our experimental study suggest that anthropomorphic design is an effective trust shield. We discuss these findings from theoretical, practical, and ethical perspectives.
Recommended Citation
Seeger, Anna-Maria and Heinzl, Armin, "Chatbots often Fail! Can Anthropomorphic Design Mitigate Trust Loss in Conversational Agents for Customer Service?" (2021). ECIS 2021 Research Papers. 12.
https://aisel.aisnet.org/ecis2021_rp/12