Abstract

Conversational agents interact with users via the most natural interface: human language. A prerequisite for their successful diffusion across use cases is user trust. Following extant research, it is reasonable to assume that increasing the human-likeness of conversational agents represents an effective trust-inducing design strategy. The present article challenges this assumption by considering an opposing theoretical perspective on the human-agent trust relationship. Based on an extensive review of the two conflicting theoretical positions and related empirical findings, we posit that the agent substitution type (human-like vs. computer-like) represents a situational determinant of the trust-inducing effect of anthropomorphic design. We hypothesize that this effect is driven by user expectations and beliefs. A multi-method approach is proposed to validate our research model and to understand the cognitive processes triggered by anthropomorphic cues in varying situations. By explaining the identified theoretical contradiction and providing design suggestions, we derive meaningful insights for both researchers and practitioners.
