Journal of the Association for Information Systems

Abstract

The use of conversational AI agents (CAs), such as Alexa and Siri, has steadily increased over the past several years. However, the functionality of these agents relies on the personal data obtained from their users. While evidence suggests that user disclosure can be increased through reciprocal self-disclosure (i.e., a process in which a CA discloses information about itself with the expectation that the user will reciprocate by disclosing similar information about themselves), it is not clear whether, and through which mechanism, the process of reciprocal self-disclosure influences users’ post-interaction trust. We theorize that anthropomorphism (i.e., the extent to which a user attributes humanlike attributes to a nonhuman entity) serves as an inductive inference mechanism for understanding reciprocal self-disclosure, enabling users to build conceptually distinct cognitive and affective foundations upon which to form their post-interaction trust. We found strong support for our theory through two randomized experiments that used custom-developed text-based and voice-based CAs. Specifically, we found that reciprocal self-disclosure increases anthropomorphism, and that anthropomorphism in turn increases both cognition-based and affect-based trustworthiness. Our results show that reciprocal self-disclosure has an indirect effect on cognition-based and affect-based trustworthiness that is fully mediated by anthropomorphism. These findings conceptually bridge prior research on the motivations of anthropomorphism with research on the cognitive and affective bases of trust.

DOI

10.17705/1jais.00839
