As a relatively new IT artifact, voice assistants are rapidly gaining the ability to interact with users in more natural ways, driven by advances in Artificial Intelligence: they speak with human-like voices and understand spoken information. Although prior studies have shown that the voice human-likeness of software agents can improve users' evaluations and experience, it remains unknown how the inseparable speaking (voice) and listening (understanding) qualities of voice assistants jointly shape users' social perception and trust. Drawing on cognitive consistency theory, we propose a congruency effect between these two conversation qualities: users may perceive lower social presence and be less likely to trust a voice assistant when they experience mismatched human-like conversation qualities. An online survey will be conducted to collect data, and polynomial modeling with response surface methodology will be used to test our congruency hypotheses. Potential implications for theory and practice are also discussed.
Hu, Peng; Wang, Kun; and Liu, Jingwen (2019). "Speaking and Listening: Mismatched Human-like Conversation Qualities Undermine Social Perception and Trust in AI-based Voice Assistants." PACIS 2019 Proceedings, 81.