Paper Type
Short
Paper Number
PACIS2025-1485
Description
With the advancement of artificial intelligence and speech synthesis technologies, individuals are increasingly interacting with voice-based intelligent agents (e.g., Apple Siri, OpenAI ChatGPT, Watsonx Assistant, and XiaoAi) for tasks such as information retrieval, online shopping, and navigation. Traditionally, these agents use standardized human-like voices; however, a growing trend allows users to personalize them by cloning the voices of significant others. Although voice customization enhances user engagement and experience, it also raises ethical concerns, particularly regarding the risks associated with deepfakes. Given these complexities, a deeper understanding of the psychological and behavioral effects of voice customization is essential. This study aims to fill this gap by examining the effects of voice customization. Specifically, drawing on the transference literature, we explore how a customized voice influences users' perceptions of intelligent agents and their subsequent behaviors. We will conduct a lab experiment in a controlled human-agent interaction environment to test our hypotheses.
Recommended Citation
Zhu, Fenfen and Choi, Ben, "When Intelligent Agents Sound Like My Friend: Understanding Transference Effects in Voice-Based Interactions" (2025). PACIS 2025 Proceedings. 4.
https://aisel.aisnet.org/pacis2025/is_adoption/is_adoption/4
When Intelligent Agents Sound Like My Friend: Understanding Transference Effects in Voice-Based Interactions
Comments
Innovation