Abstract

As companies delegate customer service tasks to AI agents, disclosing a service chatbot’s AI identity raises a delegation issue. Existing research primarily adopts a capability-centered perspective and offers limited insight into such AI delegation problems. This study examines (1) the effect of disclosing a service chatbot’s AI identity on consumers’ trust and (2) how the delegation stage, response formality, and interaction modality moderate this effect. Drawing on Principal-Agent Theory, we develop the relevant hypotheses. Results show that disclosing a service chatbot’s AI identity triggers negative emotional responses, reducing perceived trustworthiness. Chatbots with high interaction modality mitigate these negative effects relative to those with low interaction modality. Moreover, the negative effect of AI identity disclosure is stronger in the post-purchase stage and when responses are presented with lower formality. Implications for theory and practice are discussed.
