Abstract

Trust is a key determinant of whether users adopt and rely on Artificial Intelligence (AI) systems in decision-support contexts (Glikson & Woolley, 2020). While research on algorithm aversion suggests that users often abandon AI tools after seeing them make even minor errors (Dietvorst et al., 2015), studies in human-computer interaction have shown that AI agents with an empathetic tone or communication style can enhance perceived trust and likability (Brave et al., 2005). This study explores the trade-off between accuracy and tone in shaping user trust in AI. We propose an experiment in which participants will complete a simple task with the assistance of AI agents. Agent-1 responds with an empathetic tone and elaborative explanations, while Agent-2 is neutral and concise. Before the main task, participants will interact with both agents in a controlled environment (preformatted Q&A) and then decide which agent to use for the task. This step serves as a behavioral measure of initial trust. Drawing on the affective computing and cognitive trust literature, we hypothesize that most participants will prefer Agent-1, in line with prior findings that empathetic AI agents may foster faster and stronger trust perceptions (Tsumura & Yamada, 2024). In Part 2, the selected agent completes a multi-step task. During the process, the agent deliberately makes controlled errors (e.g., exceeding a word limit). Participants receive feedback after each step and are given the option to switch agents. This design allows us to measure error tolerance by comparing how quickly users who initially chose Agent-1 versus Agent-2 decide to switch. We hypothesize that users who start with the empathetic agent will exhibit greater error tolerance because of the affective bond formed earlier (de Visser et al., 2016), whereas those who start with the neutral agent will switch more quickly after errors. This research contributes to the literature on trust in AI and affective computing by identifying, through a cognitive lens, how communication style and performance jointly influence the formation and recalibration of user trust. The findings will inform the design of AI systems in domains where both emotional engagement and decision reliability matter, such as healthcare, education, and customer service.

References

Brave, S., Nass, C., & Hutchinson, K. (2005). Computers that care: Investigating the effects of orientation of emotion exhibited by an embodied computer agent. International Journal of Human-Computer Studies, 62(2), 161–178.

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126.

de Visser, E. J., et al. (2016). Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied, 22(3), 331–349.

Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660.

McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research, 13(3), 334–359.

Tsumura, T., & Yamada, S. (2024). Making an agent’s trust stable in a series of success and failure tasks through empathy. Frontiers in Computer Science, 6, 1461131.

