Paper Number

ICIS2025-1959

Paper Type

Complete

Abstract

As organizations deploy generative AI (GenAI) through conversational agents (CAs) to deliver personalized services, user trust becomes essential. Compared to scripted CAs, GenAI-based CAs generate open-ended, adaptive outputs, heightening opacity and privacy concerns. Yet the joint effects of two key interface cues (anthropomorphism and transparency) on trust remain unclear. Drawing on signaling theory and the theory of anthropomorphism, we examine how these cues shape trust in GenAI-based CAs through user perceptions. We conduct a 2×2 online experiment (N = 490), manipulating anthropomorphism and transparency. We show that both cues independently foster trust, but their combination can reduce it. Moreover, transparency not only increases trust directly but also lowers perceived privacy risk, a central user concern in GenAI interactions. Our findings extend signaling theory to the GenAI context and provide actionable guidance for designing trustworthy CAs, highlighting the importance of balancing human-like qualities with intelligible data-use explanations.
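To make the reported pattern concrete, the sketch below shows how a 2×2 between-subjects design like this one is typically analyzed: a two-way ANOVA on trust with anthropomorphism and transparency as factors, where positive main effects combine with a negative interaction term ("both cues independently foster trust, but their combination can reduce it"). The paper does not disclose its analysis code; the variable names, simulated data, and effect sizes here are hypothetical illustrations only.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_per_cell = 122  # roughly 490 participants across four cells (assumption)

rows = []
for anthro in (0, 1):          # anthropomorphism: absent / present
    for transp in (0, 1):      # transparency: absent / present
        # Illustrative effect pattern consistent with the abstract:
        # positive main effects, negative interaction.
        mu = 4.0 + 0.5 * anthro + 0.6 * transp - 0.7 * anthro * transp
        rows.append(pd.DataFrame({
            "anthropomorphism": anthro,
            "transparency": transp,
            "trust": rng.normal(mu, 1.0, n_per_cell),  # 7-point-scale proxy
        }))
df = pd.concat(rows, ignore_index=True)

# Two-way ANOVA with interaction: trust ~ A + T + A:T
model = smf.ols("trust ~ C(anthropomorphism) * C(transparency)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

With this simulated pattern, both main effects and the interaction come out significant, and the cell means show that adding either cue alone raises trust while combining them yields little or no further gain, mirroring the trade-off the authors report.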

Comments

12-GenAI

Dec 14th, 12:00 AM

Trusting Generative AI-based Conversational Agents: The Role of Anthropomorphism and Transparency as Trust Signals
