Can “AI” Repair Trust? Comparing Human-Like and System-Like Repair Strategies

Paper Number

ICIS2025-1164

Paper Type

Complete

Abstract

This study investigates the effectiveness of trust repair strategies implemented by AI-based conversational agents (CAs) after errors. We compare system-like trust repair strategies based on the eXplainable AI paradigm (local explanations, counterfactual options) with human-like repair strategies grounded in Computers Are Social Actors (CASA) theory (apology, asking questions). In an online experiment, 357 participants interacted with a CA in a simulated e-commerce customer service scenario. The CA committed either a misunderstanding or a non-understanding error and then implemented one of four trust repair strategies. Results show that both the human-like apology and the system-like local explanation and counterfactual options significantly increased post-interaction trust compared to no repair, whereas asking questions did not. Overall, no significant difference was found between human-like and system-like strategies. Self-repair strategies were more effective than user-assisted repair. The study provides insights into AI-implemented trust repair and highlights the potential of both human-like and system-like repair strategies.

Comments

16-UserBehavior
