Paper Number

ECIS2025-2046

Paper Type

CRP

Abstract

Driven by the maturing of Large Language Models (LLMs), companies have begun to implement Conversational Agents (CAs) (e.g., chatbots) for customer service. CAs are often designed to appear human-like (e.g., with a human name and avatar), which increases service satisfaction. However, LLMs are prone to "hallucinations" (i.e., generating inaccurate or non-existent information). In this research, we investigate this LLM-specific error type. According to algorithm aversion theory, errors made by algorithms are penalized more heavily than those made by humans. We hypothesize that hallucinations follow the same pattern. Based on the Computers-Are-Social-Actors (CASA) theory, this expectation should also extend to human-like CAs. The results of our online experiment show that perceived humanness positively affects service satisfaction and mitigates the negative effect of hallucinations. For theory, we provide evidence that hallucinations follow the same pattern as other types of errors. For practitioners, we recommend implementing human-like, LLM-based CAs.

Author Connect URL

https://authorconnect.aisnet.org/conferences/ECIS2025/papers/ECIS2025-2046

Title

Can you imagine? How Perceived Humanness Influences the Negative Effect of Hallucinations by Conversational Agents
