Paper Number

2305

Paper Type

Complete

Description

A growing number of organizations are deploying generative AI via chat interfaces, a technology commonly known as a conversational agent (CA), to support workers. Although AI is constantly improving, it is unlikely ever to be flawless. Generally, mistakes by humans are penalized less than mistakes by machines. However, CAs are frequently designed to be human-like, which raises the question: How does the perceived humanness of AI influence how users react to generative AI errors? We conducted a 2 × 2 experimental study with 210 participants, analyzing the influence of perceived humanness and error on perceived reliability, frustration, and anger along a cognitive and an affective pathway, drawing on algorithm aversion and the computers-are-social-actors paradigm. We demonstrate that perceived humanness leads to higher perceived reliability and reduces users’ anger and frustration caused by the error. We therefore recommend designing AI interfaces to be human-like to mitigate the negative emotions associated with AI errors.

Comments

04-Work

Dec 15th, 12:00 AM

Anger Against the Algorithm? – The Role of Mindful and Mindless Processing of Errors by Human-Like Generative AI