Paper Number
2305
Paper Type
Complete
Description
A growing number of organizations are implementing generative AI via chat interfaces, a technology commonly known as a conversational agent (CA), to support workers. Although AI is constantly improving, it is unlikely ever to be flawless. Generally, mistakes by humans are penalized less than mistakes by machines. However, CAs are frequently designed to be human-like, which raises the question: How does the perceived humanness of AI influence how users react to generative AI errors? We conducted a 2 × 2 experimental study with 210 participants, analyzing the influence of perceived humanness and error on perceived reliability, frustration, and anger along a cognitive and an affective pathway, grounded in algorithm aversion and the computers-are-social-actors paradigm. We demonstrate that perceived humanness leads to higher perceived reliability and reduces users' anger and frustration caused by the error. Therefore, we recommend designing AI interfaces to be human-like to reduce the negative emotions associated with AI errors.
Recommended Citation
Bellger, Mariam; Brendel, Benedikt; Hildebrandt, Fabian; and Lichtenberg, Sascha, "Anger Against the Algorithm? – The Role of Mindful and Mindless Processing of Errors by Human-Like Generative AI" (2024). ICIS 2024 Proceedings. 6.
https://aisel.aisnet.org/icis2024/digtech_fow/digtech_fow/6
Anger Against the Algorithm? – The Role of Mindful and Mindless Processing of Errors by Human-Like Generative AI
Comments
04-Work