Paper Type

Complete

Description

ChatGPT demonstrates the advanced natural language capabilities of generative AI-based chatbots. Nevertheless, its sophisticated responses are prone to errors. Prior research has explored the effects of errors on the adoption of retrieval-based chatbots, but it has not yet accounted for the advanced natural language capabilities of generative-based chatbots. This study investigates the effects of generative-based chatbots and errors on chatbot adoption. To this end, a 2 (retrieval-based vs. generative-based chatbot) x 2 (error-free vs. error) experiment was conducted in a car recommendation setting. The study shows that errors are less likely to be noticed in the generative-based chatbot condition and that participants attribute higher levels of competence to the generative-based chatbot. Furthermore, these higher attributed levels of competence positively affect trust and rapport as drivers of chatbot adoption. The results provide theoretical contributions on the role of competence and relational elements in chatbot adoption research and highlight the risks of erroneous AI interactions.

Paper Number

1178

Comments

SIG ADIT

Top 25 Paper Badge
Aug 10th, 12:00 AM

Towards the Dark Side of AI Adoption: How Generative AI Extenuates the Perception of Chatbot Errors
