Paper Type
Complete
Description
ChatGPT demonstrates the advanced natural language capabilities of generative AI-based chatbots. Nevertheless, its sophisticated responses are prone to errors. Prior research has explored the effects of errors on the adoption of retrieval-based chatbots but has not incorporated the advanced natural language capabilities of generative-based chatbots. This study investigates the effects of generative-based chatbots and errors on chatbot adoption. To this end, a 2 (retrieval-based vs. generative-based chatbot) x 2 (error-free vs. error) experiment in a car recommendation setting was conducted. The study shows that errors are less likely to be noticed in the generative-based chatbot condition and that participants attribute higher levels of competence to the generative-based chatbot. Furthermore, these higher attributed levels of competence positively affect trust and rapport as drivers of chatbot adoption. The results offer theoretical contributions on the role of competence and relational elements in chatbot adoption research and highlight the risks of erroneous AI interactions.
Paper Number
1178
Recommended Citation
Pan, Yan and Pawlik, Phoebe, "Towards the Dark Side of AI Adoption: How Generative AI Extenuates the Perception of Chatbot Errors" (2023). AMCIS 2023 Proceedings. 4.
https://aisel.aisnet.org/amcis2023/sig_adit/sig_adit/4
Towards the Dark Side of AI Adoption: How Generative AI Extenuates the Perception of Chatbot Errors
Comments
SIG ADIT