Location

Hilton Hawaiian Village, Honolulu, Hawaii

Event Website

https://hicss.hawaii.edu/

Start Date

January 3, 2024, 12:00 AM

End Date

January 6, 2024, 12:00 AM

Description

The rise of ChatGPT has revealed the potential of chatbots and other conversational AI tools to assist humans in fields such as law and healthcare, where the best human experts can engage in empathetic conversations. The belief is that if chatbots can connect with humans on a social and emotional level, they can reduce the cognitive effort required by humans to solve their problems, while increasing user satisfaction and trust. Although existing research has shown that empathy is crucial for designing human-AI conversations and their outcomes (effort, helpfulness, trust), it fails to separate the impact of empathy in language display from the AI's underlying "cognitive" abilities, like logical reasoning. To address this gap, this research aims to develop and empirically test a theory of empathy in the language displayed by conversational AI, explaining the relational outcomes of human-AI conversations in terms of cognitive effort, helpfulness, and trustworthiness. Using this theory, a chatbot is designed using syntactic and rhetorical linguistic elements that evoke empathy when providing legal services to tenants renting property. Through a randomized controlled experiment with a 2 × 3 factorial design, the effects of this empathetic chatbot on three relational outcomes in human-AI conversations are examined and compared to a non-empathetic chatbot that maintains the same logic. A baseline model utilizing non-conversational access to legal services via frequently asked questions ("FAQs") is also implemented, and the subjects' emotional state (anger) is manipulated as a moderating factor. The study involves 277 participants randomly assigned to one of six groups. The findings demonstrate the significance of both main and interaction effects on trustworthiness, usefulness, and cognitive effort. The results indicate that subtle changes in language syntax and style can have substantial implications for the outcomes of human-AI conversations. These findings contribute to the growing literature on conversational AI and have practical implications for the design of conversational and generative AI.


The Impact of Empathy Display in Language of Conversational AI: A Controlled Experiment with a Legal Chatbot

https://aisel.aisnet.org/hicss-57/cl/ethics/2