Paper Number
ICIS2025-1417
Paper Type
Complete
Abstract
Ever more frequent and intense collaboration with agents based on Large Language Models (LLMs) at work and in daily life raises the question of whether this affects how humans view and treat each other. We conducted a randomized laboratory experiment with 158 participants who collaborated with either a human or an LLM-based assistant to solve a complex language task. Afterwards, we measured whether the type of collaborator influenced participants’ prosocial attitudes (through implicit association tests) and behavior (in dictator games). Interacting with an LLM-based assistant led to a reduction of prosociality, but only for participants who identified as female. A mediation analysis suggests that these findings are due to an erosion of trust in the LLM-based assistant's benevolence in the female subsample. Such spillover effects of collaborating with AI on interactions between humans must feature in the evaluation of the societal consequences of artificial intelligence and warrant further research.
Recommended Citation
Pisch, Frank; Rossmann, Vitus; Jussupow, Ekaterina; Ingendahl, Franziska; and Undorf, Monika, "Collaborating with LLM-based Chatbots Can Reduce Prosociality" (2025). ICIS 2025 Proceedings. 12.
https://aisel.aisnet.org/icis2025/hti/hti/12
Collaborating with LLM-based Chatbots Can Reduce Prosociality
Comments
15-Interaction