Paper Number: ECIS2025-1219
Paper Type: CRP
Abstract
Recent developments in artificial intelligence (AI) have made AI increasingly useful in helping humans solve problems. Yet, it remains unclear how to design human-AI interaction to support problem-solving. This mixed-methods study investigates how humans respond to help from AI in the Remote Associates Test (RAT). In this problem-solving test, humans answer word-association questions and then either request help from the AI (self-invoked) or receive it proactively (AI-invoked) before answering the same questions again. Our experimental results show that AI-invoked (vs. self-invoked) help causes a greater number of response changes and more correct responses. However, AI-invoked (vs. self-invoked) help also leads to over-reliance on AI, as humans are more likely to devalue their initially correct responses. In subsequent interviews, we identify the factors that contribute to over-reliance on AI-invoked help. This study contributes to the understanding of over-reliance in human-computer interaction (HCI) and provides insights for designing effective HCI.
Recommended Citation
Goutier, Marc; Diebel, Christopher; Adam, Martin; and Benlian, Alexander, "Humans Over-rely On Help From Artificial Intelligence In Problem-Solving" (2025). ECIS 2025 Proceedings. 5.
https://aisel.aisnet.org/ecis2025/hci/hci/5
Humans Over-rely On Help From Artificial Intelligence In Problem-Solving