The failure of human-AI augmentation is a common problem, usually attributed to poor AI design and humans' inability to identify appropriate AI suggestions; yet existing interventions, such as explainable AI, have not been effective in solving it. We propose that a crucial factor contributing to the failure of human-AI augmentation is the withholding of human effort. Moreover, high expectations for AI performance, while generally positive for AI adoption, may undermine human-AI team performance by reducing human involvement in the task. Drawing on the Collective Effort Model (CEM), we explore how expectations for AI performance, perceived indispensability, and task meaningfulness influence human effort and human-AI team performance. We plan to conduct laboratory experiments on image classification and idea generation to test our hypotheses. We expect this work to deepen the understanding of human-AI collaboration and of social loafing in human-AI teams.


Paper Number 1162; Track HCI; Short Paper