AI in Business and Society
Paper Number
1592
Paper Type
Completed
Description
Many researchers and practitioners see artificial intelligence as a game changer compared to classical statistical models. However, some software providers engage in "AI washing," relabeling solutions that use simple statistical models as AI systems. At the same time, research on algorithm aversion has varied the labels for advisors unsystematically, treating labels such as "artificial intelligence" and "statistical model" as synonymous. This study investigates the effect of individual labels on users' actual advice utilization behavior. In two incentivized online within-subjects experiments on regression tasks, we find that labeling human advisors with labels that suggest higher expertise increases advice-taking, even though the content of the advice remains the same. In contrast, our results do not suggest such an expert effect for advice-taking from algorithms, despite differences in self-reported perception. These findings challenge the effectiveness of framing intelligent systems as AI-based systems and have important implications for both research and practice.
Recommended Citation
Leffrang, Dirk and Mueller, Oliver, "AI Washing: The Framing Effect of Labels on Algorithmic Advice Utilization" (2023). ICIS 2023 Proceedings. 10.
https://aisel.aisnet.org/icis2023/aiinbus/aiinbus/10