Paper Type
Complete
Abstract
Despite the excitement around AI as a game-changing technology, an AI productivity paradox exists wherein scientists report a gap between AI advancements and actual gains in labor productivity. Therefore, this study attempts to understand how the AI productivity paradox could be explained through the positive and negative valences associated with trust in AI at the individual employee level. Moreover, this study proposes that companies can employ proactive measures to alleviate employee concerns and enhance AI adoption to unlock productivity bottlenecks. The data for this study were collected from MTurk respondents using an experimental survey design. The findings demonstrate that open communication is pivotal in building trust and that trust in AI can exhibit both system-like and human-like attributes. Moreover, trust in AI may create ambivalence by simultaneously advancing intentions and triggering reactance, with human-like trust having a slightly greater impact on intentions. Theoretical, practical, and social implications are discussed.
Paper Number
1148
Recommended Citation
Bansal, Gaurav; Axelton, Zhuoli; and Agarwal, Pooja, "Dual Perspectives: Navigating the Ambiguity of Trust in AI" (2024). AMCIS 2024 Proceedings. 3.
https://aisel.aisnet.org/amcis2024/sig_osra/sig_osra/3
Dual Perspectives: Navigating the Ambiguity of Trust in AI
Comments
SIGOSRA