Paper Type

Complete

Abstract

Despite the excitement around AI as a game-changing technology, an AI productivity paradox exists wherein scientists report a gap between AI advancements and actual gains in labor productivity. This study therefore attempts to understand how the AI productivity paradox could be explained through the positive and negative valences associated with trust in AI at the individual employee level. Moreover, this study proposes that companies can employ proactive measures to alleviate employee concerns and enhance AI adoption to unlock productivity bottlenecks. The data for this study were collected from MTurk respondents using an experimental survey design. The findings demonstrate that open communication is pivotal in building trust and that trust in AI can exhibit both system-like and human-like attributes. Moreover, trust in AI may create ambivalence by simultaneously advancing intentions and triggering reactance, with human-like trust having a slightly greater impact on intentions. Theoretical, practical, and social implications are discussed.

Paper Number

1148

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2024/papers/1148

Comments

SIGOSRA

Aug 16th, 12:00 AM

Dual Perspectives: Navigating the Ambiguity of Trust in AI

