ACIS 2024 Proceedings
Abstract
In today's organizational landscape, the workforce is expanding beyond human members to include intelligent systems, broadening the very definition of a workforce. As Artificial Intelligence (AI) becomes increasingly integrated into office environments, understanding the factors that influence office workers' (OWs) trust in AI-enabled applications (AI-EAs) is essential. This research focuses on reducing uncertainty by exploring the roles of trust and innovative organizational culture (IOC) in the adoption of AI technologies. Guided by the Uncanny Valley Effect (UVE), the Computers As Functional Actors (CAFA) paradigm, the Computers As Social Actors (CASA) model, and Uncertainty Reduction Theory (URT), this study examines how affective and cognitive trust shape OWs' trust (TRT) in and willingness to use AI-EAs. Affective trust, rooted in the CASA model, arises from AI's emotional and social cues, which foster familiarity and social connection by mimicking human interaction; this form of trust reduces social uncertainty and promotes emotional comfort. Cognitive trust, grounded in the CAFA paradigm, stems from AI's transparency, reliability, and performance, emphasizing system-like characteristics that ensure precision and consistency; this form of trust reduces technical uncertainty and builds confidence in AI's functionality. The research also explores how a supportive IOC influences OWs' willingness to adopt AI technologies: by creating an atmosphere that encourages experimentation, imagination, and openness to new ideas, IOC plays a crucial role in reducing uncertainty. By examining the interplay between system-like (SL) and human-like (HL) features, alongside the UVE and the influence of IOC, this study aims to provide comprehensive insights into the factors affecting OWs' willingness to use AI-EAs.
These findings offer valuable guidance for companies, technology designers, and human resources managers in designing and implementing AI-EA solutions that balance SL and HL features to optimize user trust and acceptance. Understanding the interplay between IOC and trust in AI is crucial for effective AI adoption and utilization, ultimately enhancing organizational performance and innovation.
Recommended Citation
Singh, Dheeraj and Chandra, Shalini, "Mitigating Uncertainty and Enhancing Trust in AI: Harmonizing Human-Like, System-Like Features with Innovative Organizational Culture" (2024). ACIS 2024 Proceedings. 22.
https://aisel.aisnet.org/acis2024/22