Abstract

Organizations are increasingly adopting artificial intelligence (AI)-based technologies to enhance efficiency and productivity. While AI can offer novel opportunities and benefits, such as improved decision-making, reduced human error, and the automation of mundane tasks, it can also bring workplace challenges (Lee et al., 2023). Although the challenges of adopting workplace AI can be technical, they are often social, behavioral, and affective (Gkinko & Elbanna, 2023). One key issue in building and sustaining support for AI technologies is trust. Trust is an important element in reducing user uncertainty and skepticism when developing and adopting workplace AI (Lukyanenko et al., 2022; Wong et al., 2024). While a growing body of research acknowledges the importance of understanding trust in AI, much of it focuses solely on the individual level, and more research is needed on how multiple stakeholders can influence user trust formation. Drawing on a multi-stakeholder lens, this study examines how various stakeholders can influence user trust and mistrust towards AI. Rather than examining workplace AI broadly, the study focuses on AI-enabled engagement through virtual assistants and chatbots, which are increasingly being extended from the home into the workplace. Virtual assistants include voice-based assistants (e.g., Apple Siri, Google Assistant, Amazon Alexa for Business) as well as text-based assistants (chatbots). A key point raised is that viewing trust through a multi-stakeholder lens can illustrate how the different stakeholders at play (e.g., government, organizations, vendors) influence user trust and mistrust formation in workplace AI. This lens is illustrated through 26 semi-structured interviews conducted in North American organizations across different levels and industry types.