Abstract

Trust is a complex, multidisciplinary construct with roots in diverse areas including psychology, management, and information systems. Trust has been defined as “a risky choice of making oneself dependent on the actions of another in a situation of uncertainty, based upon some expectation of whether the other will act in a benevolent fashion despite an opportunity to betray” (Thielmann & Hilbig, 2015, p. 251). Most research in this domain focuses on trust among humans. However, as personal and professional functions are increasingly supported by intelligent capabilities, individuals are expected to trust artificial intelligence (AI) based systems to make decisions and take actions in situations in which they may have important stakes. This raises important questions about trust in AI-based decisions and systems. Do we trust human output differently than AI output? Are individuals even aware that intelligent systems are driving their decisions? And at what point are the stakes high enough that decision makers would be unwilling, or willing, to give up control to AI-based systems? Across a range of topics, including operational systems, forecasting song popularity, and the success of romantic partner matching, trust in AI has been found to be higher than trust in humans. Laypersons generally trust AI output more unless they themselves are experts in that area, in which case they trust their own judgment. However, when experts make choices outside of the advice provided by AI, their accuracy decreases. Trust in AI is not uniform; it is lower among those who lack a strong background in quantitative concepts and among experts who simply may not be open to the advice. Trust in AI systems also varies based on the performance, process, or purpose of the automated system and the operator’s self-efficacy in operating it manually. This raises further questions: under what contexts do decision makers comfortably rely on AI-based systems to guide their decisions? Does this change if decision makers cannot sufficiently distinguish between an automated and an AI-based system? Our study examines these questions through the lens of system use and disuse and individual trust in AI-based systems. Participants will be presented with vignettes of situations in which they may have high or low stakes. For each scenario, they will respond to survey items that explore their perceptions of their stakes in the outcome, their trust in allowing an AI-based system to drive the outcome, and their willingness to give up control. Implications of the study include recommendations for integrating trust elements into systems design and development, including process and outcome transparency. Expected outcomes also include insights into system use and disuse as linked with trust. Perhaps most importantly, the study will explore how best to educate users uniformly on the benefits and caveats of AI-based decision making.

REFERENCES

1. Thielmann, I., & Hilbig, B. E. (2015). Trust: An integrative review from a person–situation perspective. Review of General Psychology, 19(3), 249–277.

