Management Information Systems Quarterly
Abstract
The growing integration of AI into professional domains amplifies the epistemic and relational stakes for domain experts—ranging from financial advisors to medical and legal professionals—who must entrust their clients to emerging AI-based counterparts. For these domain experts in triadic relationships, trusting AI is challenging, as AI’s inscrutability and autonomy provoke fundamental trust tensions of opacity vs. performance and replacement vs. complementarity. Drawing on an in-depth longitudinal case study at a large European bank that established a robo-advisor alongside domain experts and their clients, we investigated how domain experts navigate these trust tensions. We found that they interpret the AI-based counterpart in light of their relational networks (relational interpretation) and subsequently adapt these networks (relational adaptation) to cope with emerging vulnerabilities. Domain experts develop trust through three stages that engender specific sequences of relational interpretation and adaptation, leading to avoided, safeguarded, and, ultimately, accepted vulnerabilities. We contribute to research on trust in AI by moving beyond dyadic perspectives focused on AI and its users to uncover complex interrelationships between different trustors and specific trust tensions. Going beyond technology-centric accounts of trust-building processes, our longitudinal analysis offers a relational perspective on trusting beliefs and behaviors as a multistage process in which influences of the social context precede and shape trust in AI.