Abstract
Artificial Intelligence (AI) systems are increasingly embedded in high-impact decision-making contexts such as healthcare, hiring, and financial services. While much of the literature has focused on explainability and fairness, fewer studies have examined how users respond when AI outputs are inaccurate. Given that real-world AI is probabilistic and often imperfect, understanding how output inaccuracy affects trust and continuance intentions is critical for designing robust and trustworthy systems. Drawing on trust calibration theory (Lee & See, 2004) and information systems continuance models (Bhattacherjee, 2001), this study investigates how users adjust their trust after experiencing incorrect AI outputs and whether transparency mechanisms such as confidence scores or uncertainty indicators can mitigate trust erosion. We theorize that output inaccuracy undermines perceived system reliability, reducing both trust and intention to continue using the AI. However, if the system acknowledges its limitations or flags uncertain results, users may calibrate their trust more appropriately and remain engaged. We propose a 2×2 between-subjects experiment in which participants interact with a decision-support AI system in a simulated task (i.e., evaluating résumés). The two factors are (1) AI Output Accuracy (accurate vs. inaccurate) and (2) Transparency Mechanism (present vs. absent). Dependent variables include trust, perceived reliability, and continuance intention. The study will assess changes in trust over time and explore whether users override or defer to AI recommendations after errors occur. Control variables will include individual characteristics such as algorithm aversion and propensity to trust. This research offers three contributions. First, it extends the trust-in-AI literature by examining user responses to inaccuracy. Second, it examines transparency mechanisms that may buffer the negative effects of errors on trust. Third, it provides practical guidance for AI designers seeking to build systems that retain user trust despite inevitable failures in complex, real-world AI applications.
Recommended Citation
Ziegelmayer, Jennifer L., "When AI Gets It Wrong: Investigating the Effects of Output Inaccuracy on User Trust and Continuance Intentions" (2025). AMCIS 2025 TREOs. 177.
https://aisel.aisnet.org/treos_amcis2025/177