Paper Type
Complete
Abstract
This study addresses a critical gap in understanding why users exhibit reduced oversight when interacting with generative AI systems despite their known limitations. While existing research documents AI errors across domains, theoretical frameworks explaining the underlying psychological mechanisms remain underdeveloped. We propose a comprehensive model of AI complacency by integrating bounded rationality constraints with dual-process information processing models. Our framework demonstrates how Perceived AI Reliability triggers a shift from systematic to heuristic information processing, subsequently reducing vigilance and impairing task performance. The relationship between Perceived AI Reliability and information processing is moderated by three key factors derived from bounded rationality theory: knowledge limitations, cognitive processing capabilities, and time constraints. This study contributes to the literature by identifying psychological mechanisms underlying AI complacency, explaining the processing shifts in human-AI interaction, positioning vigilance as a critical mediating mechanism, and introducing the Vigilance-Reliability Matrix as a tool for identifying different interaction patterns.
Paper Number
2240
Recommended Citation
Nosrati, Saeed and Motaghi, Hamed, "The AI Complacency Model: Integrating Bounded Rationality and Information Processing" (2025). AMCIS 2025 Proceedings. 8.
https://aisel.aisnet.org/amcis2025/social_comput/social_comput/8
The AI Complacency Model: Integrating Bounded Rationality and Information Processing