Paper Type

Complete

Abstract

This study addresses a critical gap in understanding why users exhibit reduced oversight when interacting with generative AI systems despite those systems' known limitations. While existing research documents AI errors across domains, theoretical frameworks explaining the underlying psychological mechanisms remain underdeveloped. We propose a comprehensive model of AI complacency that integrates bounded rationality constraints with dual-process models of information processing. The framework explains how Perceived AI Reliability triggers a shift from systematic to heuristic information processing, which in turn reduces vigilance and impairs task performance. The relationship between Perceived AI Reliability and information processing is moderated by three factors derived from bounded rationality theory: knowledge limitations, cognitive processing capabilities, and time constraints. The study contributes to the literature by identifying the psychological mechanisms underlying AI complacency, explaining processing shifts in human-AI interaction, positioning vigilance as a critical mediating mechanism, and introducing the Vigilance-Reliability Matrix as a tool for identifying distinct interaction patterns.

Paper Number

2240

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/2240

Comments

SOCCOMP

Aug 15th, 12:00 AM

The AI Complacency Model: Integrating Bounded Rationality and Information Processing
