The rapid proliferation of digital technologies has reshaped how information is accessed and disseminated, creating unique vulnerabilities to misinformation and online scams for older adults. Despite advances in AI-driven security tools and human-in-the-loop systems, significant gaps remain in understanding how this demographic interacts with digital platforms. Their susceptibility to online scams, heightened by cognitive decline, limited security awareness, and a persistent digital divide, has led to disproportionately high financial losses. This study investigates the factors that influence older adults' responses to misinformation and how AI security aid affects trust dynamics and decision-making. Employing a multi-theoretical framework, we focus on bounded rationality in decision-making and build on the Extended Parallel Process Model, examining coping mechanisms informed by cognitive appraisals of threat and efficacy. Selective Engagement Theory and Prospect Theory further guide our views on strategic mental resource allocation and risk behavior, shaping how misinformation is perceived and acted upon. Focusing on an older adult population, this study addresses two questions: (1) How does AI security aid, designed to verify information, affect trust and coping responses in an online misinformation context, and how does this compare with human fact-checkers? (2) How do cognitive and emotional coping efforts vary with content framing (gain vs. loss)? Our model, grounded in the literature, includes the individual-level variables of perceived risk, AI perception, and digital literacy to assess their influence on adaptive and maladaptive behaviors. Additionally, we investigate how framing and cognitive biases, particularly motivational drives for acquisition and defense, shape decision-making and threat responses. This study employs a quantitative scenario-based experimental design using a 2×2 factorial framework to manipulate external assistance (AI security aid vs. 
human fact-checkers) and framing (gain vs. loss). To enhance ecological validity, the study will be presented to participants as research on digital media usability. NeuroIS techniques (EEG, EDA, facial expression analysis, and eye-tracking) will unobtrusively measure user engagement, emotional arousal, and perceived risk during simulated threat encounters. Post-interaction surveys capture users' perceptions and individual factors. By triangulating neurophysiological data with survey responses, the study gains holistic insights while mitigating self-report biases. The study offers insights for both academic and practitioner audiences. By identifying older adults' specific vulnerabilities to online misinformation, it addresses gaps in the literature and informs the design of educational programs on misinformation trends and AI. The framework offers practical guidance for developing tools that enhance content validation and strengthen resilience against misinformation, supporting older adults' digital independence. It also highlights how content framing can be strategically used to counter cognitive biases, informing educational and policy initiatives designed for aging populations.