Abstract

Digital misinformation spreads through engagement-oriented social media channels at unprecedented scale and speed, overwhelming conventional verification mechanisms (Wei et al., 2022; Moravec et al., 2019). Lower barriers to content creation, algorithmic amplification, limited regulatory oversight, behavioural biases, and market-driven editorial incentives combine to create a polluted information ecosystem that undermines users’ ability to discern information veracity. To curb this diffusion and strengthen user resilience, fact-checking labels, content-level cues attached to questionable posts, have emerged as a pivotal design response. These interventions promote critical thinking by flagging dubious claims while safeguarding user autonomy and preserving unobstructed information flow (Nasery et al., 2023). Prior studies of the persuasiveness of these interventions yield mixed results, pointing to their situational effectiveness and to the underlying user perceptions that shape credibility heuristics. The literature also largely neglects age-based differences in media truth discernment and in responses to fact-checking, even though older adults are disproportionately exposed and highly susceptible to misinformation (Brashier & Schacter, 2020). Artificial intelligence (AI) systems play a dual role: on the one hand, they facilitate the creation of highly realistic, deceptive content; on the other, they contribute to misinformation detection and prevention through tools such as automated fact-checking. These capabilities may be perceived differently across age groups (Zhou et al., 2025).

To address these gaps, this study investigates two research questions: (RQ1) How do users’ credibility judgments vary when corrective information is attributed to expert panels versus AI? (RQ2) To what extent do these judgments influence belief revision and engagement patterns, accounting for cognitive dissonance and age? Grounded in bounded rationality (Simon, 1955), the Heuristic–Systematic Model (Eagly & Chaiken, 1993), and selective-engagement theory (Hess, 2014), we examine the pathways through which provenance cues, which invoke a diverse array of heuristics, are transformed into credibility judgments and behavioural intentions. The outcomes will integrate the ideal of mechanical objectivity (Sundar, 2008) with recent advances in algorithmic decision-making (Jussupow et al., 2024).

Guided by the literature (Bhattacherjee, 2023; Lutz et al., 2024), this study employs a sequential mixed-methods design. Phase 1 uses a 2 × 2 between-subjects online experiment in which younger and older adults evaluate health-related misinformation accompanied by corrective labels attributed to AI or to human experts; validated psychometric instruments capture credibility judgments and individual differences. Phase 2 replicates this factorial design in a NeuroIS laboratory, augmenting self-reports with electrodermal activity (affective arousal), eye-tracking (visual attention), and EEG (analytic versus intuitive processing). This phase will also use a mobile user-experience facility that extends participation to individuals with limited mobility. Triangulating these neurophysiological indicators with questionnaire data yields richer insight into decision pathways and offsets the inherent limitations of self-report measures (Dimoka et al., 2012).
Findings will advance understanding of situational fact-checking efficacy through a dual-route persuasion framework that integrates individual differences, extend theory on human–AI collaboration in combating misinformation, and inform age-responsive design principles. The resulting insights will support the design of transparent, trustworthy corrective systems, guiding platform engineers, public-health communicators, and policymakers in promoting user autonomy and mitigating misinformation effectively.
