Abstract
We examine how generative AI image creation introduces information security vulnerabilities by exploiting cognitive processes (processing fluency, cognitive absorption, and perceived image–text congruence) that threat actors can leverage to enhance the credibility of misinformation. This social engineering attack vector operates at the content-creation stage, where creators' cognitive biases become exploitable weaknesses. In a study (N = 404) using a realistic Instagram-style interface, participants created images for climate-related posts. Results reveal a dual-pathway model: generative processing fluency reduced perceived image–text congruence, whereas cognitive absorption increased it. The two pathways operated independently but influenced truth discernment differently. Perceived congruence acted as a credibility heuristic, increasing belief in both true and false posts without improving discrimination. OpenAI users showed stronger perceived congruence and weaker fluency effects than Ideogram users, suggesting greater vulnerability to misinformation. Laboratory settings dampened some of the patterns observed in the online setting. Counterintuitively, more GenAI interaction fostered critical evaluation, whereas immersive cognitive absorption impaired accuracy discrimination. These patterns inform platform design (encouraging deliberate iteration) and intervention targeting (absorbed users who are prone to conflating processing ease with credibility).
Recommended Citation
Akinyemi, John Patrick; Jarvenpaa, Sirkka L.; and Gunda, Thushara, "When Fluency Misleads: Congruence, Absorption, and Truth Discernment in GenAI Image Creation" (2025). WISP 2025 Proceedings. 16.
https://aisel.aisnet.org/wisp2025/16