Paper Type
Complete
Abstract
Deepfake technologies pose emerging cybersecurity threats by leveraging AI-generated content to deceive individuals. While prior research has examined susceptibility to text- and voice-based phishing, deepfakes enhance social engineering tactics through heightened realism. This study applies the Elaboration Likelihood Model (ELM) to investigate how key contextual factors—information quality, source credibility, disclaimers, and engagement—shape perceived trust in deepfakes. Using a 2×2 between-subjects experiment with 326 university students, we manipulated disclaimer presence and engagement levels within a deepfake video displayed on a simulated social media platform. Results indicate that disclaimers reduce both perceived information quality and source credibility, which in turn influence perceived trust, while engagement affects only information quality. These findings extend ELM to deepfake contexts, providing theoretical insights into persuasion in rich media environments. The results can inform the design of interventions to mitigate deepfake-driven phishing threats.
Paper Number
1735
Recommended Citation
Gal, Steven and Bulgurcu, Burcu, "Antecedents of Trust in Deepfakes: Insights from the Elaboration Likelihood Model" (2025). AMCIS 2025 Proceedings. 53.
https://aisel.aisnet.org/amcis2025/sig_sec/sig_sec/53
Antecedents of Trust in Deepfakes: Insights from the Elaboration Likelihood Model
Comments
SIGSEC