Paper Type

Complete

Abstract

Deepfake technologies pose emerging cybersecurity threats by leveraging AI-generated content to deceive individuals. While prior research has examined susceptibility to text- and voice-based phishing, deepfakes enhance social engineering tactics through heightened realism. This study applies the Elaboration Likelihood Model (ELM) to investigate how key contextual factors—information quality, source credibility, disclaimers, and engagement—shape perceived trust in deepfakes. Using a 2×2 between-subjects experiment with 326 university students, we manipulated disclaimer presence and engagement levels within a deepfake video displayed on a simulated social media platform. Results indicate that disclaimers reduce both perceived information quality and source credibility, which in turn influence perceived trust, while engagement affects only information quality. These findings extend ELM to deepfake contexts, providing theoretical insights into persuasion in rich media environments. The results can inform the design of interventions to mitigate deepfake-driven phishing threats.

Paper Number

1735

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1735

Comments

SIGSEC

Aug 15th, 12:00 AM

Antecedents of Trust in Deepfakes: Insights from the Elaboration Likelihood Model