Paper Type

ERF

Abstract

The rise of deepfake technology has introduced significant privacy, ethical, and security concerns, particularly for prospective digital content creators. Prior research has examined the direct victims of deepfakes, but little attention has been paid to how the perceived threat of deepfakes discourages new creators from participating on digital platforms and limits the diversity of knowledge shared online. This study applies the Extended Parallel Process Model (EPPM) to examine how threat appraisal (perceived severity and susceptibility), coping appraisal (self-efficacy and response efficacy), and fear influence aspiring content creators’ intentions to adopt anonymity, pseudonymity, or withdraw from digital participation. Using a mixed-methods approach, we will conduct quantitative surveys and qualitative interviews to analyze these behavioral responses. The findings will provide insights into digital self-censorship, misinformation deterrence, and platform policy, contributing to discussions of cybersecurity and digital trust in the era of generative AI.

Paper Number

1502

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1502

Comments

SIGSEC

Aug 15th, 12:00 AM

Deepfake Threats and the Deterrence of Aspiring Content Creators
