Paper Type
ERF
Abstract
The rise of deepfake technology has introduced significant privacy, ethical, and security concerns, particularly for individuals who are considering creating digital content. Prior research has examined the direct victims of deepfakes, but little attention has been paid to how the perceived threat of deepfakes discourages new creators from participating on digital platforms and limits the diversity of knowledge shared online. This study applies the Extended Parallel Process Model (EPPM) to examine how threat appraisal (perceived severity and susceptibility), coping appraisal (self-efficacy and response efficacy), and fear influence aspiring content creators’ intentions to adopt anonymity or pseudonymity, or to withdraw from digital participation. Using a mixed-methods approach, we will conduct quantitative surveys and qualitative interviews to analyze these behavioral responses. The findings will provide insights into digital self-censorship, misinformation deterrence, and platform policies, contributing to discussions on cybersecurity and digital trust in the era of generative AI.
Paper Number
1502
Recommended Citation
Abdolhossein Khani, Ghazal and Baham, Corey, "Deepfakes Threats and the Deterrence of Aspiring Content Creators" (2025). AMCIS 2025 Proceedings. 50.
https://aisel.aisnet.org/amcis2025/sig_sec/sig_sec/50
Deepfakes Threats and the Deterrence of Aspiring Content Creators
Comments
SIGSEC