Seeing Isn't Believing: AI Disclosure Labels and Sharing Behavior in the Era of Deepfakes

Paper Type
Short
Abstract
User-generated content platforms (UGCPs) have amplified the rapid spread of deepfakes and misinformation, raising concerns about their impact on societal trust. While AI-generated content labels aim to mitigate these risks, their effectiveness remains uncertain. This study examines how AI disclosure labels influence users’ intention to share deepfake videos. Drawing on Truth-Default Theory (TDT), we investigate how content and context realism shape perceived believability and subsequent engagement with deepfakes. We will test whether AI labeling weakens the realism-believability relationship, thereby disrupting users’ truth-default state and reducing engagement. Using a mixed-design experiment, we assess whether AI labels effectively limit interaction with deepfakes or whether realism overrides their impact. Findings will inform misinformation mitigation strategies, platform policies, and the design of effective AI disclosure labels.
Recommended Citation
Akinyemi, John-Patrick; Chew, Shao Liam; Geeling, Sharon; Heuer, Marvin; Wang, Qinhui; Hassan, Nik; and Kude, Thomas, "Seeing Isn't Believing: AI Disclosure Labels and Sharing Behavior in the Era of Deepfakes" (2024). ICIS 2024 Proceedings. 3.
https://aisel.aisnet.org/icis2024/paperathon/paperathon/3