Abstract

Deepfakes present a double-edged sword, offering both creative innovation and harmful deception. This study examines how individuals evaluate deepfakes, focusing on perceived risk and perceived benefit as pivotal outcomes. Drawing on Rational Choice Theory and the Hunt–Vitell model of ethical decision making, we develop a perception-centered framework in which ethical concern and social approval mediate the influence of deepfake orientation (benevolent vs. malevolent), and purpose (utilitarian vs. hedonic) moderates these pathways. Using a 2 × 2 factorial vignette experiment, we find that malevolent deepfakes decrease perceived benefits and increase perceived risks, with these effects transmitted primarily through ethical concern. Social approval also mediates effects on perceived benefits, but only under utilitarian conditions. These findings highlight how utilitarian framings sharpen deontological and teleological evaluations, whereas hedonic contexts diffuse them.