Paper Number
ICIS2025-1880
Paper Type
Complete
Abstract
Deepfakes—AI-generated synthetic media—pose growing challenges for information integrity, yet little is known about how professionals across sectors assess their impact and the role of detection tools. This study uses an action design research (ADR) approach to investigate how practitioners in journalism, security, finance, and related fields evaluate deepfakes and define requirements for effective detection systems. Through expert interviews, we identify sector-specific perspectives and a shared concern over black-box tools lacking transparency. Our findings underscore the need for user-centered, context-sensitive, and trustworthy system design. We contribute to human-AI interaction and IS research by (1) offering empirical insights into practitioner perceptions, (2) outlining design principles for multimodal detection tools, and (3) showing how ADR can inform socio-technical systems in emerging AI domains. This work bridges technical capabilities with real-world needs to support the evaluation of digital content authenticity.
Recommended Citation
Bezzaoui, Isabel; Jarvers, Louis; Fegert, Jonas; and Weinhardt, Christof, "Designing Deepfake Detection Systems: Practitioner Requirements Across Sectors" (2025). ICIS 2025 Proceedings. 20.
https://aisel.aisnet.org/icis2025/hti/hti/20
Designing Deepfake Detection Systems: Practitioner Requirements Across Sectors
Comments
15-Interaction