Paper Type
Short
Paper Number
PACIS2025-1371
Description
Short video platforms such as TikTok have gained immense popularity, but they also host harmful content, such as violence, that threatens minors' mental health. This paper proposes a Child-Attentive Multimodal Multitask Learning (CAMML) method for accurate detection of violent short videos. Unlike existing methods, which neglect textual cues, correlations with other types of harmful content, and children's unique cognitive characteristics, CAMML integrates visual, auditory, and textual modalities. It features a child-specific attention mechanism and a multi-task learning approach that jointly trains violent-video classification alongside related tasks such as detecting unpleasant and obscene content. Experiments on the MOB dataset, which covers malicious and benign content in children's videos, demonstrate CAMML's superior performance, achieving an AUC of 90.02%. The method provides a robust solution for filtering violent content, fostering a safer online environment for children.
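As a rough illustration of the multi-task idea the abstract describes (not the paper's actual implementation), the sketch below fuses hypothetical visual, audio, and text feature vectors into a shared representation and scores it with task-specific heads under a joint loss. All dimensions, weights, the concatenation-based fusion, and the equal task weighting are assumptions made for this sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical pre-extracted per-modality feature vectors (sizes are assumptions).
visual = rng.normal(size=8)
audio = rng.normal(size=4)
text = rng.normal(size=6)

# Simple concatenation fusion of the three modalities (an assumption; the
# paper's child-specific attention mechanism would weight these differently).
fused = np.concatenate([visual, audio, text])

# Shared encoder mapping the fused features to a joint representation.
W_shared = rng.normal(size=(16, fused.size)) * 0.1
h = np.tanh(W_shared @ fused)

# One binary classification head per task: the main violent-content task
# plus the auxiliary tasks named in the abstract.
tasks = ["violent", "unpleasant", "obscene"]
heads = {t: rng.normal(size=16) * 0.1 for t in tasks}
labels = {"violent": 1.0, "unpleasant": 0.0, "obscene": 0.0}  # toy labels

# Per-task binary cross-entropy, summed into an equally weighted joint loss.
losses = {}
for t in tasks:
    p = sigmoid(heads[t] @ h)
    y = labels[t]
    losses[t] = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

joint_loss = sum(losses.values())
```

In a real multi-task setup, gradients of `joint_loss` would update the shared encoder from all three tasks at once, which is the mechanism by which auxiliary tasks such as unpleasant- and obscene-content detection can improve violent-video classification.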
Recommended Citation
Zhao, Chenxing; Yang, Liang; Kuang, Junwei; and Yan, Zhijun, "Protecting Children from Violent Short Videos: A Child-Attentive Multimodal Multitask Learning Approach" (2025). PACIS 2025 Proceedings. 19.
https://aisel.aisnet.org/pacis2025/aiandml/aiandml/19
Protecting Children from Violent Short Videos: A Child-Attentive Multimodal Multitask Learning Approach
Comments
AI ML