Towards resource inequities in catching the “dark side” of social media: A hateful meme classification framework for low-resource scenarios

Location
Hilton Hawaiian Village, Honolulu, Hawaii
Event Website
https://hicss.hawaii.edu/
Start Date
January 3, 2024
End Date
January 6, 2024
Description
The increasing prevalence of social media platforms has led to the emergence of multimodal information such as memes. Hateful memes pose a risk by perpetuating discrimination, reinforcing stereotypes, and causing online harassment, thereby marginalising certain groups and impeding efforts towards inclusivity and social justice. Detecting hateful memes is crucial for creating a safe and equitable online environment. However, existing research relies heavily on large, complex deep learning models that require substantial computational resources for training. This creates a barrier for under-resourced researchers and small companies, limiting their participation in hateful information detection and exacerbating inequalities in the field of artificial intelligence. This paper tackles the problem by proposing a low-resource-oriented framework for hateful meme classification that addresses limitations in training data, computing power, and modality integration. Our approach runs faster with reduced computational requirements, while maintaining 94.7% accuracy, comparable to the existing highest-scoring model.
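
For readers unfamiliar with low-resource multimodal classification, the sketch below illustrates one common pattern that matches the abstract's description in spirit: frozen pretrained encoders supply image and text embeddings, and only a small fusion head is trained. This is an illustration only, not the authors' published framework; the CLIP checkpoint name, embedding dimension, head architecture, and the commented-out dataloader are assumptions made for the example.

# Minimal sketch of a low-resource hateful-meme classifier (illustrative, not the paper's method).
# Frozen pretrained encoders keep compute low; only the small fusion head is trained.
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPProcessor  # assumed available

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen backbone: no gradients flow through it, so training cost stays small.
backbone = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
for p in backbone.parameters():
    p.requires_grad = False

class FusionHead(nn.Module):
    """Tiny trainable head that fuses image and text embeddings (hypothetical architecture)."""
    def __init__(self, dim=512):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(2 * dim, 256), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(256, 2),  # hateful vs. non-hateful
        )

    def forward(self, image_emb, text_emb):
        return self.classifier(torch.cat([image_emb, text_emb], dim=-1))

head = FusionHead().to(device)

@torch.no_grad()
def embed(images, texts):
    """Encode a batch of meme images and their accompanying text with the frozen backbone."""
    inputs = processor(text=texts, images=images, return_tensors="pt",
                       padding=True, truncation=True).to(device)
    img = backbone.get_image_features(pixel_values=inputs["pixel_values"])
    txt = backbone.get_text_features(input_ids=inputs["input_ids"],
                                     attention_mask=inputs["attention_mask"])
    return img, txt

# Training loop sketch: only the head's parameters are updated.
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
# for images, texts, labels in dataloader:  # dataloader is a placeholder
#     img_emb, txt_emb = embed(images, texts)
#     loss = loss_fn(head(img_emb, txt_emb), labels.to(device))
#     optimizer.zero_grad(); loss.backward(); optimizer.step()

Because the backbone is never updated, the embeddings can also be precomputed once and cached, which further reduces the hardware needed for training in a low-resource setting.
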
Recommended Citation
Li, Yuming; Chan, Johnny; Peko, Gabrielle; and Sundaram, David, "Towards resource inequities in catching the “dark side” of social media: A hateful meme classification framework for low-resource scenarios" (2024). Hawaii International Conference on System Sciences 2024 (HICSS-57). 3.
https://aisel.aisnet.org/hicss-57/sj/social_media/3