Abstract

Content moderation on social media platforms such as Facebook plays a critical role in curating user-generated content by limiting harmful, misleading, or inappropriate material. Despite significant investments in both human and algorithmic moderation systems, platforms continue to struggle to manage the vast and complex flow of content effectively and often face criticism from various user groups for perceived inconsistencies and failures. This paper provides a systems-theory analysis of Facebook’s content moderation system, with a particular focus on its sociotechnical nature, in which human and non-human actors interact to moderate content. Using the COVID-19 pandemic as a revelatory case study, this research explores the disruptions to Facebook’s moderation processes caused by external shocks and identifies key conflicts within the system. Through the lens of systems science, the study reveals several points of tension, including conflicting goals among stakeholders, breakdowns in communication, and the challenges posed by both human and algorithmic moderators. By examining these conflicts, this paper offers insights into why content moderation remains a significant challenge for social media platforms and proposes recommendations for improving moderation efforts. This study contributes to the broader discourse on content moderation by demonstrating the value of applying systems theory to analyze and address the complexities of these sociotechnical systems.

DOI

10.17705/1jais.00918
