Management Information Systems Quarterly
Abstract
The rise of inappropriate content (e.g., misinformation, spam, and hate speech) has become a major concern for social media platforms. To address this challenge, platforms adopt various strategies to moderate the content on their websites. This study focuses on user bans, a common but controversial moderation strategy that suspends rule-violating users from further participation on a platform for a predetermined period. Specifically, we investigated the impact of user bans on banned users’ content-generating behavior, in terms of both quantity and quality. Leveraging reactance theory, we formalized hypotheses relating users’ behavioral reactions to this content moderation strategy. We implemented multiple empirical designs to analyze data from a major social media platform. Our results show that users provided more answers, on average, after their bans were lifted. In contrast, the quality of their content (measured by linguistic features and content appropriateness) decreased after the bans. Furthermore, we found that platform recognition, such as badges and recommendations, alleviated individuals’ reactance toward bans: users who had received such recognition posted less inappropriate content and improved the quality of their content after bans. Lastly, we explored the heterogeneous effects of user bans across different banning causes and repeated bans. Our research is among the first to evaluate the effectiveness of user bans and has important implications for content moderation on social media.