Abstract

The proliferation of harmful content on social media, such as hate speech, extremist propaganda, and radical ideology, fosters harmful attitudes that threaten social stability. Existing moderation mechanisms are predominantly reactive, addressing harmful content only after it has been disseminated, which limits their effectiveness. This paper proposes a preventive approach by adapting a prebunking intervention that strengthens individuals’ cognitive immunity. We aimed to induce a future-oriented mindset as a protective cognitive frame against harmful authoritarian content. To evaluate this approach, we conducted two online experiments testing the effectiveness of a future-oriented prebunking intervention against authoritarian fringe content, compared with netiquette education and control conditions. Results indicate that prebunking significantly reduces authoritarian attitudes and increases intentions to report harmful content, outperforming netiquette education. Our findings contribute to Information Systems research by expanding current, reactive content moderation mechanisms with proactive prebunking and by advancing cognitive immunology through the incorporation of a future-oriented mindset and the application of prebunking to harmful authoritarian content.
