Online extremism and radicalisation on social media (SM) are significant concerns for governments, SM companies, and society. The 2021 attack on the US Capitol illustrates the severity of extremism fuelled through SM communications. The literature suggests removing extremist messages from SM to limit online extremism. However, scholars argue that such interventions are ineffective in containing the threat of extremist messages on SM. This study draws on dual-process theory and reactance theory to conceptualise the factors that contribute to limiting online extremism. Our model proposes cognitive and socio-technical factors that shape how SM users respond to online extremist messages. The model is tested with Artificial Intelligence (AI)-based automated software agents (bots). The research contributes a novel understanding of social bots programmed as interventions against extremism.