Smishing, a portmanteau of “SMS” and “phishing”, refers to fraudulent practices that use text messages to lure individuals into divulging personal information, such as passwords or credit card details. Smishing has led to significant financial losses for individuals and organizations and has eroded trust in digital communication channels (Yeboah-Boateng & Amanor, 2014). Unlike traditional phishing attacks that rely on email, smishing messages are particularly deceptive because they exploit the personal and immediate nature of SMS, often catching recipients off guard. Central to the efficacy of smishing is the use of sophisticated persuasion techniques that prey on human vulnerabilities (Cialdini & Cialdini, 2007; Frauenstein & Flowerday, 2020). These techniques, while diverse, share a common goal: manipulating the recipient into acting against their best interests.

Despite the growing prevalence of smishing, research on detecting and preventing it remains at an early stage. This gap is partly attributable to the lack of coded public smishing datasets, which are crucial for advancing both theoretical understanding and practical countermeasures. Our prior study responded to this challenge by coding 934 smishing messages according to Persuasion Theory. The present study leverages that coded dataset to develop a machine-learning-based text analysis method that accurately identifies these persuasion techniques. Furthermore, to understand the factors that make persuasion techniques effective, this study employs explainable AI (XAI) to determine which message segments most influence the identification of specific persuasion techniques. XAI offers a range of methodologies designed to unravel the complexities of AI model operations and explain how AI models reach their decisions (Adadi & Berrada, 2018).
Applying XAI brings transparency to our model's detection process and offers users insight into how and why certain messages are identified as deceptive. This transparency strengthens the effectiveness and trustworthiness of the model: arming users with the knowledge to better understand deceptive messages can increase their trust in the detection results. Further, to evaluate the effectiveness of XAI in improving user understanding of deceptive messages and reducing phishing susceptibility, we designed a survey study. Participants will first judge whether each of ten smishing messages, presented one at a time in random order, is deceptive. We will then present the XAI analysis of each message and allow participants to reconsider their decisions. Finally, we will gather quantitative and qualitative feedback on the extent to which participants believe XAI helped them identify and avoid phishing threats.