Abstract
Enhancing User Adoption of Intrusion Detection and Prevention Systems (IDPS) through Explainable Artificial Intelligence: An Integration of User Trust into the ECT and UTAUT2 Frameworks

By Akeem A. Bakare
PhD Candidate, Department of Information Science and Systems, Morgan State University
Akeem.bakare@morgan.edu

The rapid advancement of digital technologies has dramatically increased the complexity and sophistication of cyber threats, rendering traditional security measures insufficient and elevating the importance of Intrusion Detection and Prevention Systems (IDPS) enhanced with Machine Learning or Artificial Intelligence (ML/AI). As cyberattacks grow more dynamic and unpredictable, static defenses alone are no longer sufficient; ML-powered IDPS provide a vital, proactive layer of defense that helps organizations stay ahead of evolving threats. Despite their technical advantages, the widespread adoption of ML-enhanced IDPS is impeded by the intrinsic opacity of machine learning algorithms, which often function as "black boxes," providing minimal insight into their internal decision-making processes. This lack of interpretability undermines user trust, a critical determinant of operational reliance on such advanced security technologies. Recent progress in Explainable Artificial Intelligence (XAI) aims to mitigate these challenges by improving the transparency and comprehensibility of ML models within IDPS frameworks. By integrating XAI techniques, these systems can offer intelligible, user-centric explanations for their outputs, thereby fostering trust and facilitating broader acceptance across organizational contexts.
This integration aligns with the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2), which identifies performance expectancy, effort expectancy, and social influence as significant predictors of technology adoption. This study explores how XAI can improve user adoption of ML-based IDPS by fostering trust through transparency. The research is grounded in the UTAUT2 and Expectation Confirmation Theory (ECT) frameworks. The study will employ empirical methods to assess the real-world applicability of XAI-enhanced IDPS, the determinants of user trust, and system adoption. Specifically, this research aims to elucidate the pivotal role of explainability in AI-based IDPS and to formulate actionable strategies that enhance user trust, increase transparency, and promote widespread system adoption. In doing so, it seeks to bridge the gap between technological capability and user accessibility, thereby strengthening the operational efficacy and security posture of organizations employing these advanced systems.

Keywords: AI-based IDPS, ML-based IDPS, XAI, Trust, IT adoption, UTAUT2, ECT, Cybersecurity
Recommended Citation
Bakare, Akeem Abayomi, "Enhancing User Adoption of Intrusion Detection and Prevention Systems (IDPS) through Explainable Artificial Intelligence: An Integration of User Trust into the ECT and UTAUT2 Frameworks" (2025). AMCIS 2025 TREOs. 218.
https://aisel.aisnet.org/treos_amcis2025/218