Paper Type
ERF (Emerging Research Forum)
Description
As cyberattacks grow more sophisticated and prevalent, analysts face new challenges such as an influx of data and a high proportion of false alerts. Many AI-driven tools have been incorporated into detection systems to help analysts find anomalies, but the black-box nature of many AI models makes them difficult for analysts to use effectively. This has led to a growing need for Explainable AI (XAI) to make AI models more transparent. Yet personalized XAI, which would let analysts receive explanations tailored to their needs, remains lacking. To address this problem, we develop a system that predicts an analyst's need for explainability using a Bayesian Network (BN). We first identify the factors that influence analysts' needs for explainability and then use a data-driven method to learn the network structure. The system's performance will be evaluated in an experiment involving twenty participants.
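To make the approach concrete, the following is a minimal sketch of the kind of pipeline the abstract describes, under stated assumptions: it uses the pgmpy library, synthetic data, and hypothetical factor variables (experience, task_complexity, alert_confidence, need_explanation) that stand in for the factors the paper identifies. It is an illustration, not the authors' implementation.

# Minimal sketch (assumptions, not the authors' implementation): learn a
# Bayesian Network structure from data, then query the probability that
# an analyst needs an explanation. Requires numpy, pandas, and pgmpy.
import numpy as np
import pandas as pd
from pgmpy.estimators import BicScore, HillClimbSearch, MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination
from pgmpy.models import BayesianNetwork

# Synthetic, discretized observations (0 = low, 1 = high) of hypothetical
# factors that might influence an analyst's need for an explanation.
rng = np.random.default_rng(0)
n = 500
experience = rng.integers(0, 2, n)
task_complexity = rng.integers(0, 2, n)
alert_confidence = rng.integers(0, 2, n)
# Assumed pattern: the need for an explanation rises with task complexity
# and low alert confidence, and falls with analyst experience.
p_need = np.clip(
    0.2 + 0.4 * task_complexity + 0.3 * (1 - alert_confidence)
    - 0.2 * experience,
    0.05, 0.95,
)
data = pd.DataFrame({
    "experience": experience,
    "task_complexity": task_complexity,
    "alert_confidence": alert_confidence,
    "need_explanation": (rng.random(n) < p_need).astype(int),
})

# Data-driven structure learning: hill-climbing search scored with BIC.
dag = HillClimbSearch(data).estimate(scoring_method=BicScore(data))
model = BayesianNetwork(dag.edges())
model.add_nodes_from(data.columns)  # keep variables the search left isolated
model.fit(data, estimator=MaximumLikelihoodEstimator)

# Posterior probability that this analyst needs an explanation, given the
# observed context of the current alert.
posterior = VariableElimination(model).query(
    variables=["need_explanation"],
    evidence={"task_complexity": 1, "alert_confidence": 0},
)
print(posterior)

The two-step shape mirrors the abstract: a structure-learning pass builds the BN over the identified factors from data, and standard BN inference then yields a per-analyst, per-context prediction of the need for explainability.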
Paper Number
1129
Recommended Citation
Zhong, Chen; Ni, Qinwei; and Chen, Ping, "Predicting Analysts' Needs for Explainable Artificial Intelligence (XAI) in Cybersecurity Analysis" (2023). AMCIS 2023 Proceedings. 4.
https://aisel.aisnet.org/amcis2023/sig_sec/sig_sec/4
SIG SEC