
Management Information Systems Quarterly

Abstract

Artificial intelligence (AI) is being widely adopted in modern cyber defense to weave automation and scalability into the operational fabric of cybersecurity firms. Today, AI aids in crucial cyber defense tasks such as malware and intrusion detection to keep information technology (IT) infrastructure secure. Despite their value, cyber defense AI agents can be vulnerable to adversarial attacks, in which an adversary deliberately manipulates a malicious input through a sequence of actions so that a targeted cyber defense AI agent fails to correctly determine its maliciousness. Consequently, the robustness of cyber defense AI agents has raised serious concerns in modern cyber defense. Drawing on the computational design science paradigm, we couple robust optimization and reinforcement learning theories to develop a novel framework, reinforcement learning-based adversarial attack robustness (RADAR), that increases the robustness of cyber defense AI agents against adversarial attacks. To demonstrate practical utility, we instantiate RADAR for malware attacks, the primary cause of financial loss in cyber attacks. We rigorously evaluate the performance of RADAR as a situated IT artifact against state-of-the-art machine learning and deep learning-based benchmark methods. Incorporating RADAR into three renowned malware detectors increases their adversarial robustness by up to seven times, on average. We conclude by discussing contributions to information systems research as well as implications for cyber defense stakeholders.
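To make the min-max intuition behind this kind of adversarial hardening concrete, the sketch below trains a toy linear malware detector against a greedy feature-flipping adversary. It is a minimal illustration only, not the RADAR algorithm: the feature vectors, the flip-based action set, and the perceptron-style update are all hypothetical stand-ins for the paper's reinforcement learning and robust optimization machinery.

```python
# Hypothetical sketch of min-max adversarial hardening for a toy linear
# malware detector. Everything here (features, data, action set) is
# illustrative and is NOT the RADAR framework from the paper.

# Toy binary feature vectors (1 = feature present, e.g., a suspicious API call).
benign  = [[1, 0, 0, 0], [0, 1, 0, 0]]
malware = [[1, 1, 1, 1], [0, 1, 1, 1]]

w, b = [0.0] * 4, 0.0                         # linear detector: score > 0 => "malware"
score = lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b

def evade(x, budget=1):
    """Greedy evasion: flip up to `budget` features to lower the detection
    score -- a crude stand-in for an RL policy taking a sequence of actions."""
    x = list(x)
    for _ in range(budget):
        flips = [x[:i] + [1 - x[i]] + x[i + 1:] for i in range(len(x))]
        best = min(flips, key=score)
        if score(best) < score(x):            # accept a flip only if it helps evasion
            x = best
    return x

# Robust (min-max) training: each epoch the detector is updated on the
# adversary's current worst-case (evaded) malware, perceptron-style.
for _ in range(200):
    for x, y in [(v, -1) for v in benign] + [(evade(m), +1) for m in malware]:
        if y * score(x) <= 0:                 # mistake -> move boundary toward y
            w = [wi + y * xi for wi, xi in zip(w, x)]
            b += y
```

In this toy setup the hardened detector ends up flagging both the original malware and its single-flip evaded variants while scoring the benign samples negative, whereas a detector fit only on clean malware has no such guarantee against the adversary.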
