Paper Type

ERF (Emergent Research Forum)

Abstract

This paper proposes a novel approach to enhancing cybersecurity threat detection by integrating counterfactual reasoning with large language models (LLMs) through a structured "what-if" ontology. Traditional AI-based systems often function as black boxes, identifying threats without offering causal explanations or scenario reasoning. Our framework enables LLMs to simulate hypothetical attack scenarios and assess alternative outcomes, thereby improving detection accuracy and interpretability. Grounded in the TOVE ontology engineering methodology, the system formalizes key cybersecurity entities, causal relations, and counterfactual conditions using languages such as OWL and SWRL. We evaluate the framework using metrics such as detection accuracy, narrative quality, and reasoning robustness. By unifying theoretical foundations from causal reasoning, scenario planning, and explainable AI, our ontology serves as a semantic backbone for LLM-guided analysis. This work contributes a proactive, explainable, and extensible model for anticipating cyber threats and guiding defensive strategies, with implications for future research and implementation in intelligent threat detection systems.

Paper Number

2186

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/2186

Comments

SIGODIS

Aug 15th, 12:00 AM

Enhancing Cybersecurity Threat Detection with Counterfactual Reasoning: A 'What-If' Ontology Approach Using Large Language Models
