Paper Type
Complete
Paper Number
PACIS2025-1392
Description
Smart contracts underpin decentralized applications yet pose unique security challenges due to their complex business logic. Existing vulnerability detection methods that rely on pattern recognition often fail to capture the underlying causal relationships. We propose ContractLLM, a novel framework that enhances vulnerability detection by incorporating line-level reasoning. Our approach leverages the explainability of causal code graphs and Graph Neural Networks to generate counterfactual scenarios that expose critical causal pathways. Our Causal Preference Optimization (CPO) algorithm refines LLM reasoning by reinforcing key inference pathways and penalizing misleading trajectories. Implemented on the Llama 3.1 8B Instruct model and evaluated against four state-of-the-art tools, ContractLLM achieves 92.7% accuracy, significantly outperforming existing approaches. Ablation studies confirm that these improvements stem from our framework, which contributes a 61.5-point increase in line-level reasoning accuracy. These findings suggest that ContractLLM provides a causally driven approach to smart contract vulnerability detection and lays a foundation for advanced blockchain security analysis.
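The abstract describes CPO as "reinforcing key inference pathways and penalizing misleading trajectories," which resembles a pairwise preference-optimization objective. The paper's exact CPO formulation is not given here; the following is a minimal, hypothetical sketch of a DPO-style preference loss consistent with that description, where a causally correct reasoning trace is preferred over a misleading one. All names (`cpo_loss`, the log-probability inputs, `beta`) are illustrative assumptions, not the authors' implementation.

```python
import math

def cpo_loss(pol_good, ref_good, pol_bad, ref_bad, beta=0.1):
    """Hypothetical CPO objective, sketched as a DPO-style loss.

    pol_good / pol_bad: summed log-probabilities of the causally
        correct ("good") and misleading ("bad") reasoning traces
        under the policy model being fine-tuned.
    ref_good / ref_bad: the same quantities under a frozen
        reference model, which anchors the policy.
    beta: temperature controlling how strongly preferences are
        enforced.
    """
    # Margin by which the policy prefers the good trace over the
    # bad trace, relative to the reference model.
    margin = beta * ((pol_good - ref_good) - (pol_bad - ref_bad))
    # -log(sigmoid(margin)): small when the good trace is favored,
    # large when the misleading trace is favored.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Under this sketch, a policy that assigns relatively higher likelihood to the correct causal trace than the reference does incurs a lower loss, so gradient descent reinforces that inference pathway while suppressing the misleading one.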
Recommended Citation
Luo, Ning; Zheng, Zhiqiang; and Samtani, Sagar, "ContractLLM: Managing Smart Contract Vulnerability with LLM Causal Reasoning" (2025). PACIS 2025 Proceedings. 7.
https://aisel.aisnet.org/pacis2025/aiandml/aiandml/7
ContractLLM: Managing Smart Contract Vulnerability with LLM Causal Reasoning
Comments
AI ML