Paper Type

Complete

Paper Number

PACIS2025-1392

Description

Smart contracts underpin decentralized applications yet pose unique security challenges due to their complex business logic. Existing vulnerability detection methods that rely on pattern recognition often fail to capture underlying causal relationships. We propose ContractLLM, a novel framework that enhances vulnerability detection by incorporating line-level reasoning. Our approach leverages the explainability of causal code graphs and Graph Neural Networks to generate counterfactual scenarios that expose critical causal pathways. Our Causal Preference Optimization (CPO) algorithm refines LLM reasoning by reinforcing key inference pathways and penalizing misleading trajectories. Implemented on the Llama 3.1 8B Instruct model and evaluated against four state-of-the-art tools, ContractLLM achieves 92.7% accuracy, significantly outperforming existing approaches. Ablation studies confirm that these improvements stem from our framework, which contributes a 61.5-point increase in line-level reasoning accuracy. These findings suggest that ContractLLM provides a causally driven approach to smart contract vulnerability detection and lays a foundation for advanced blockchain security analysis.
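The abstract describes CPO as reinforcing preferred (causally sound) reasoning trajectories while penalizing misleading ones. The paper's actual objective is not given here, but the general idea can be sketched with a DPO-style pairwise preference loss; the function name, `beta`, and the log-probability inputs below are illustrative assumptions, not the authors' implementation.

```python
import math

def preference_loss(logp_chosen, logp_rejected,
                    ref_chosen, ref_rejected, beta=0.1):
    """Illustrative DPO-style preference loss (an assumption, not CPO itself):
    reward the preferred reasoning trajectory and penalize the rejected one,
    measured relative to a frozen reference model's log-probabilities."""
    margin = beta * ((logp_chosen - ref_chosen)
                     - (logp_rejected - ref_rejected))
    # Negative log-sigmoid of the margin: loss shrinks as the policy
    # favors the chosen trajectory more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Under such an objective, a model that assigns higher relative likelihood to the causally grounded trajectory incurs lower loss, which matches the "reinforce key pathways, penalize misleading ones" description above.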

Comments

AI ML

Jul 6th, 12:00 AM

ContractLLM: Managing Smart Contract Vulnerability with LLM Causal Reasoning