Abstract

This study addresses the vulnerability of AI systems to adversarial attacks by extending the DeepFool algorithm. The paper proposes four new approaches and evaluates them according to a set of criteria. The methods are inspired by various optimisation algorithms. One of the proposed improvements adds an independent refinement stage, which reduces the final perturbation without extra gradient computations. Experimental results show that the appropriately modified algorithm reaches the decision boundary in fewer steps and with fewer gradient evaluations, while the refinement stage further decreases the magnitude of the perturbation. The combined approach can improve attack efficiency and reduce detectability, suggesting the potential for a wider application of advanced optimisation techniques in adversarial example generation.
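To make the two ideas in the abstract concrete, the sketch below shows the classic binary DeepFool step (iteratively projecting onto the linearized decision boundary) followed by a gradient-free refinement in the spirit described above: a binary search along the segment between the clean input and the adversarial one, using only forward evaluations. This is a minimal illustration on a toy linear classifier, not the paper's actual method; the function names and the bisection-based refinement are assumptions for illustration.

```python
import numpy as np

# Toy linear classifier: f(x) = w.x + b; sign(f(x)) is the predicted class.
w = np.array([2.0, -1.0])
b = 0.5

def f(x):
    return float(w @ x + b)

def grad_f(x):
    return w  # gradient of a linear model is constant

def deepfool_binary(x0, max_iter=50, overshoot=0.02):
    """Classic binary DeepFool: repeatedly take the minimal L2 step
    onto the linearized decision boundary until the label flips."""
    x = x0.copy()
    for _ in range(max_iter):
        fx = f(x)
        if np.sign(fx) != np.sign(f(x0)):
            break  # label flipped: x is adversarial
        g = grad_f(x)
        # Closest point on the hyperplane f(x) + g.r = 0.
        r = -fx * g / (np.linalg.norm(g) ** 2 + 1e-12)
        x = x + (1 + overshoot) * r  # slight overshoot to cross the boundary
    return x

def refine(x0, x_adv, steps=20):
    """Gradient-free refinement (illustrative): binary search along the
    segment from x0 to x_adv for the closest point that still flips the
    label. Uses only forward evaluations, no extra gradient computations."""
    lo, hi = 0.0, 1.0
    y0 = np.sign(f(x0))
    for _ in range(steps):
        mid = (lo + hi) / 2
        if np.sign(f(x0 + mid * (x_adv - x0))) != y0:
            hi = mid  # still adversarial: move closer to x0
        else:
            lo = mid
    return x0 + hi * (x_adv - x0)

x0 = np.array([1.0, 1.0])  # f(x0) = 1.5, so the clean class is positive
x_adv = deepfool_binary(x0)
x_ref = refine(x0, x_adv)
assert np.sign(f(x_adv)) != np.sign(f(x0))            # attack succeeded
assert np.linalg.norm(x_ref - x0) <= np.linalg.norm(x_adv - x0)  # smaller perturbation
```

The refinement stage never calls `grad_f`, which mirrors the abstract's claim that the perturbation can be shrunk without additional gradient evaluations.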

Recommended Citation

Mikołajczyk, Ł., Duda, P., Nowicki, R., & Scherer, R. (2025). Improved DeepFool: Efficient Adversarial Attacks via Optimization and Refinement. In I. Luković, S. Bjeladinović, B. Delibašić, D. Barać, N. Iivari, E. Insfran, M. Lang, H. Linger, & C. Schneider (Eds.), Empowering the Interdisciplinary Role of ISD in Addressing Contemporary Issues in Digital Transformation: How Data Science and Generative AI Contributes to ISD (ISD2025 Proceedings). Belgrade, Serbia: University of Gdańsk, Department of Business Informatics & University of Belgrade, Faculty of Organizational Sciences. ISBN: 978-83-972632-1-5. https://doi.org/10.62036/ISD.2025.62

Paper Type

Full Paper

DOI

10.62036/ISD.2025.62
