MiniMal: Hard-Label Adversarial Attack Against Static Malware Detection with Minimal Perturbation

Authors: Chengyi Li, Zhiyuan Jiang, Yongjun Wang, Tian Xia, Yayuan Zhang, Yuhang Mao

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results indicate that MiniMal achieves an attack success rate of over 98% against three leading machine learning detectors, improving performance by approximately 4.8% to 7.1% compared to state-of-the-art methods.
Researcher Affiliation | Academia | College of Computer Science and Technology, National University of Defense Technology, Changsha, China; University of Southern California. EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1 outlines the complete process of using the PSO algorithm to optimize the perturbation content.
Open Source Code | Yes | Our source code and experimental data are available at https://github.com/2002lcy0401/MiniMal.
Open Datasets | Yes | For this study, we primarily sourced data from the publicly available Malware Detection PE-Based Dataset [Tuan et al., 2018], which has been widely used in previous work [Zhan et al., 2023b]. It includes five malware types: Locker, Mediyes, Winwebsec, Zbot, and Zeroaccess, as well as 1,000 benign software samples. Additionally, we downloaded other common malware types from the Malware Bazaar website [Malware Bazaar, 2024], including Trojan, Backdoor, and Ransomware.
Dataset Splits | No | To ensure compatibility with all target detectors, we select 2,642 malware samples and 473 benign samples that are correctly classified by the target detectors. The paper does not provide specific training/test/validation splits for these samples.
Hardware Specification | Yes | All experiments were conducted on a computer equipped with an NVIDIA GeForce RTX 4070 and a Linux server featuring an AMD EPYC 9654 96-core processor.
Software Dependencies | No | We developed the prototype implementation of MiniMal using Python. This statement only mentions the programming language without a version number or specific library versions.
Experiment Setup | Yes | Following prior work [He et al., 2024], we set the query budget to 500 and the maximum perturbation rate to 1000% for each method to fully utilize their performance.
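The Pseudocode row notes that the paper's Algorithm 1 uses particle swarm optimization (PSO) to optimize perturbation content. As a minimal illustration of what such a loop looks like, here is a generic PSO sketch in Python; it is not the paper's Algorithm 1, and the function name, hyperparameters, and toy objective are all assumptions. In the actual attack, the objective would query a hard-label malware detector and score perturbation size, which is omitted here.

```python
import random

def pso_minimize(objective, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0)):
    """Minimal particle swarm optimizer over a continuous search space.

    Hypothetical sketch: hyperparameter values are illustrative defaults,
    not those used by MiniMal.
    """
    lo, hi = bounds
    # Initialize particle positions uniformly at random, velocities at zero.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # per-particle best positions
    pbest_val = [objective(p) for p in pos]       # and their objective values
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best so far

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Standard PSO velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Move the particle and clip to the search bounds.
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy usage: minimize the sphere function as a stand-in for a
# perturbation-size objective (a real attack would instead query a detector).
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=5)
```

Note that a hard-label attack like the one the paper describes only observes the detector's final decision per query, which is why a derivative-free optimizer such as PSO fits the setting.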