Pixel2Feature Attack (P2FA): Rethinking the Perturbed Space to Enhance Adversarial Transferability
Authors: Renpu Liu, Hao Wu, Jiawei Zhang, Xin Cheng, Xiangyang Luo, Bin Ma, Jinwei Wang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerous experimental results strongly demonstrate the superior transferability of P2FA over State-Of-The-Art (SOTA) attacks. Extensive experiments on the ImageNet-NIPS dataset (Kurakin et al., 2018) demonstrate that P2FA significantly improves adversarial transferability. |
| Researcher Affiliation | Academia | 1School of Computer Science, Nanjing University of Information Science and Technology, Nanjing, China 2State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou, China 3School of Cyber Security and Information Law, Chongqing University of Posts and Telecommunications, Chongqing, China 4School of Cyber Security, Qilu University of Technology (Shandong Academy of Sciences), Shandong, China 5College of Cryptology and Cyber Science, Nankai University, Tianjin, China. Correspondence to: Jinwei Wang <wjwei EMAIL>, Xiangyang Luo <luoxy EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Pixel2Feature Attack |
| Open Source Code | Yes | Code is available at: https://github.com/WH-Lrp/P2FA. |
| Open Datasets | Yes | Dataset. For a fair comparison, we adhere to previous work by utilizing the ImageNet-NIPS dataset (Kurakin et al., 2018), which comprises 1000 images from the NIPS 2017 adversarial competition. |
| Dataset Splits | No | The paper states it uses the ImageNet-NIPS dataset, which comprises 1000 images from the NIPS 2017 adversarial competition. However, it does not provide specific training/test/validation dataset splits (e.g., percentages, sample counts, or explicit standard split references) for these 1000 images. |
| Hardware Specification | No | The paper does not explicitly mention specific hardware details such as GPU models, CPU models, or memory used for running experiments. |
| Software Dependencies | No | The paper describes various parameter settings and model configurations but does not provide specific software dependencies (e.g., programming languages, libraries, or frameworks) with version numbers that would be needed for replication. |
| Experiment Setup | Yes | Implementation Details. For a fair comparison, we adhere to the parameter settings of FIA (Wang et al., 2021). Specifically, we set the maximum perturbation to ϵ = 16 and the number of integration steps for the aggregated gradient to N = 30. In addition, we set the decay factor to µ = 1.0 for all the baselines, as they are optimized using the momentum method. For the input transformation methods, we set the transformation probability of DIM to 0.7, the amplification factor of PIM to 2.5 and the kernel size to 3. We select consistent intermediate layers for the feature-level attacks: Mixed 5b for the Inc-v3 model, feature.6 for the Inc-v4 model, Conv2d 4a for the IncRes-v2 model and the last layer of block2 for the Res-152 model. For the proposed P2FA, we set the step size to 105, the number of perturbations to 3, and utilize the feature importance derived from BFA. |
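
The experiment settings quoted above can be collected into a single configuration sketch for reimplementation. This is a minimal illustration only: the key names and dict layout are assumptions, not taken from the released P2FA code, and only the values are from the paper.

```python
# Hedged sketch of the reported P2FA experiment settings.
# Key names are illustrative assumptions; values are quoted from the paper.
P2FA_CONFIG = {
    "epsilon": 16,              # maximum perturbation (L-inf budget, ϵ)
    "aggregation_steps": 30,    # integration steps N for the aggregated gradient
    "momentum_decay": 1.0,      # decay factor µ used by all momentum baselines
    "dim_transform_prob": 0.7,  # DIM transformation probability
    "pim_amplification": 2.5,   # PIM amplification factor
    "pim_kernel_size": 3,       # PIM projection kernel size
    "step_size": 105,           # step size as reported for P2FA
    "num_perturbations": 3,     # number of perturbations for P2FA
    # Intermediate layers selected for the feature-level attacks:
    "feature_layers": {
        "Inc-v3": "Mixed 5b",
        "Inc-v4": "feature.6",
        "IncRes-v2": "Conv2d 4a",
        "Res-152": "last layer of block2",
    },
}
```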