Improving Generalization of Universal Adversarial Perturbation via Dynamic Maximin Optimization
Authors: Yechao Zhang, Yingzhe Xu, Junyu Shi, Leo Yu Zhang, Shengshan Hu, Minghui Li, Yanjun Zhang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on the ImageNet dataset demonstrate that the proposed DM-UAP markedly enhances both cross-sample universality and cross-model transferability of UAPs. |
| Researcher Affiliation | Academia | ¹Huazhong University of Science and Technology, ²Griffith University, ³University of Technology Sydney |
| Pseudocode | Yes | Algorithm 1: Dynamic Maximin UAP (DM-UAP) with Curriculum Learning |
| Open Source Code | Yes | Code https://github.com/yechao-zhang/DM-UAP |
| Open Datasets | Yes | Comprehensive experiments on the ImageNet dataset demonstrate that the proposed DM-UAP markedly enhances both cross-sample universality and cross-model transferability of UAPs. |
| Dataset Splits | Yes | Setup: Following (Moosavi-Dezfooli et al. 2017; Liu et al. 2023), we randomly select 10 images from each category in the ImageNet training set, resulting in a total of 10,000 images, for UAP generation. In addition, we also consider a data-limited setting, in which only 500 random images from the training set are sampled. Aligning with previous work, we evaluate our method on the ImageNet validation set, which contains 50,000 images, using classical pre-trained CNN models AlexNet, GoogLeNet, VGG16, VGG19, and ResNet152 as target models. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for running the experiments. It only mentions memory consumption in Table 5 but not the actual hardware. |
| Software Dependencies | No | The paper mentions 'Adam' as an optimizer and refers to 'SGD' and 'momentum' methods, but does not provide specific version numbers for any software libraries (e.g., Python, PyTorch, TensorFlow, CUDA) used in the implementation. |
| Experiment Setup | Yes | Hyper-parameters Setting: We set the maximum perturbation budget ϵ of all methods as 10/255. Following SGA (Liu et al. 2023), the number of training epochs T is 20, and the batch size B is 125. The step numbers for inner model optimization Km and data optimization Kd in our method are both 10, with default neighborhood size ρ = 1 and r = 32. |
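To make the reported setup concrete, below is a minimal NumPy sketch of a maximin UAP loop using the hyper-parameters quoted in the table (ε = 10/255, Km = Kd = 10, batch size B = 125). The toy linear classifier, the noise scale of the weight neighborhood, and the step size ε/4 are illustrative assumptions, not the authors' implementation; the real DM-UAP uses curriculum learning over deep CNNs (see Algorithm 1 in the paper).

```python
import numpy as np

# Assumed/illustrative constants; eps, KM, KD, B come from the paper's setup.
EPS = 10 / 255   # L_inf perturbation budget epsilon
KM = 10          # inner model-optimization steps (here: candidate models sampled)
KD = 10          # outer data/perturbation-optimization steps
B = 125          # batch size

rng = np.random.default_rng(0)
D, C = 256, 10                             # toy input dimension / class count
W = rng.normal(size=(D, C)) / np.sqrt(D)   # stand-in linear classifier
x = rng.uniform(0.0, 1.0, size=(B, D))     # one batch of flattened "images"
y = rng.integers(0, C, size=B)             # labels

def loss_and_grad(delta, Wc):
    """Mean cross-entropy on perturbed inputs; gradient w.r.t. the shared delta."""
    logits = (x + delta) @ Wc
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(B), y] + 1e-12).mean()
    p[np.arange(B), y] -= 1.0                     # dL/dlogits (times B)
    grad = (p @ Wc.T).sum(axis=0) / B             # delta is shared across the batch
    return loss, grad

delta = np.zeros(D)                               # the universal perturbation
for _ in range(KD):                               # outer maximization over delta
    # Inner "min": among KM weight candidates in a small neighborhood of W,
    # keep the lowest-loss one -- a crude stand-in for the paper's dynamic
    # model optimization with neighborhood size rho.
    cands = [W + 0.05 * rng.normal(size=W.shape) for _ in range(KM)]
    W_min = min(cands, key=lambda Wc: loss_and_grad(delta, Wc)[0])
    # Outer "max": sign-gradient ascent on delta, clipped to the budget.
    _, g = loss_and_grad(delta, W_min)
    delta = np.clip(delta + (EPS / 4) * np.sign(g), -EPS, EPS)

# delta now satisfies the L_inf constraint max|delta_i| <= EPS.
```

The maximin structure is the key point: the perturbation is maximized against the *worst-case* (lowest-loss) model in a neighborhood, rather than a fixed model, which is what the paper credits for improved cross-model transferability.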