Understanding Model Ensemble in Transferable Adversarial Attack
Authors: Wei Yao, Zeliang Zhang, Huayi Tang, Yong Liu
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, extensive experiments with 54 models validate our theoretical framework, representing a significant step forward in understanding transferable model ensemble adversarial attacks. |
| Researcher Affiliation | Academia | (1) Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China; (2) Beijing Key Laboratory of Research on Large Models and Intelligent Governance; (3) Engineering Research Center of Next-Generation Intelligent Search and Recommendation, MOE; (4) Independent Researcher (contributed ideas during the author's B.S. studies at Huazhong University of Science and Technology, Wuhan, China). Correspondence to: Yong Liu <EMAIL>. |
| Pseudocode | No | The paper describes methods using mathematical formulations and textual explanations but does not include any clearly labeled pseudocode blocks or algorithm listings. |
| Open Source Code | No | The paper does not contain an explicit statement about open-sourcing the code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We conduct our experiments on three datasets, including the MNIST (Le Cun, 1998), Fashion-MNIST (Xiao et al., 2017), and CIFAR-10 (Krizhevsky et al., 2009) datasets. |
| Dataset Splits | No | The paper mentions training on MNIST, Fashion-MNIST, and CIFAR-10, and using these to attack a ResNet-18. It specifies batch size and epochs, but does not provide explicit training, validation, or test dataset splits or refer to standard splits with citations for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, or memory specifications) used for running the experiments. It only refers to 'deep neural networks' and 'models'. |
| Software Dependencies | No | The paper mentions using MI-FGSM, SVRE, and SIA attack methods and the Adam optimizer, but it does not specify any software libraries or frameworks with their version numbers (e.g., PyTorch 1.x, TensorFlow 2.x, Python 3.x). |
| Experiment Setup | Yes | For models trained on MNIST and Fashion-MNIST, we set the number of epochs to 10. For models trained on CIFAR-10, we set the number of epochs to 30. We use the Adam optimizer with the learning rate set to 10e-3. We set the batch size to 64. |
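The reported setup (epochs per dataset, Adam, batch size 64) can be collected into a small sketch that also derives the implied number of optimizer updates. This is not the authors' code: the train-split sizes below are the standard ones for these datasets (an assumption, since the paper gives no explicit splits), and the learning rate is kept as the literal "10e-3" from the paper (i.e., 0.01), though 1e-3 may have been intended.

```python
import math

# Hyperparameters as quoted in the paper's experiment setup.
# Train-split sizes are the standard ones for each dataset
# (assumed; the paper does not state custom splits).
SETUP = {
    "MNIST":         {"train_size": 60_000, "epochs": 10},
    "Fashion-MNIST": {"train_size": 60_000, "epochs": 10},
    "CIFAR-10":      {"train_size": 50_000, "epochs": 30},
}
BATCH_SIZE = 64
LEARNING_RATE = 10e-3  # as written in the paper (= 0.01); 1e-3 may have been intended

def optimizer_steps(dataset: str) -> int:
    """Total Adam updates implied by the reported epochs and batch size."""
    cfg = SETUP[dataset]
    steps_per_epoch = math.ceil(cfg["train_size"] / BATCH_SIZE)
    return steps_per_epoch * cfg["epochs"]

for name in SETUP:
    print(f"{name}: {optimizer_steps(name)} optimizer steps")
```

Making the step count explicit is one way a replication can sanity-check that its data loading matches the reported batch size and epoch counts.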