AIM: Additional Image Guided Generation of Transferable Adversarial Attacks

Authors: Teng Li, Xingjun Ma, Yu-Gang Jiang

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type | Experimental | "We conduct comprehensive experiments under both targeted and untargeted attack settings to demonstrate the efficacy of our proposed approach. Our results show that it achieves superior transferability for targeted attacks and performs on par with state-of-the-art methods for untargeted attacks."
Researcher Affiliation | Academia | Teng Li, Xingjun Ma, Yu-Gang Jiang, Shanghai Key Lab of Intell. Info. Processing, School of CS, Fudan University
Pseudocode | No | The paper describes methods and equations but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code: https://terrytengli.com/s/Ce83N
Open Datasets | Yes | "We train the adversarial generator using the ImageNet dataset (Deng et al. 2009)... assess adversarial transferability across three distinct datasets: CUB-200 (Wah et al. 2011), Stanford Cars (Krause et al. 2013), and Oxford Flowers (Nilsback and Zisserman 2008)."
Dataset Splits | No | The paper uses well-known datasets (ImageNet, CUB-200, Stanford Cars, Oxford Flowers) but does not explicitly provide training/validation/test splits for its experimental setup, whether as percentages, sample counts, or citations to standard splits.
Hardware Specification | No | The paper does not explicitly describe the hardware (e.g., GPU or CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper mentions software components such as the Adam optimizer, a ResNet generator, and the Torchvision model zoo, but does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | "The training process employs the Adam optimizer with a learning rate of 2e-4. We set momentum decay factors at 0.5 and 0.999. We train the generator for 1 epoch with a batch size of 16. As for the attack settings, we establish the following parameters: an attack budget of ϵ = 16/255 for targeted settings and ϵ = 10/255 for untargeted settings."
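The reported hyperparameters can be sketched in PyTorch as follows. This is a minimal sketch, not the authors' released code: the one-layer `generator` is a hypothetical stand-in for the paper's ResNet generator, and `project_linf` is an assumed L-infinity projection consistent with the stated attack budgets; only the Adam settings (lr 2e-4, betas 0.5/0.999), batch size 16, and ϵ values come from the paper.

```python
import torch

# Stand-in for the paper's ResNet generator (architecture not reproduced here).
generator = torch.nn.Sequential(torch.nn.Conv2d(3, 3, kernel_size=3, padding=1))

# Adam with the reported learning rate and momentum decay factors (betas).
optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))

# Attack budgets as reported in the paper.
EPS_TARGETED = 16 / 255    # targeted settings
EPS_UNTARGETED = 10 / 255  # untargeted settings

def project_linf(x_adv, x, eps):
    """Clip the perturbation into the L-infinity ball of radius eps around the
    clean image x, then clamp to the valid pixel range [0, 1]."""
    delta = torch.clamp(x_adv - x, -eps, eps)
    return torch.clamp(x + delta, 0.0, 1.0)

# One illustrative step: batch size 16, as reported.
x = torch.rand(16, 3, 224, 224)
x_adv = project_linf(generator(x), x, EPS_TARGETED)
```

After projection, every pixel of `x_adv` stays within ϵ of the clean input, which is what the attack-budget constraint requires.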