Everywhere Attack: Attacking Locally and Globally to Boost Targeted Transferability

Authors: Hui Zeng, Sanshuai Cui, Biwei Chen, Anjie Peng

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on ImageNet demonstrate that the proposed approach universally improves state-of-the-art targeted attacks by a clear margin, e.g., the transferability of the widely adopted Logit attack can be improved by 28.8%–300%. We also evaluate the crafted AEs on a real-world platform: Google Cloud Vision. Results further support the superiority of the proposed method.
Researcher Affiliation | Academia | Hui Zeng1,2, Sanshuai Cui3, Biwei Chen4, Anjie Peng1. 1Southwest University of Science and Technology, Mianyang, China; 2Guangan Institute of Technology, Guangan, China; 3City University of Macau, Macau, China; 4Beijing Normal University, Zhuhai, China
Pseudocode | Yes | Algorithm 1 summarizes the procedure of integrating the proposed everywhere scheme with the CE attack, where DI, TI, and MI are conventional transferability-enhancing methods.
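The core idea referenced above (a CE-based targeted attack whose loss is aggregated "everywhere", i.e., over the full image plus randomly sampled local blocks) can be sketched as follows. This is NOT the paper's Algorithm 1: the DI/TI/MI components are omitted, the toy linear classifier and all names are illustrative assumptions, and only the paper's stated M (partitions per dimension) and N (number of sampled regions) are carried over.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def everywhere_ce_attack(W, x, target, eps=16 / 255, step=2 / 255,
                         iters=20, M=4, N=9):
    """Targeted L_inf attack on a toy linear classifier z = W @ x.ravel().

    The cross-entropy loss toward `target` is summed over the full image
    and over N randomly chosen blocks of an M x M partition (a hedged
    sketch of the "everywhere" scheme, not the authors' exact procedure).
    """
    H, Wd = x.shape
    bh, bw = H // M, Wd // M
    onehot = np.zeros(W.shape[0])
    onehot[target] = 1.0
    x_adv = x.copy()
    for _ in range(iters):
        # One global view plus N masked local views.
        views = [np.ones_like(x_adv)]
        for _ in range(N):
            i, j = rng.integers(0, M, size=2)
            mask = np.zeros_like(x_adv)
            mask[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] = 1.0
            views.append(mask)
        grad = np.zeros_like(x_adv)
        for mask in views:
            z = W @ (x_adv * mask).ravel()
            # d(CE)/dx for a linear model: W^T (softmax(z) - onehot), masked.
            grad += (W.T @ (softmax(z) - onehot)).reshape(x_adv.shape) * mask
        x_adv = x_adv - step * np.sign(grad)        # descend CE toward target
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # L_inf projection
        x_adv = np.clip(x_adv, 0.0, 1.0)            # valid pixel range
    return x_adv

# Toy usage: random linear classifier on a 16x16 single-channel "image".
W = rng.normal(size=(10, 16 * 16))
x = rng.uniform(size=(16, 16))
adv = everywhere_ce_attack(W, x, target=3)
```

Aggregating the loss over local blocks as well as the whole image is what pushes target-relevant perturbations into every region, rather than concentrating them where the global gradient is largest.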
Open Source Code | Yes | Code: https://github.com/zengh5/Everywhere_Attack
Open Datasets | Yes | Extensive experiments on ImageNet demonstrate that the proposed approach universally improves state-of-the-art targeted attacks by a clear margin. Following recent work on targeted attacks, the experiments are conducted on the ImageNet-compatible dataset comprised of 1000 images. Citation: ImageNet-compatible. 2017. https://github.com/cleverhans-lab/cleverhans/tree/master/cleverhans_v3.1.0/examples/nips17_adversarial_competition. Online.
Dataset Splits | No | The paper mentions using "the ImageNet-compatible dataset comprised of 1000 images" for experiments and that "DTUAP is applied to all 1000 images in our dataset". This indicates that the entire dataset is used for evaluation; no explicit train/validation/test splits are described for the experimental setup. The models used (Inc-v3, Res50, Den121, VGG16, Swin) are pretrained, and no new model training is described that would require a train/validation split from this dataset.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments. It only describes the experimental settings related to the adversarial attack parameters.
Software Dependencies | No | The paper does not provide specific versions for any software dependencies, libraries, or frameworks used (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | Parameters. For all attacks, the perturbations are restricted by the L∞ norm with ϵ = 16 (results under lower budgets are provided in the supplementary material), and the step size is set to 2. The total iteration number T is set to 200 to balance speed and convergence. The number of partitions for each dimension M is set to 4, and the number of samples N is set to 9.
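The stated budget arithmetic is worth making explicit: with step size 2 and T = 200, the raw updates could drift up to 400 per pixel, so it is the per-iteration L∞ projection that enforces ϵ = 16 (all values in 0–255 pixel units). A minimal numerical check, with a random-sign stand-in for the gradient (purely illustrative):

```python
import numpy as np

eps, step, T = 16.0, 2.0, 200  # settings stated in the paper (0-255 scale)

rng = np.random.default_rng(0)
x = rng.uniform(0, 255, size=(8, 8))   # toy "clean image"
x_adv = x.copy()
for _ in range(T):
    g_sign = rng.choice([-1.0, 1.0], size=x.shape)  # stand-in for sign(grad)
    x_adv = x_adv + step * g_sign
    x_adv = x + np.clip(x_adv - x, -eps, eps)       # L_inf projection
    x_adv = np.clip(x_adv, 0.0, 255.0)              # valid pixel range

# Despite 200 steps of size 2, the perturbation never exceeds eps = 16.
assert np.max(np.abs(x_adv - x)) <= eps
```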