Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
A Survey on Transferability of Adversarial Examples Across Deep Neural Networks
Authors: Jindong Gu, Xiaojun Jia, Pau de Jorge, Wenqian Yu, Xinwei Liu, Avery Ma, Yuan Xun, Anjun Hu, Ashkan Khakzar, Zhijiang Li, Xiaochun Cao, Philip Torr
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This survey explores the landscape of the adversarial transferability of adversarial examples. We categorize existing methodologies to enhance adversarial transferability and discuss the fundamental principles guiding each approach. While the predominant body of research primarily concentrates on image classification, we also extend our discussion to encompass other vision tasks and beyond. Challenges and opportunities are discussed, highlighting the importance of fortifying DNNs against adversarial vulnerabilities in an evolving landscape. |
| Researcher Affiliation | Academia | 1 Torr Vision Group, University of Oxford, Oxford, United Kingdom 2 Nanyang Technological University, Singapore 3 Wuhan University, Wuhan, China 4 University of Chinese Academy of Sciences, Beijing, China 5 University of Toronto, Toronto, Canada 6 Sun Yat-sen University, Shenzhen, China |
| Pseudocode | No | The paper describes various methods using mathematical formulations and iterative update rules, such as those for I-FGSM and its variants in Section 3 and Appendix A. However, it does not include any explicitly labeled pseudocode blocks or algorithms with structured steps in a code-like format. |
| Open Source Code | No | In order to facilitate the literature search, we also built and released a project page where the related papers are organized and listed (https://github.com/JindongGu/awesome_adversarial_transferability). The page will be maintained and updated regularly. This link refers to a project page for organizing related papers, not the source code for the methodology described in this survey paper. |
| Open Datasets | Yes | The experiments on large-scale datasets (e.g. ImageNet-1k (Russakovsky et al., 2015)) are infeasible since the traditional algorithms are not scalable to large datasets. |
| Dataset Splits | No | As a survey paper, this work does not conduct its own experiments or analyze specific datasets with defined splits. It discusses datasets in the context of other research but does not provide split information for its own (non-existent) data partitioning. |
| Hardware Specification | No | As a survey paper, this work does not conduct its own experiments and therefore does not specify any hardware used for experimental runs. |
| Software Dependencies | No | As a survey paper, this work does not conduct its own experiments and thus does not list specific software dependencies with version numbers for replication. |
| Experiment Setup | No | As a survey paper, this work does not present its own experimental setup, hyperparameters, or system-level training settings. |