Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Manipulating SGD with Data Ordering Attacks
Authors: Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross J. Anderson
NeurIPS 2021 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We extensively evaluate our attacks on computer vision and natural language benchmarks to find that the adversary can disrupt model training and even introduce backdoors. |
| Researcher Affiliation | Academia | Ilia Shumailov (University of Cambridge & University of Toronto & Vector Institute, EMAIL); Zakhar Shumaylov (University of Cambridge, EMAIL); Dmitry Kazhdan (University of Cambridge, EMAIL); Yiren Zhao (University of Cambridge, EMAIL); Nicolas Papernot (University of Toronto & Vector Institute, EMAIL); Murat A. Erdogdu (University of Toronto & Vector Institute, EMAIL); Ross Anderson (University of Cambridge & University of Edinburgh, EMAIL) |
| Pseudocode | Yes | Algorithm 1: A high level description of the BRRR attack algorithm |
| Open Source Code | Yes | Codebase is available here: https://github.com/iliaishacked/sgd_datareorder |
| Open Datasets | Yes | We evaluate our attacks using two computer vision and one natural language benchmarks: the CIFAR-10, CIFAR-100 [19] and AGNews [37] datasets. |
| Dataset Splits | No | The paper discusses training and testing, and uses phrases like 'test dataset loss' and 'Test acc' in its tables, but it does not explicitly define or refer to a 'validation set' or provide specific percentages/counts for train/validation/test splits. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for experiments, such as specific GPU or CPU models, memory, or cloud computing instance types. |
| Software Dependencies | No | The paper mentions 'torchtext' for the AGNews model but does not specify version numbers for any software dependencies used in the experiments. |
| Experiment Setup | Yes | For CIFAR-10, we used 100 epochs of training with target model ResNet18 and surrogate model LeNet5, both trained with the Adam optimizer with 0.1 learning rate and β = (0.99, 0.9). For CIFAR-100, we used 200 epochs of training with target model ResNet50 and surrogate model MobileNet, trained with SGD (0.1 learning rate, 0.3 momentum) and Adam for the real and surrogate models respectively. AGNews models were trained with SGD (learning rate 0.1, momentum 0) for 50 epochs with sparse mean EmbeddingBags. |
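The pseudocode row above refers to Algorithm 1, the paper's BRRR data-ordering attack, in which an adversary reorders the training stream rather than modifying any example. The following is a minimal illustrative sketch, not the authors' implementation: `surrogate_loss` stands in for the adversary's surrogate model, and the ascending-loss ordering is just one of the orderings the paper considers.

```python
# Hypothetical sketch of a data-ordering attack in the spirit of BRRR.
# The attacker controls only the order in which (unmodified) training
# examples are presented to SGD; here we reorder by a surrogate loss.

def surrogate_loss(example):
    # Placeholder for a loss computed by the adversary's surrogate model;
    # in this sketch each example simply carries a precomputed score.
    return example["loss"]

def reorder_low_to_high(dataset):
    """Return the training stream sorted by surrogate loss (ascending).

    A benign pipeline would reshuffle `dataset` each epoch; the attacker
    instead fixes an adversarial order, which the paper shows can slow
    or destabilize training without touching any example's contents.
    """
    return sorted(dataset, key=surrogate_loss)

if __name__ == "__main__":
    stream = [{"id": i, "loss": l} for i, l in enumerate([0.9, 0.1, 0.5])]
    print([ex["id"] for ex in reorder_low_to_high(stream)])  # -> [1, 2, 0]
```

Since only the sampling order changes, such an attack is invisible to any defense that inspects individual training examples, which is the core observation motivating the paper's threat model.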