Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Toward Efficient Robust Training against Union of $\ell_p$ Threat Models

Authors: Gaurang Sriramanan, Maharshi Gor, Soheil Feizi

NeurIPS 2022 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We first present results obtained using the ResNet-18 [He et al., 2016] architecture on CIFAR-10 in Table 1. In the first partition of the table, we present models trained solely on the ℓ1 threat model."
Researcher Affiliation | Academia | "Gaurang Sriramanan, Maharshi Gor, Soheil Feizi, Department of Computer Science, University of Maryland, College Park"
Pseudocode | Yes | "Algorithm 1: Nuclear Curriculum Adversarial Training for ℓp Norm Robustness"
Open Source Code | Yes | "Our code and pre-trained models are available here: https://github.com/GaurangSriramanan/NCAT"
Open Datasets | Yes | "In this work, we primarily consider the CIFAR-10 [Krizhevsky et al., 2009] and ImageNet-100 [Russakovsky et al., 2014] datasets, since they have come to form the benchmark for comparative analysis of adversarially robust models."
Dataset Splits | Yes | "Figure 1: Catastrophic Overfitting in ℓ1 Adversarial Training: ... adversarial accuracy (loss) is high (low) on the train set, while being close to zero (high) for validation images." Checklist item 3(b) also states: "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Details included in the Supplementary Material."
Hardware Specification | No | The paper's checklist (3d) states that details of the total amount of compute and the type of resources used are included in the Supplementary Material, but these details are not provided in the main body of the paper.
Software Dependencies | No | The paper does not specify software names with version numbers in its main text.
Experiment Setup | No | The paper's checklist (3b) states that all training details (e.g., data splits, hyperparameters, how they were chosen) are included in the Supplementary Material, but these specific details are not provided in the main body of the paper.