Differentially Private Optimizers Can Learn Adversarially Robust Models

Authors: Zhiqi Bu, Yuan Zhang

TMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show, both theoretically and empirically, that DP models are Pareto-optimal on the accuracy-robustness trade-off. Empirically, the robustness of DP models is observed consistently across various datasets and models.
Researcher Affiliation | Academia | No institutional affiliations or domain-specific email addresses are given for the authors, only generic Gmail addresses, so a definitive classification of affiliation type is not possible from the provided text. The value '0' is a placeholder required by the strict integer schema.
Pseudocode | No | The paper presents theoretical analysis and experimental results but contains no structured pseudocode or algorithm blocks; the methods are described through mathematical formulations and descriptive text.
Open Source Code | No | The paper states, 'The experiment can be reproduced using the DP vision codebase Private Vision by (Bu et al., 2022a).' This refers to a codebase from a separate, previously published work (Bu et al., 2022a) that is reused for the experiments here; there is no explicit statement that code specific to the methodology and experiments presented *in this paper* is released or available.
Open Datasets | Yes | The paper explicitly mentions and cites several well-known public datasets used in the experiments: CIFAR10 (Krizhevsky et al., 2009), MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), CelebA (Liu et al., 2015), CIFAR100 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), LFW (Huang et al., 2008), and ImageNet (Deng et al., 2009).
Dataset Splits | Yes | The paper states, 'We use the same setting as in Tramer & Boneh (2020)' for the CIFAR10 experiments (Figure 5), and Appendix D explicitly adopts the natural hyperparameters of Tramer & Boneh (2020) for the CIFAR10, MNIST, and Fashion-MNIST experiments. This implies that the dataset splits, as part of that experimental setting, are adopted from the referenced work.
Hardware Specification | Yes | The paper explicitly states: 'We use one Nvidia GTX 1080Ti GPU and the Renyi privacy accountant to calculate the privacy loss.'
Software Dependencies | No | The paper mentions 'Pytorch image models' and several optimizers (DP-SGD, DP-Adam, DP-RMSprop), but it gives no version numbers for any programming language, library, or software dependency used in the experimental setup (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | Yes | The paper provides extensive detail on the experimental setup, including specific hyperparameters and training configurations. The abstract mentions an 'l2 (0.5) attack... l∞ (4/255) attack on CIFAR10 with ε = 2'. Table 1 details the hyperparameter η under (ε, δ) = (2, 1e-5), with models attacked by 20 steps of l∞ (2/255) PGD. Appendix D, 'Hyper-parameter setup', further specifies optimizers (DP-SGD, SGD, DP-RMSprop), learning rates (η_DP, η_nonDP, with concrete values such as 4, 0.4, 0.0002), clipping norms (R = 0.1, 0.0625, 0.25), momentum (0.9), batch sizes (1024, 512, 2048), epochs (5, 10, 40, 50), privacy parameters (ε, δ), and adversarial-attack parameters (l∞ PGD, l2 PGD, 20 steps, α values such as 0.1, 1/255).
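The two mechanisms the setup row refers to can be illustrated in a few lines. This is a minimal NumPy sketch, not the authors' code: the logistic-regression loss, the noise multiplier `sigma`, and both function names are assumptions, while the clipping norm R = 0.1, the learning rate 0.4, and the l∞ PGD radii (ε = 4/255, step α = 1/255) are taken from the values quoted above.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.4, R=0.1, sigma=1.0, rng=None):
    """One DP-SGD step for logistic regression: clip each per-example
    gradient to l2 norm R, add Gaussian noise with std sigma * R to the
    summed gradient, average over the batch, then take an SGD step."""
    rng = np.random.default_rng(0) if rng is None else rng
    p = 1.0 / (1.0 + np.exp(-(X @ w)))          # per-example predictions
    per_ex_grads = (p - y)[:, None] * X          # shape (batch, dim)
    norms = np.linalg.norm(per_ex_grads, axis=1, keepdims=True)
    clipped = per_ex_grads * np.minimum(1.0, R / np.maximum(norms, 1e-12))
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, sigma * R, size=w.shape)
    return w - lr * noisy_sum / len(X)

def pgd_linf_step(x, grad, x_orig, alpha=1 / 255, eps=4 / 255):
    """One l-infinity PGD step: signed-gradient ascent, projected back
    into the eps-ball around x_orig and the valid [0, 1] pixel range."""
    x_adv = x + alpha * np.sign(grad)
    x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)
    return np.clip(x_adv, 0.0, 1.0)
```

Running the attack for 20 iterations of `pgd_linf_step`, as in Table 1, keeps the adversarial example within the ε-ball because each step re-projects onto it.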