Improved Rates of Differentially Private Nonconvex-Strongly-Concave Minimax Optimization

Authors: Ruijia Zhang, Mingxi Lei, Meng Ding, Zihang Xiang, Jinhui Xu, Di Wang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, experiments on AUC maximization, generative adversarial networks, and temporal difference learning with real-world data support our theoretical analysis." ... "In this section, we evaluate the effectiveness of our proposed Private Diff Minimax method. Due to space constraints, we focus on the AUC maximization experiment here. Additional experiments, including reinforcement learning and generative adversarial networks, are provided in the appendix." ... "Table 1: Comparison of AUC performance in DP-SGDA and Private Diff Minimax on various datasets." ... "Figure 1: Comparison of Gradient Norm, Gradient Variance, and AUC Performance between DP-SGDA and Private Diff."
Researcher Affiliation | Academia | 1. The Chinese University of Hong Kong, Shenzhen; 2. Johns Hopkins University; 3. State University of New York at Buffalo; 4. King Abdullah University of Science and Technology CEMSE; 5. Center of Excellence for Generative AI, KAUST
Pseudocode | Yes | "Algorithm 1: Differentially Private Stochastic Gradient Descent Ascent (DP-SGDA)" ... "Algorithm 2: Clipping (x, C)" ... "Algorithm 3: Private Diff Minimax" ... "Algorithm 4: Mini-batch Stochastic Gradient Ascent (Mini-batch SGA)"
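The paper's pseudocode for Algorithm 2, Clipping(x, C), is not reproduced in this summary. As a hedged sketch, the standard L2-norm clipping used in differentially private optimizers (an assumption about the paper's exact routine, not a transcription of it) looks like:

```python
import numpy as np

def clip(x, C):
    """Scale vector x so its L2 norm is at most C.

    This is the usual DP clipping step: vectors inside the ball of
    radius C are left unchanged; longer vectors are rescaled onto it.
    """
    norm = np.linalg.norm(x)
    if norm <= C or norm == 0.0:
        return x
    return x * (C / norm)
```

For example, clipping the gradient [3.0, 4.0] (norm 5) with C = 1 rescales it to [0.6, 0.8], while a gradient already inside the clipping ball passes through untouched.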
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, a link to a code repository, or mention of code in supplementary materials.
Open Datasets | Yes | "Our experiments are based on two common datasets, MNIST and Fashion MNIST, which are transformed into binary classes by randomly partitioning the data into two groups."
Dataset Splits | No | The paper mentions creating imbalanced conditions: "setting an imbalance ratio of 0.1 for training, where minority classes are underrepresented, and 0.5 for testing." While this describes the class composition within the training and testing sets, it does not explicitly state the overall train/test/validation split percentages or sample counts for the datasets used.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software names with version numbers for its dependencies.
Experiment Setup | Yes | "We set privacy budget ε = {0.5, 1, 5, 10} and δ = 1/n^1.1. A two-layer multilayer perceptron is used, consisting of 256 and 128 neurons, respectively."
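The quoted setup can be sketched in code. The layer sizes (256, 128) and the formula δ = 1/n^1.1 come from the paper's text; the input dimension (784 for flattened MNIST), the single-score output for binary AUC, the ReLU activations, and the training-set size n = 60,000 are all assumptions added for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed two-layer MLP: 784-dim MNIST input -> 256 -> 128 -> 1 score.
# Only the 256 and 128 hidden widths are stated in the paper.
W1 = rng.standard_normal((784, 256)) * 0.01
W2 = rng.standard_normal((256, 128)) * 0.01
w3 = rng.standard_normal((128, 1)) * 0.01

def mlp(x):
    """Forward pass with ReLU activations (activation choice is assumed)."""
    h1 = np.maximum(x @ W1, 0.0)
    h2 = np.maximum(h1 @ W2, 0.0)
    return h2 @ w3

# Privacy parameter delta = 1 / n^1.1, with n = 60,000 (assumed MNIST
# training-set size; the paper gives only the formula).
delta = 60_000 ** -1.1
```

With these assumptions δ comes out to roughly 5.5e-6, i.e. well below the common 1/n heuristic, which is consistent with the paper's choice of a super-linear decay exponent.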