Notice: The reproducibility variables underlying each score are classified by an automated LLM-based pipeline and validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings
Authors: Raef Bassily, Cristóbal Guzmán, Michael Menart
NeurIPS 2021 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We study differentially private stochastic optimization in convex and non-convex settings. For the convex case, we focus on the family of non-smooth generalized linear losses (GLLs). Our algorithm for the ℓ2 setting achieves optimal excess population risk in near-linear time, while the best known differentially private algorithms for general convex losses run in super-linear time. Our algorithm for the ℓ1 setting has nearly-optimal excess population risk Õ(√(log d)/(nε)), and circumvents the dimension-dependent lower bound of [AFKT21] for general non-smooth convex losses. In the differentially private non-convex setting, we provide several new algorithms for approximating stationary points of the population risk. For the ℓ1-case with smooth losses and polyhedral constraint, we provide the first nearly dimension-independent rate, Õ(log^{2/3}(d)/(nε)^{1/3}), in linear time. For the constrained ℓ2-case with smooth losses, we obtain a linear-time algorithm with rate Õ(1/n^{1/3} + d^{1/5}/(nε)^{2/5}). Finally, for the ℓ2-case we provide the first method for non-smooth weakly convex stochastic optimization with rate Õ(1/n^{1/4} + d^{1/6}/(nε)^{1/3}), which matches the best existing non-private algorithm when d = O(√n). We also extend all our results above for the non-convex ℓ2 setting to the ℓp setting, where 1 < p ≤ 2, with only polylogarithmic (in the dimension) overhead in the rates. |
| Researcher Affiliation | Academia | Raef Bassily, Department of Computer Science & Engineering and Translational Data Analytics Institute (TDAI), The Ohio State University; Cristóbal Guzmán, Department of Applied Mathematics, University of Twente, and Institute for Mathematical and Computational Engineering, Pontificia Universidad Católica de Chile; Michael Menart, Department of Computer Science & Engineering, The Ohio State University |
| Pseudocode | Yes | Algorithm 1 (ApolySFW): Private Polyhedral Stochastic Frank-Wolfe Algorithm |
| Open Source Code | No | The paper does not provide any explicit statements about open-sourcing the code for the described methodology, nor does it include links to a code repository. |
| Open Datasets | No | The paper presents theoretical analysis and algorithms for stochastic optimization; it does not report empirical experiments on specific, publicly available datasets. It refers only to a theoretical sample S = (z1, ..., zn) drawn i.i.d. from a distribution D. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with datasets. Therefore, no training/validation/test dataset splits are described. |
| Hardware Specification | No | The paper is theoretical and focuses on algorithm design and analysis. It does not describe empirical experiments, and therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe empirical experiments. Therefore, it does not list specific software dependencies with version numbers required for replication. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and analysis. It does not describe empirical experiments, and therefore, no specific experimental setup details like hyperparameters or training configurations are provided. |
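The pseudocode row above refers to a private polyhedral stochastic Frank-Wolfe method. As a rough illustration of the general technique (not a reproduction of the paper's Algorithm 1, and with no calibrated privacy accounting), the sketch below runs stochastic Frank-Wolfe over the ℓ1 ball, adding Laplace noise to the gradient before the linear minimization oracle picks a vertex; the function name, noise scale, and step schedule are all illustrative assumptions.

```python
import numpy as np

def private_stochastic_frank_wolfe(grad_oracle, d, radius=1.0,
                                   noise_scale=0.1, steps=200, seed=0):
    """Illustrative DP-style stochastic Frank-Wolfe over an l1 ball.

    Over a polytope, the linear minimization oracle reduces to a noisy
    argmax over coordinates, which is the structural reason polyhedral
    constraints admit nearly dimension-independent private rates.
    NOTE: noise_scale is arbitrary here, not calibrated to any (eps, delta).
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    for t in range(1, steps + 1):
        g = grad_oracle(x, rng)                               # stochastic gradient
        g_noisy = g + rng.laplace(scale=noise_scale, size=d)  # privacy noise
        i = np.argmax(np.abs(g_noisy))                        # LMO: best coordinate
        v = np.zeros(d)
        v[i] = -radius * np.sign(g_noisy[i])                  # l1-ball vertex minimizing <g_noisy, v>
        eta = 2.0 / (t + 2)                                   # standard FW step size
        x = (1 - eta) * x + eta * v                           # convex combination stays feasible
    return x

# Toy usage: noisy least-squares pull toward a sparse target inside the l1 ball.
target = np.zeros(20)
target[3] = 0.5
grad = lambda x, rng: (x - target) + 0.01 * rng.standard_normal(20)
x_hat = private_stochastic_frank_wolfe(grad, d=20)
```

Because every iterate is a convex combination of ℓ1-ball vertices, feasibility holds by construction, with no projection step needed.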