Differentially Private Stochastic Optimization with Heavy-tailed Data: Towards Optimal Rates

Authors: Puning Zhao, Jiafei Wu, Zhe Liu, Chong Wang, Rongfei Fan, Qingming Li

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we explore algorithms achieving optimal rates of DP optimization with heavy-tailed gradients. Our first method is a simple clipping approach... We then propose an iterative updating method... Our results match the minimax lower bound, indicating that the theoretical limit of stochastic convex optimization under DP is achievable.
Researcher Affiliation | Academia | 1 School of Cyber Science and Technology, Sun Yat-sen University, Shenzhen, China; 2 Ningbo University, Ningbo, China; 3 Beijing Institute of Technology, Beijing, China; 4 Zhejiang University, Hangzhou, China; EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: Stochastic optimization; Algorithm 2: Simple clipping method for mean estimation; Algorithm 3: Iterative updating method for mean estimation. A hedged code sketch of the clipping-based step appears after this table.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology described, nor does it provide a link to a code repository.
Open Datasets | No | The paper is theoretical and focuses on mathematical proofs and algorithm design for convex optimization problems with heavy-tailed data. It does not describe experiments using any specific dataset, nor does it provide access information for any open datasets.
Dataset Splits | No | The paper is theoretical and does not conduct experiments on specific datasets. Therefore, it does not describe any training/test/validation dataset splits.
Hardware Specification | No | The paper is theoretical, focusing on algorithm design and proofs. It does not describe any experimental evaluations that would require specific hardware, hence no hardware specifications are provided.
Software Dependencies | No | The paper is theoretical and does not discuss the implementation of its proposed algorithms. Therefore, no specific software dependencies with version numbers are mentioned.
Experiment Setup | No | The paper is theoretical and does not present any experimental evaluations. Consequently, there are no details regarding experimental setup, hyperparameters, or system-level training settings.
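
For readers who want a concrete picture of the clipping-based approach the abstract describes, the sketch below shows a generic clipped-mean DP gradient step inside an SGD loop. This is a minimal illustration under stated assumptions, not the authors' implementation: the names `dp_clipped_mean` and `dp_sgd`, the Gaussian-mechanism noise calibration, and the per-step (rather than composed) privacy accounting are all assumptions made here for illustration. The paper's Algorithm 3 (iterative updating) and its optimal-rate analysis are not reproduced.

```python
import numpy as np

def dp_clipped_mean(grads, clip, epsilon, delta, rng):
    """Privately estimate the mean of per-sample gradients (sketch).

    Each row of `grads` is clipped to L2 norm `clip`, the rows are
    averaged, and Gaussian noise calibrated to the clipped mean's
    sensitivity is added.
    """
    n, d = grads.shape
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    mean = clipped.mean(axis=0)
    # Replacing one sample moves the clipped mean by at most 2*clip/n in
    # L2 norm; standard Gaussian-mechanism calibration for one release
    # (composition across steps is deliberately ignored in this sketch).
    sigma = (2.0 * clip / n) * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return mean + rng.normal(0.0, sigma, size=d)

def dp_sgd(per_sample_grad, theta0, steps, lr, clip, epsilon, delta, seed=0):
    """Generic DP-SGD loop built around the private mean estimator above."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        grads = per_sample_grad(theta)  # expected shape: (n, d)
        theta -= lr * dp_clipped_mean(grads, clip, epsilon, delta, rng)
    return theta
```

As a toy usage, `per_sample_grad` could return the per-sample gradients of a least-squares loss. The difficulty the paper addresses is exactly the part this sketch leaves open: with heavy-tailed gradients, choosing the clipping threshold (and, in the iterative updating method, refining the estimate) so that clipping bias does not dominate the error is what makes the optimal rates attainable.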