Certified Machine Unlearning via Noisy Stochastic Gradient Descent

Authors: Eli Chien, Haoyu Wang, Ziang Chen, Pan Li

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that our approach achieves a similar utility under the same privacy constraint while using 2% and 10% of the gradient computations compared with the state-of-the-art gradient-based approximate unlearning methods for mini-batch and full-batch settings, respectively.
Researcher Affiliation | Academia | Eli Chien (Department of Electrical and Computer Engineering, Georgia Institute of Technology, U.S.A.); Haoyu Wang (Department of Electrical and Computer Engineering, Georgia Institute of Technology, U.S.A.); Ziang Chen (Department of Mathematics, Massachusetts Institute of Technology, U.S.A.); Pan Li (Department of Electrical and Computer Engineering, Georgia Institute of Technology, U.S.A.)
Pseudocode | Yes | Algorithm 1: (Un)learning with PNSGD (see the sketch after this table)
Open Source Code | Yes | Our code is publicly available: https://github.com/Graph-COM/SGD_unlearning
Open Datasets | Yes | We conduct experiments on MNIST [25] and CIFAR10 [26], which contain 11,982 and 10,000 training instances respectively.
Dataset Splits | No | The paper does not explicitly specify a separate validation dataset split or how it was used to tune hyperparameters or monitor training progress.
Hardware Specification | Yes | The code runs on a server with a single NVIDIA RTX 6000 GPU and an AMD EPYC 7763 64-core processor.
Software Dependencies | Yes | All the experiments run with PyTorch=2.1.2 [36] and numpy=1.24.3 [37].
Experiment Setup | Yes | We set the number of learning iterations T = 10, 20, 50, 1000 to ensure PNSGD converges for mini-batch sizes b = 32, 128, 512, n, respectively. All results are averaged over 100 independent trials, with standard deviations reported as shaded regions in the figures. We set the step size η for the PNSGD unlearning framework to 1/L across all experiments (see the configuration sketch after this table).
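
For readers who want a concrete picture of the Algorithm 1 row, below is a minimal PyTorch sketch of a projected noisy SGD (PNSGD) loop. It is an illustration under stated assumptions, not the paper's exact procedure: the names pnsgd, grad_fn, and radius are hypothetical, the projection set is assumed to be an l2-ball, and the per-step noise scale sigma is left abstract (in the paper it is calibrated so the procedure yields a certified unlearning guarantee).

    import torch

    def pnsgd(w, batches, grad_fn, eta, sigma, epochs, radius):
        """Minimal projected noisy SGD loop (illustrative sketch).

        The same loop serves learning (from a fresh initialization) and
        unlearning (warm-started from the learned weights, with the deleted
        points removed from `batches` and run for a few extra epochs).
        """
        for _ in range(epochs):
            for batch in batches:
                # Standard SGD step plus isotropic Gaussian noise; the
                # injected noise is what enables the certification analysis.
                w = w - eta * grad_fn(w, batch) + sigma * torch.randn_like(w)
                # Project back onto an l2-ball of the given radius so the
                # iterates stay in a bounded convex set (an assumption of
                # this sketch, matching common analyses of projected SGD).
                norm = w.norm()
                if norm > radius:
                    w = w * (radius / norm)
        return w

Under this view, unlearning cost is naturally measured in additional gradient computations, which is the metric the 2% and 10% savings quoted above refer to.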
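
To make the reported experiment setup concrete, here is a small configuration sketch; the constant names are hypothetical, and L denotes the smoothness constant of the loss from which the step size is derived.

    # Reported setup (MNIST / CIFAR10): learning iterations T per mini-batch
    # size b, where "n" denotes the full-batch setting.
    ITERS_PER_BATCH = {32: 10, 128: 20, 512: 50, "n": 1000}
    NUM_TRIALS = 100  # results averaged over 100 independent trials
    # Step size: eta = 1/L, with L the smoothness constant of the loss.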