Group-robust Machine Unlearning
Authors: Thomas De Min, Subhankar Roy, Stéphane Lathuilière, Elisa Ricci, Massimiliano Mancini
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on three datasets and show that MIU outperforms standard methods, achieving unlearning without compromising model robustness. Source code available at https://github.com/tdemin16/group-robust_machine_unlearning. Tables 1 to 3 show results for group-robust unlearning on CelebA (Liu et al., 2015), Waterbirds (Sagawa et al., 2020), and FairFace (Karkkainen & Joo, 2021) using an unlearning ratio r of 0.5. Section 4.5 shows a complete ablation study of MIU's components. |
| Researcher Affiliation | Academia | Thomas De Min (University of Trento, Italy); Subhankar Roy (University of Bergamo, Italy); Stéphane Lathuilière (Inria Grenoble, Univ. Grenoble Alpes, France); Elisa Ricci (University of Trento, Italy; Fondazione Bruno Kessler, Italy); Massimiliano Mancini (University of Trento, Italy) |
| Pseudocode | Yes | Algorithm 1: PyTorch-like MIU pseudocode. def MIU(model, mine, mine_original, optim, train_dataloader, remaining_dataloader, forget_dataloader): |
| Open Source Code | Yes | Source code available at https://github.com/tdemin16/group-robust_machine_unlearning. |
| Open Datasets | Yes | We conduct experiments on three datasets and show that MIU outperforms standard methods, achieving unlearning without compromising model robustness... CelebA (Liu et al., 2015), Waterbirds (Sagawa et al., 2020), and FairFace (Karkkainen & Joo, 2021)... We obtain Pretrain and Retrain by fine-tuning via empirical-risk minimization a ResNet-18 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015)... |
| Dataset Splits | No | The unlearning ratio is defined as the proportion of samples from that particular group that have been unlearned. After unlearning, the model must have unlearned the forget data and maintained its original robustness... We note that all methods use the same dataset splits; therefore, they must unlearn the same forget set. The paper specifies an 'unlearning ratio' for constructing the forget set but does not explicitly state the percentages or absolute counts for the overall training, validation, and test splits of the datasets used. |
| Hardware Specification | Yes | All experiments ran on a single A100 Nvidia GPU, using PyTorch (Paszke et al., 2019). |
| Software Dependencies | No | All experiments ran on a single A100 Nvidia GPU, using PyTorch (Paszke et al., 2019). The paper mentions PyTorch but does not provide a specific version number. |
| Experiment Setup | Yes | We obtain Pretrain and Retrain by fine-tuning via empirical-risk minimization a ResNet-18 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015) for 30 epochs, using SGD with 0.9 momentum and weight decay. The learning rate is decayed with a cosine annealing scheduler for the entire training. We additionally warm-up the learning rate for the first two epochs using a linear scheduler. We apply standard data augmentation techniques, namely, random resized crop, random horizontal flip, and input normalization (He et al., 2016). We limited fine-tuning to 10 epochs for approximate unlearning methods, searching for the optimal configuration for the other hyperparameters. The λ parameter of MIU is set between 1 and 10 (see Appx. B.5). |
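The quoted setup describes a learning-rate schedule with two epochs of linear warmup followed by cosine annealing over 30 total epochs. A minimal sketch of that per-epoch schedule is below; the base learning rate of 0.01 is an assumption for illustration, since the excerpt does not state it.

```python
import math

def lr_at_epoch(epoch, base_lr=0.01, total_epochs=30, warmup_epochs=2):
    """Learning rate under linear warmup then cosine annealing.

    base_lr is a hypothetical value; the paper excerpt does not specify it.
    """
    if epoch < warmup_epochs:
        # Linear warmup toward base_lr over the first two epochs.
        return base_lr * (epoch + 1) / warmup_epochs
    # Cosine annealing from base_lr down to 0 over the remaining epochs.
    t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))
```

In PyTorch this would correspond to chaining a `LinearLR` warmup with a `CosineAnnealingLR` scheduler on an SGD optimizer (momentum 0.9, weight decay as described).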