Efficient Source-free Unlearning via Energy-Guided Data Synthesis and Discrimination-Aware Multitask Optimization
Authors: Xiuyuan Wang, Chaochao Chen, Weiming Liu, Xinting Liao, Fan Wang, Xiaolin Zheng
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three benchmark datasets demonstrate that DSDA outperforms existing unlearning methods, validating its effectiveness and efficiency in source-free unlearning. |
| Researcher Affiliation | Academia | 1Zhejiang University, China. Correspondence to: Chaochao Chen <EMAIL>. |
| Pseudocode | Yes | The pseudocode is shown in Algorithm 1. |
| Open Source Code | No | The paper does not contain any explicit statements regarding the availability of source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009) and Pins Face Recognition (Hereis, 2024) datasets. |
| Dataset Splits | No | The paper mentions using CIFAR-10, CIFAR-100, and Pins Face Recognition datasets, which are standard benchmarks. However, it does not explicitly state the specific training/validation/test splits, percentages, or sample counts used for these datasets within the main text. |
| Hardware Specification | Yes | All experiments are conducted on two NVIDIA RTX 3090 GPUs and repeated three times with different random seeds. |
| Software Dependencies | No | We implement all experiments in Python 3.9 and use the PyTorch library (Paszke et al., 2019). |
| Experiment Setup | Yes | Both the original and retrained models are trained from scratch using a multi-step learning rate scheduler, which begins with a learning rate of 0.01, and optimized with the Adam optimizer (Kingma & Ba, 2014). For a fair comparison, the batch sizes of all methods are set to 256 in ResNet18 and 32 in ViT. |
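The training configuration quoted in the Experiment Setup row can be sketched in PyTorch. This is a minimal illustration, not the authors' code: the model stand-in and the scheduler milestones are assumptions, since the excerpt above only specifies the initial learning rate (0.01), the Adam optimizer, and the multi-step schedule.

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import MultiStepLR

# Hypothetical model stand-in; the paper uses ResNet18 and ViT backbones.
model = nn.Linear(10, 2)

# Adam optimizer with the reported initial learning rate of 0.01.
optimizer = optim.Adam(model.parameters(), lr=0.01)

# Multi-step decay schedule; these milestones and gamma are assumed,
# as the excerpt does not state them.
scheduler = MultiStepLR(optimizer, milestones=[30, 60], gamma=0.1)

for epoch in range(3):
    # ... training loop over batches here
    # (batch size 256 for ResNet18, 32 for ViT, per the table) ...
    scheduler.step()
```

Stepping the scheduler once per epoch leaves the learning rate at 0.01 until the first milestone, after which it is multiplied by `gamma` at each subsequent milestone.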