Zero-Shot Machine Unlearning with Proxy Adversarial Data Generation

Authors: Huiqiang Chen, Tianqing Zhu, Xin Yu, Wanlei Zhou

IJCAI 2025

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | Section 4.1, Experimental Setup: "Dataset and model architecture. Following previous works, we evaluate the proposed method on four benchmarks: FaceScrub [Ng and Winkler, 2014], SVHN [Netzer et al., 2011], CIFAR-10 and CIFAR-100 [Krizhevsky et al., 2009]. We apply four representative network architectures in our experiments: AlexNet [Krizhevsky et al., 2012], VGG [Simonyan and Zisserman, 2014], ResNet [He et al., 2015], and ViT [Dosovitskiy et al., 2020]. Baselines. We compare our approach with several baselines."
Researcher Affiliation | Academia | Huiqiang Chen (1,2), Tianqing Zhu (1), Xin Yu (3), Wanlei Zhou (1); (1) City University of Macau, Macau, China; (2) University of Technology Sydney, NSW, Australia; (3) University of Queensland, QLD, Australia. EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes the proposed method in Section 3, detailing the steps for proxy adversarial data generation, unlearning with orthogonal projection, and influence-based pseudo-label optimization using mathematical formulations and descriptive text, but it includes no explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper contains no explicit statement about releasing source code, no link to a code repository, and no indication of code availability in supplementary materials.
Open Datasets | Yes | "Dataset and model architecture. Following previous works, we evaluate the proposed method on four benchmarks: FaceScrub [Ng and Winkler, 2014], SVHN [Netzer et al., 2011], CIFAR-10 and CIFAR-100 [Krizhevsky et al., 2009]."
Dataset Splits | Yes | "Following previous works, we evaluate the proposed method on four benchmarks: FaceScrub [Ng and Winkler, 2014], SVHN [Netzer et al., 2011], CIFAR-10 and CIFAR-100 [Krizhevsky et al., 2009]. ... Evaluation metrics and implementation details. Following the literature, we assess the unlearned model with three metrics: 1) Acc_ut: accuracy on the testing set of unlearning classes. ... 3) Acc_rt: accuracy on the testing set of remaining classes."
Hardware Specification | Yes | "Computational cost of ZS-PAG. We conduct the experiment on an NVIDIA RTX 4090 GPU."
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers, such as programming languages, libraries, or frameworks.
Experiment Setup | Yes | "We utilize projected gradient descent [Madry et al., 2017] to generate adversarial samples D_adv in the experiment. ... We fix the unlearning epochs to 10 for a fair comparison. ... As shown in the experiment, setting n_adv = 100 is sufficient for our needs. ... We generate adversarial samples with varying noise bound ε."
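The quoted setup generates adversarial samples with projected gradient descent [Madry et al., 2017] under an L-infinity noise bound ε. The paper does not release code, so the following is only a minimal NumPy sketch of the standard PGD loop, shown on a toy binary logistic model with a hand-derived input gradient; the function name, step size, and model are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=10):
    """Standard L-infinity PGD: take signed gradient-ascent steps on the
    loss w.r.t. the input, projecting back into the eps-ball around x."""
    x_orig = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        # Logistic model: p = sigmoid(w.x + b); binary cross-entropy loss.
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad = (p - y) * w                      # dL/dx for logistic loss
        x_adv = x_adv + alpha * np.sign(grad)   # ascent step on the loss
        # Project back into the L-infinity ball of radius eps around x.
        x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)
    return x_adv
```

In the paper's setting the model would be a trained classifier (e.g. a ResNet) and the gradient would come from autodiff; the projection and signed-step structure are the same, with eps playing the role of the varying noise bound ε from the quote.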