Zero-shot Federated Unlearning via Transforming from Data-Dependent to Personalized Model-Centric

Authors: Wenhan Wu, Huanghuang Liang, Jingling Yuan, Jiawei Jiang, Kanye Ye Wang, Chuang Hu, Xiaobo Zhou, Dazhao Cheng

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluations demonstrate its effectiveness in unlearning under non-IID settings. [...] We deploy and evaluate ZeroFU on real-world datasets, achieving an accuracy improvement of up to 26.2% compared to existing zero-shot machine unlearning methods extended to federated scenarios. [...] Section 4 Evaluation. [...] Table 1 compares ZeroFU's accuracy with baseline methods on retained data Dr and forgotten data Df. [...] We conducted MIAs on Df in non-IID scenarios. As shown in Figure 4, ZeroFU achieves MIA accuracy and recall closer to retrained models, demonstrating better privacy protection. [...] We conducted visualization on MNIST and CIFAR10 with ζ = 0.10, using t-SNE [Van der Maaten and Hinton, 2008] to map the personalized features f_i^p of retrained and unlearned models. [...] We first conducted ablation experiments using two variants: (a) w/o L_i^CE, removing the class embedding loss, and (b) w/o FLoss. Results on CIFAR10 are shown in Figure 6a: [...] For scalability, we tested ZeroFU with 5/10/15/20/50 clients in non-IID settings (Figure 6).
Researcher Affiliation | Academia | 1 School of Computer Science, Wuhan University, Wuhan, China; 2 Hubei Key Laboratory of Transportation Internet of Things, School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan, China; 3 State Key Laboratory of Internet of Things for Smart City, University of Macau, Macau SAR
Pseudocode | Yes | Algorithm 1: Model Personalization Based Training. [...] Algorithm 2: ZeroFU Zero-shot Unlearning Process.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | Datasets include: MNIST [LeCun et al., 1998], SVHN [Netzer et al., 2011], Fashion-MNIST [Xiao et al., 2017], and CIFAR10 [Krizhevsky and Hinton, 2009].
Dataset Splits | No | The paper describes how data is distributed among clients to simulate federated learning (e.g., label-shift heterogeneity via a Dirichlet distribution), but it does not explicitly state training, validation, and test splits for the models themselves, implicitly relying on the standard splits of the public datasets above.
Hardware Specification | Yes | We deployed ZeroFU on NVIDIA A100 40GB Tensor Core GPUs using PyTorch 2.3.1 and Python 3.8.
Software Dependencies | Yes | We deployed ZeroFU on NVIDIA A100 40GB Tensor Core GPUs using PyTorch 2.3.1 and Python 3.8.
Experiment Setup | Yes | The learning rate was η = 0.005 with T_l = 3, and regularization parameters λ1 = λ2 = 0.1. During FU, τ = 2, T_k = 9, and the loss hyperparameters were β = 5.0, γ = 2.0.
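The MIA audit quoted in the Research Type row checks whether the forgotten data Df still looks "trained on" after unlearning. A minimal loss-threshold membership-inference sketch illustrates the metric; the attack construction, threshold, and all numbers below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical loss-threshold MIA sketch: a sample is predicted "member"
# when the model's loss on it is low. If unlearning worked, losses on the
# forgotten set Df resemble losses on unseen data, so attack accuracy and
# recall drift toward those measured against a retrained-from-scratch model.

def mia_loss_threshold(member_losses, nonmember_losses, threshold):
    """Predict 'member' when loss < threshold; return (accuracy, recall)."""
    tp = sum(l < threshold for l in member_losses)       # members caught
    tn = sum(l >= threshold for l in nonmember_losses)   # non-members passed
    total = len(member_losses) + len(nonmember_losses)
    accuracy = (tp + tn) / total
    recall = tp / len(member_losses)
    return accuracy, recall

# Illustrative post-unlearning losses: Df losses now overlap unseen-data losses.
df_losses = [1.9, 2.1, 2.4, 1.8]       # forgotten data, after unlearning
unseen_losses = [2.0, 2.3, 1.7, 2.2]   # held-out data
acc, rec = mia_loss_threshold(df_losses, unseen_losses, threshold=2.0)
```

With losses this mixed, the attack is near chance, which is the behavior the paper's Figure 4 comparison against retrained models is measuring.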
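The Dataset Splits row notes that clients are simulated with Dirichlet label-shift heterogeneity. A minimal sketch of such a partition follows; the function name, alpha value, and the gamma-draw construction of the Dirichlet sample are assumptions for illustration, not the paper's code:

```python
import random
from collections import defaultdict

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices among clients with Dirichlet(alpha) label shift.

    For each class, client shares are drawn from Dir(alpha) (sampled here as
    normalized Gamma(alpha, 1) draws); smaller alpha -> more non-IID clients.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)

    clients = [[] for _ in range(num_clients)]
    for y, idxs in by_class.items():
        rng.shuffle(idxs)
        weights = [rng.gammavariate(alpha, 1.0) for _ in range(num_clients)]
        total = sum(weights)
        props = [w / total for w in weights]
        # Hand out contiguous slices of this class's indices per proportion;
        # the last client absorbs rounding leftovers.
        start = 0
        for c in range(num_clients):
            if c < num_clients - 1:
                take = round(props[c] * len(idxs))
            else:
                take = len(idxs) - start
            clients[c].extend(idxs[start:start + take])
            start += take
    return clients

# Illustrative use: 10 classes, 100 samples, 5 clients, alpha = 0.5 (non-IID).
labels = [i % 10 for i in range(100)]
parts = dirichlet_partition(labels, num_clients=5, alpha=0.5)
```

Every index is assigned to exactly one client; sweeping alpha (e.g. 0.1 vs 10) controls how skewed each client's label distribution becomes, which is the non-IID knob the evaluation varies.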