PDUDT: Provable Decentralized Unlearning under Dynamic Topologies

Authors: Jing Qiao, Yu Liu, Zengzhe Chen, Mingyi Li, Yuan Yuan, Xiao Zhang, Dongxiao Yu

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "In this section, we evaluate PDUDT from many aspects, such as its statistical indistinguishability from the perturbed retraining algorithm, as well as its efficiency and effectiveness of unlearning." (Sections 5.1 Experimental Setup; 5.2 Experimental Results)
Researcher Affiliation: Academia. "1School of Computer Science and Technology, Shandong University, Qingdao, China; 2Zhongtai Securities Institute for Financial Studies, Shandong University, Jinan, China; 3School of Software, Shandong University, Jinan, China; 4Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, Jinan, China."
Pseudocode: Yes. "Algorithm 1: The Perturbed Retraining Algorithm; Algorithm 2: Decentralized Unlearning Algorithm PDUDT"
Open Source Code: No. The paper does not provide an explicit statement regarding the release of source code for the described methodology, nor does it include a link to a code repository.
Open Datasets: Yes. "According to the complexity of the learning tasks, we train the CNN model for the MNIST (Lecun et al., 1998) and Fashion-MNIST (Xiao et al., 2017) datasets, and the ResNet-18 model (He et al., 2016) for the CIFAR-10 (Krizhevsky & Hinton, 2009) and SVHN (Netzer et al., 2011) datasets."
Dataset Splits: Yes. "MNIST is a handwritten digit dataset containing 60,000 training images and 10,000 test images of grayscale digits (0–9), each with a resolution of 28×28 pixels. Fashion-MNIST consists of 60,000 training images and 10,000 testing images... CIFAR-10 includes 60,000 colored images... with 50,000 images for training and 10,000 for testing... SVHN consists... containing 73,257 training images and 26,032 test images..."
Hardware Specification: Yes. "Our experiments are conducted using PyTorch 2.5.1, Python 3.12, and CUDA 12.1. The experiments run on a cloud server equipped with an Intel(R) Xeon(R) Platinum 8358P CPU and 10 RTX 3090 GPUs, operating on Ubuntu 22.04."
Software Dependencies: Yes. "Our experiments are conducted using PyTorch 2.5.1, Python 3.12, and CUDA 12.1."
Experiment Setup: Yes. "In our experiments, we work with a total of n = 10 clients. Specifically, in each round t, whether there is a connection between any two clients is randomly generated... Each client trains with a batch size of 256 for 1 epoch per round, with a step size of 0.001. The unlearning request from client n is set to occur at round t1 = 100. To save storage space, each client can apply an early-stopping strategy by retaining its neighbors' information only for the first 80 rounds. After performing the unlearning operations, the remaining n - 1 clients continue training collaboratively for an additional 200 rounds."
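The setup excerpt above (randomly generated pairwise connections per round, an unlearning request at round t1 = 100, and neighbor information retained only for the first 80 rounds) can be sketched in a few lines. This is a minimal illustration, not the authors' code: the per-pair edge probability `p = 0.5` is an assumption, since the paper excerpt does not state how the random connections are distributed.

```python
import random

def sample_topology(n, p=0.5, rng=None):
    """Sample a random undirected communication graph over n clients.

    The excerpt says only that each pairwise connection is randomly
    generated per round; the edge probability p = 0.5 is an assumption.
    Returns an adjacency mapping client -> set of neighbor indices.
    """
    rng = rng or random.Random()
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

# Simulate the schedule from the setup: n = 10 clients, an unlearning
# request at round t1 = 100, and neighbor information retained only
# for the first 80 rounds (the early-stopping storage strategy).
n, t1, retain_until = 10, 100, 80
rng = random.Random(0)
history = []  # stored per-round topologies (neighbor information)
for t in range(t1):
    topo = sample_topology(n, rng=rng)
    if t < retain_until:
        history.append(topo)
print(len(history))  # number of rounds of retained neighbor info
```

With these parameters the loop retains exactly 80 topology snapshots; using a per-round random graph rather than a fixed one is what makes the topology "dynamic" in the paper's sense.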