FedSSI: Rehearsal-Free Continual Federated Learning with Synergistic Synaptic Intelligence

Authors: Yichen Li, Yuying Wang, Haozhao Wang, Yining Qi, Tianzhe Xiao, Ruixuan Li

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that FedSSI achieves superior performance compared to state-of-the-art methods. We conduct extensive experiments on various datasets and different CFL task scenarios. Experimental results show that our proposed model outperforms state-of-the-art methods by up to 12.47% in terms of final accuracy on different tasks. (Section 5, Experiments)
Researcher Affiliation | Academia | ¹School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China; ²School of Computer Science and Technology, Soochow University, Suzhou, China. Correspondence to: Ruixuan Li <EMAIL>.
Pseudocode | Yes | Algorithm 1 (FedSSI). Input: T: communication rounds; K: number of clients; η: learning rate; {T^t}_{t=1}^n: distributed dataset with n tasks; w: model parameters; v_k^t: personalized surrogate model on client k for the t-th task; s_{k,i}^t: contribution of the i-th parameter on client k for the t-th task. Output: w_1, w_2, ..., w_k: target classification models.
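The checklist captures only the input/output spec of Algorithm 1. As a rough, hypothetical sketch (the paper releases no code, and this is not FedSSI itself), the Synaptic Intelligence importance machinery that the method builds on, per-parameter path integrals turned into a quadratic surrogate penalty, can be written as:

```python
import numpy as np

def train_task(w, target, lr=0.1, steps=50):
    """Gradient descent on a toy quadratic loss L = 0.5 * ||w - target||^2,
    accumulating the Synaptic Intelligence path integral
    omega_i = sum_t ( -g_i(t) * dw_i(t) )."""
    omega = np.zeros_like(w)
    for _ in range(steps):
        grad = w - target              # dL/dw for the quadratic loss
        w_next = w - lr * grad
        omega += -grad * (w_next - w)  # equals lr * grad**2, hence >= 0
        w = w_next
    return w, omega

def importance(omega, w_end, w_start, xi=1e-3):
    """Per-parameter importance: path integral normalized by squared drift."""
    return omega / ((w_end - w_start) ** 2 + xi)

def si_penalty(w, w_star, Omega, lam=1.0):
    """Surrogate regularizer added to the next task's loss to protect
    parameters that were important for the previous task."""
    return lam * np.sum(Omega * (w - w_star) ** 2)

# Toy run: one "task", then compute the importance weights for the next one.
w0 = np.zeros(3)
w1, omega = train_task(w0, target=np.array([1.0, -2.0, 0.5]))
Omega = importance(omega, w1, w0)
```

Parameters that moved further to reduce the loss (here the second coordinate, toward -2.0) accumulate larger `omega`, and the penalty vanishes at the previous task's solution `w_star = w1`.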
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository or mention code in supplementary materials.
Open Datasets | Yes | Datasets. We conduct our experiments with heterogeneously partitioned datasets across two federated incremental learning scenarios using six datasets: (1) Class-Incremental Learning: CIFAR10 (Krizhevsky et al., 2009), CIFAR100 (Krizhevsky et al., 2009), and TinyImageNet (Le & Yang, 2015); (2) Domain-Incremental Learning: Digit10, Office31 (Saenko et al., 2010), and Office-Caltech10 (Zhang & Davison, 2020).
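The "heterogeneously partitioned" splits, together with the α values reported in Table 5, match the Dirichlet partition that is standard in federated learning benchmarks. A minimal illustrative sketch (our assumption about the partitioning scheme, not code from the paper):

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients with per-class proportions drawn
    from Dir(alpha): small alpha -> highly skewed (non-IID) client shards."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # fraction of this class assigned to each client
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for k, shard in enumerate(np.split(idx, cuts)):
            clients[k].extend(shard.tolist())
    return clients

# Toy 10-class dataset, partitioned as in the CIFAR10 setting
# (C = 20 clients, alpha = 0.1 per Table 5).
labels = np.repeat(np.arange(10), 100)
shards = dirichlet_partition(labels, n_clients=20, alpha=0.1)
```

Every sample lands on exactly one client; with α = 0.1 most clients see only a few of the ten classes, while α = 10.0 (the TinyImageNet setting) yields nearly uniform shards.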
Dataset Splits | Yes | CIFAR10: a dataset with 10 object classes... It consists of 50,000 training images and 10,000 test images. CIFAR100: similar to CIFAR10, but with 100 fine-grained object classes; 50,000 training images and 10,000 test images. TinyImageNet: a subset of ImageNet with 200 object classes; it contains 100,000 training images, 10,000 validation images, and 10,000 test images. MNIST: a dataset of handwritten digits with a training set of 60,000 examples and a test set of 10,000 examples.
Hardware Specification | Yes | All experiments are run on 8 RTX 4090 GPUs and 16 RTX 3090 GPUs.
Software Dependencies | No | The paper does not explicitly state any specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or CUDA versions).
Experiment Setup | Yes | Table 5 (experimental details; settings of the different datasets in the experiments section):

| Attribute | CIFAR10 | CIFAR100 | TinyImageNet | Digit10 | Office31 | Office-Caltech-10 |
|---|---|---|---|---|---|---|
| Task size | 178 MB | 178 MB | 435 MB | 480 MB | 88 MB | 58 MB |
| Image number | 60K | 60K | 120K | 110K | 4.6K | 2.5K |
| Image size | 3×32×32 | 3×32×32 | 3×64×64 | 1×28×28 | 3×300×300 | 3×300×300 |
| Task number n | 5 | 10 | 10 | 4 | 3 | 4 |
| Task scenario | Class-IL | Class-IL | Class-IL | Domain-IL | Domain-IL | Domain-IL |
| Batch size s | 64 | 64 | 128 | 64 | 32 | 32 |
| ACC metric | Top-1 | Top-1 | Top-10 | Top-1 | Top-1 | Top-1 |
| Learning rate l | 0.01 | 0.01 | 0.001 | 0.001 | 0.01 | 0.01 |
| Data heterogeneity α | 0.1 | 1.0 | 10.0 | 0.1 | 1.0 | 1.0 |
| Client number C | 20 | 20 | 20 | 15 | 10 | 8 |
| Local training epochs E | 20 | 20 | 20 | 20 | 20 | 15 |
| Client selection ratio k | 0.4 | 0.5 | 0.6 | 0.4 | 0.4 | 0.5 |
| Communication rounds T | 80 | 100 | 100 | 60 | 60 | 40 |
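For concreteness, one column of Table 5 can be collected into a hypothetical config; the key names below are illustrative (the paper publishes no code), but the values are the CIFAR100 settings reported above:

```python
# Hypothetical config mirroring the CIFAR100 column of Table 5.
cifar100_cfg = {
    "task_scenario": "class-IL",
    "num_tasks": 10,           # n
    "batch_size": 64,          # s
    "learning_rate": 0.01,     # l
    "dirichlet_alpha": 1.0,    # data-heterogeneity level
    "num_clients": 20,         # C
    "local_epochs": 20,        # E
    "selection_ratio": 0.5,    # k: fraction of clients sampled per round
    "rounds": 100,             # T
}

# Number of clients participating in each communication round.
clients_per_round = int(
    cifar100_cfg["num_clients"] * cifar100_cfg["selection_ratio"]
)
```

Under these settings 10 of the 20 clients train locally in each of the 100 rounds, which is the scale a reproduction attempt would need to budget for.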