MultiSFL: Towards Accurate Split Federated Learning via Multi-Model Aggregation and Knowledge Replay

Authors: Zeke Xia, Ming Hu, Dengke Yan, Ruixuan Liu, Anran Li, Xiaofei Xie, Mingsong Chen

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results obtained from various non-IID and IID scenarios demonstrate that MultiSFL significantly outperforms conventional SFL methods, with up to a 23.25% improvement in test accuracy.
Researcher Affiliation | Academia | 1MoE Engineering Research Center of SW/HW Co-Design Tech. and App., East China Normal University, China; 2School of Computing and Information Systems, Singapore Management University, Singapore; 3Department of Biomedical Informatics & Data Science, School of Medicine, Yale University
Pseudocode | Yes | Algorithm 1 details the implementation of our MultiSFL approach.
Open Source Code | No | The paper neither states that source code for the described methodology is released nor provides a link to a code repository.
Open Datasets | Yes | We compared MultiSFL with all baselines on four well-known datasets, i.e., CIFAR-10, CIFAR-100 (Krizhevsky 2009), FEMNIST (Caldas et al. 2018), and Tiny-ImageNet (Deng et al. 2009).
Dataset Splits | No | The paper describes how non-IID distributions were created using the Dirichlet distribution for CIFAR-10, CIFAR-100, and Tiny-ImageNet, and notes that FEMNIST is naturally non-IID. It also specifies client selection for federated learning rounds (10% of 100 clients). However, it does not provide specific train/validation/test splits (e.g., percentages or sample counts) needed for reproducibility.
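The Dirichlet-based non-IID partition described above (together with the α = 0.1 setting reported under Experiment Setup) can be sketched as follows. This is an illustrative reconstruction of the standard Dirichlet label-skew scheme, not the paper's code; the function and parameter names are my own.

```python
import numpy as np

def dirichlet_partition(labels, n_clients=100, alpha=0.1, seed=0):
    """Split sample indices among clients with a Dirichlet(alpha) label skew.

    Smaller alpha produces more skewed (more non-IID) per-client label
    distributions. Illustrative sketch; names are not from the paper.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Draw the proportion of class-c samples assigned to each client.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props) * len(idx)).astype(int)[:-1]
        for client, shard in zip(clients, np.split(idx, cuts)):
            client.extend(shard.tolist())
    return clients

# Example: 10 balanced classes of 500 samples each, split across 10 clients.
parts = dirichlet_partition(np.repeat(np.arange(10), 500), n_clients=10, alpha=0.1)
```

With α = 0.1, most clients end up dominated by a handful of classes, mimicking the heterogeneity the paper evaluates against.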
Hardware Specification | Yes | All experimental results were obtained from an Ubuntu workstation equipped with an Intel i9 CPU, 64GB of memory, and an NVIDIA RTX 4090 GPU.
Software Dependencies | No | The paper states, "We implemented MultiSFL using the PyTorch framework (Paszke et al. 2019)", but it does not provide a version number for PyTorch or any other software dependency used in the implementation.
Experiment Setup | Yes | For all methods, we adopted an SGD optimizer with a fixed learning rate of 0.01 and a momentum of 0.5, and set the batch size to 50. For our method, we set α to 0.1 and γ to 0.5.
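The reported optimizer settings (SGD, learning rate 0.01, momentum 0.5) can be illustrated with a minimal, dependency-light sketch of one update step, assuming the standard PyTorch-style momentum rule v ← μv + g, w ← w − ηv; this is my own reconstruction, not the authors' code.

```python
import numpy as np

# Hyperparameters as reported in the paper's experiment setup.
LR, MOMENTUM = 0.01, 0.5

def sgd_momentum_step(w, grad, velocity, lr=LR, momentum=MOMENTUM):
    """One SGD-with-momentum update: v <- mu*v + g; w <- w - lr*v."""
    velocity = momentum * velocity + grad
    return w - lr * velocity, velocity

# One step on f(w) = 0.5 * w^2, whose gradient is w itself.
w, v = np.array([1.0]), np.zeros(1)
w, v = sgd_momentum_step(w, w.copy(), v)  # v becomes [1.0], w becomes [0.99]
```

In an actual PyTorch reproduction this corresponds to `torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)` with a batch size of 50.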