Federated Oriented Learning: A Practical One-Shot Personalized Federated Learning Framework

Authors: Guan Huang, Tao Shu

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on the Wildfire, Hurricane, CIFAR-10, CIFAR-100, and SVHN datasets demonstrate that FOL consistently outperforms state-of-the-art one-shot Federated Learning (OFL) methods; for example, it achieves accuracy improvements of up to 39.24% over the baselines on the Wildfire dataset. Our experimental results verify that FOL consistently outperforms its counterparts, achieving accuracy improvements of up to 39.24% on the Wildfire dataset.
Researcher Affiliation | Academia | 1Department of CSSE, Auburn University, Auburn, AL, 36849, USA. Correspondence to: Tao Shu <EMAIL>.
Pseudocode | Yes | Algorithm 1: Federated Oriented Learning (FOL); Algorithm 2: Top-K Model Selection
Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | Datasets. We evaluate FOL's performance using five diverse datasets: Wildfire (Aaba, 2023), Hurricane (Park, 2021), CIFAR-10 (Krizhevsky, 2009), CIFAR-100 (Krizhevsky, 2009), and SVHN (Netzer et al., 2011).
Dataset Splits | Yes | Following the partitioning process, each client splits its local dataset into training, validation, and testing subsets in proportions of 70%, 15%, and 15%, respectively.
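The per-client 70/15/15 split described above can be sketched as a simple index partition. This is a minimal illustration, not the authors' code; the function name, seed, and shuffling strategy are assumptions:

```python
import random

def split_indices(n_samples, train=0.70, val=0.15, seed=0):
    """Partition sample indices into train/val/test subsets (70/15/15 by default)."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)       # reproducible shuffle before splitting
    n_train = int(n_samples * train)
    n_val = int(n_samples * val)
    return (idx[:n_train],                 # 70% training
            idx[n_train:n_train + n_val],  # 15% validation
            idx[n_train + n_val:])         # 15% testing (remainder)

train_idx, val_idx, test_idx = split_indices(1000)
print(len(train_idx), len(val_idx), len(test_idx))  # 700 150 150
```

In a PyTorch pipeline the same proportions could instead be passed to `torch.utils.data.random_split`; the index-based version above is shown because it keeps the example dependency-free.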
Hardware Specification | Yes | All models are built in PyTorch and trained/tested on two GeForce RTX 4090 GPUs.
Software Dependencies | No | The paper mentions "All models are built in PyTorch" but does not specify a version number for PyTorch or any other software dependencies with version numbers.
Experiment Setup | Yes | For the Wildfire and Hurricane datasets, we use Stochastic Gradient Descent (SGD) with a momentum of 0.9, a weight decay of 0.001, a learning rate of 0.001, a batch size of 32, a patience of 20, and local training for 200 epochs. For CIFAR-10, CIFAR-100, and SVHN, we use SGD with a momentum of 0.9, a weight decay of 0.001, a learning rate of 0.01, a batch size of 128, a patience of 20, and local training for 300 epochs. In our experiments on CIFAR-10, CIFAR-100, SVHN, and the satellite datasets (Wildfire and Hurricane), we set λ_p = 0.1, γ_shared = 0.05, γ_unshared = 0.02 in Equation (5), and we set the distillation regularization weight in Equation (12) to 0.01.
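The quoted hyperparameters can be collected into a single configuration dictionary, which makes the two training regimes easy to compare. The grouping and key names below are illustrative choices, not taken from the paper's code:

```python
# Hyperparameters as quoted in the paper's experiment setup.
# Key names and grouping are illustrative, not the authors' own.
FOL_HPARAMS = {
    "wildfire_hurricane": {
        "optimizer": "SGD", "momentum": 0.9, "weight_decay": 1e-3,
        "lr": 1e-3, "batch_size": 32, "patience": 20, "epochs": 200,
    },
    "cifar10_cifar100_svhn": {
        "optimizer": "SGD", "momentum": 0.9, "weight_decay": 1e-3,
        "lr": 1e-2, "batch_size": 128, "patience": 20, "epochs": 300,
    },
    # Loss weights shared across all five datasets (Equations (5) and (12)).
    "loss_weights": {
        "lambda_p": 0.1, "gamma_shared": 0.05, "gamma_unshared": 0.02,
        "distill_reg": 0.01,
    },
}
```

A dictionary like this could then be unpacked when constructing the optimizer, e.g. `torch.optim.SGD(model.parameters(), lr=cfg["lr"], momentum=cfg["momentum"], weight_decay=cfg["weight_decay"])`.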