ConFREE: Conflict-free Client Update Aggregation for Personalized Federated Learning
Authors: Hao Zheng, Zhigang Hu, Liu Yang, Meiguang Zheng, Aikun Xu, Boyu Wang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that existing pFL algorithms are significantly enhanced when integrated with ConFREE. Extensive experimental results show that ConFREE can be seamlessly integrated with a range of personalization techniques to improve the accuracy of SOTA models. In this section, we conduct extensive experiments to evaluate our method ConFREE for personalized federated learning and demonstrate its effectiveness, efficiency, and versatility. Implementation Detail Datasets and Non-IID. We evaluated the effectiveness of ConFREE on four public datasets: CIFAR10 and CIFAR100 (Krizhevsky et al. 2009), Tiny-ImageNet (Chrabaszcz, Loshchilov, and Hutter 2017) and Flowers102 (Nilsback and Zisserman 2008). |
| Researcher Affiliation | Academia | Hao Zheng1,2, Zhigang Hu1, Liu Yang1*, Meiguang Zheng1, Aikun Xu1, Boyu Wang2* 1School of Computer Science and Engineering, Central South University, Changsha, China 2Department of Computer Science, University of Western Ontario, London, Canada |
| Pseudocode | Yes | Algorithm 1: ConFREE |
| Open Source Code | No | The paper does not explicitly provide a link to source code, nor does it state that the code will be made publicly available. It only mentions that methods are reproduced without providing the authors' own implementation code. |
| Open Datasets | Yes | We evaluated the effectiveness of ConFREE on four public datasets: CIFAR10 and CIFAR100 (Krizhevsky et al. 2009), Tiny-ImageNet (Chrabaszcz, Loshchilov, and Hutter 2017) and Flowers102 (Nilsback and Zisserman 2008). |
| Dataset Splits | Yes | All data is split into 70% training and 30% testing sets. |
| Hardware Specification | Yes | We run all experiments on a workstation equipped with two Intel Xeon Gold 6248R CPUs (48 cores), 128GB of memory and two NVIDIA 3090 Ti GPUs. |
| Software Dependencies | No | The paper mentions "PyTorch" and "SciPy" but does not specify any version numbers for these software dependencies, nor for any other key software components. |
| Experiment Setup | Yes | The local learning rate is set to 0.005, the batch size to 10, and the number of local training epochs to 1. The global communication rounds are set to 500 across all datasets until all methods empirically converge. The default number of clients is 20, with a 100% client participation rate. |
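The reported dataset split and experiment setup can be summarized in a short sketch. This is a hypothetical illustration of the hyperparameters and the 70%/30% per-client split described in the table, not the authors' implementation; the config keys and helper function names are our own.

```python
import random

# Hyperparameters as reported in the paper's experiment setup.
CONFIG = {
    "local_learning_rate": 0.005,
    "batch_size": 10,
    "local_epochs": 1,
    "global_rounds": 500,
    "num_clients": 20,
    "participation_rate": 1.0,  # 100% client participation per round
}

def split_client_data(samples, train_frac=0.7, seed=0):
    """Split one client's samples into 70% training / 30% testing,
    matching the split reported in the paper."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

train, test = split_client_data(list(range(100)))
print(len(train), len(test))  # 70 30
```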