DualGFL: Federated Learning with a Dual-Level Coalition-Auction Game
Authors: Xiaobing Chen, Xiangwei Zhou, Songyang Zhang, Mingxuan Sun
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on real-world datasets demonstrate DualGFL's effectiveness in improving both server utility and client utility. ... Experiments Datasets and Predictive Model: We use the FMNIST (Xiao, Rasul, and Vollgraf 2017), EMNIST (Cohen et al. 2017), and CIFAR10 (Krizhevsky and Hinton 2009) datasets for image classification tasks. ... Experiment Results DualGFL shows superior performance in server utility, including total score, average client quality, and average coalition quality. As shown in Table 1, DualGFL achieves improvements of at least 2.05 times in total score... Evaluating Training Dynamics DualGFL significantly improves server and client utility, as shown by key metrics obtained at the end of the training in Table 1. ... Ablation Study: Increasing the maximum coalition size \|S\|_max affects the performance of DualGFL. We conduct an ablation study on \|S\|_max in the FMNIST (0.1) setting. |
| Researcher Affiliation | Academia | Xiaobing Chen¹, Xiangwei Zhou¹, Songyang Zhang², Mingxuan Sun³. ¹Division of Electrical and Computer Engineering, Louisiana State University; ²Department of Electrical and Computer Engineering, University of Louisiana at Lafayette; ³Division of Computer Science and Engineering, Louisiana State University. EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Pareto-Optimal Partitioning (POP). Input: Preference profiles R = {R_i : i ∈ N}. Output: Pareto-optimal partition π. 1: Initialize R′ = R; 2: Initialize R″ by replacing all ≻ in R with ∼; 3: π = PerfectPartition(N, R″); 4: for i ∈ N do 5: while R″_i ≠ R′_i do 6: R*_i = Refine(R″_i, R′_i); 7: R* = (R″_1, ..., R″_(i−1), R*_i, R″_(i+1), ..., R″_n); 8: π = PerfectPartition(N, R*); 9: if π = ∅ then 11: R′_i = R″_i; else 13: R″_i = R*_i |
| Open Source Code | No | The paper does not explicitly state that source code is provided nor does it include a link to a code repository. |
| Open Datasets | Yes | Datasets and Predictive Model: We use the FMNIST (Xiao, Rasul, and Vollgraf 2017), EMNIST (Cohen et al. 2017), and CIFAR10 (Krizhevsky and Hinton 2009) datasets for image classification tasks. |
| Dataset Splits | No | The paper mentions "Dirichlet data partitioning (Panchal et al. 2023) to partition original datasets into clients' private datasets" and "set multiple data configurations: FMNIST (0.1), FMNIST (0.6), EMNIST (0.1), and CIFAR10 (0.1), where values in parentheses denote the Dirichlet parameters." While this describes how data is distributed among clients for federated learning, it does not specify the train/validation/test splits of the overall dataset or of individual client datasets. |
| Hardware Specification | No | The paper mentions "CPU architecture" when defining computation cost but does not specify the hardware (e.g., GPU/CPU models, memory) used for conducting its experiments. |
| Software Dependencies | No | The paper mentions using the "SGD optimizer" but does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions) that would be needed for replication. |
| Experiment Setup | Yes | Each experiment is conducted in T = 250 rounds, and clients update the model for I = 3 epochs using the SGD optimizer with a learning rate of 0.01 and momentum of 0.9. The batch size is set to 32. |
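The POP pseudocode quoted in the Pseudocode row follows a preference-refinement pattern: start every agent at complete indifference, then refine one agent's preferences at a time, keeping the refinement only if a "perfect" partition (one placing every agent in a top-ranked coalition) still exists. The sketch below illustrates that control flow on a toy instance. All names (`pop`, `perfect_partition`), the rank-based preference encoding, the brute-force partition search, and the example preferences are illustrative assumptions, not the paper's implementation; the paper's `PerfectPartition` and `Refine` subroutines are replaced by toy stand-ins.

```python
def partitions(items):
    """Yield every partition of `items` (Bell-number many; fine for toy sizes)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # place `first` into each existing block, or into a block of its own
        for k in range(len(part)):
            yield part[:k] + [part[k] | {first}] + part[k + 1:]
        yield part + [{first}]

def perfect_partition(agents, pref, level):
    """Return a partition where every agent sits in a coalition of its top
    indifference class under its coarsened preference, or None if impossible.
    pref[i] maps each coalition (frozenset containing i) to a strict rank
    (0 = best); level[i] = 0 means agent i is fully indifferent."""
    for part in partitions(list(agents)):
        blocks = [frozenset(b) for b in part]
        if all(min(pref[i][b], level[i]) == 0 for b in blocks for i in b):
            return blocks
    return None

def pop(agents, pref):
    """Refine each agent's preference stepwise while a perfect partition survives."""
    level = {i: 0 for i in agents}
    pi = perfect_partition(agents, pref, level)  # exists under full indifference
    for i in agents:
        while level[i] < max(pref[i].values()):
            trial = dict(level)
            trial[i] += 1                        # one-step Refine toward strict prefs
            candidate = perfect_partition(agents, pref, trial)
            if candidate is None:
                break                            # agent i cannot be made better off
            level, pi = trial, candidate         # accept the refinement
    return pi

# Toy instance: agents 0 and 1 most prefer pairing up; agent 2 prefers being alone.
pref = {
    0: {frozenset({0, 1}): 0, frozenset({0, 1, 2}): 1, frozenset({0}): 2, frozenset({0, 2}): 3},
    1: {frozenset({0, 1}): 0, frozenset({1}): 1, frozenset({1, 2}): 2, frozenset({0, 1, 2}): 3},
    2: {frozenset({2}): 0, frozenset({0, 2}): 1, frozenset({1, 2}): 2, frozenset({0, 1, 2}): 3},
}
result = pop([0, 1, 2], pref)
print(result)
```

On this instance every agent can be granted its top choice simultaneously, so the loop refines all three agents fully and returns the partition {{0, 1}, {2}}.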
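The Dataset Splits row quotes "Dirichlet data partitioning (Panchal et al. 2023)" for distributing data across clients. A common construction, sketched below with only the standard library, draws per-class client proportions from a symmetric Dirichlet(α) (via normalized Gamma draws) and slices each class accordingly; smaller α yields more heterogeneous clients. This is a sketch of the usual technique under that assumption, not the paper's exact procedure, and `dirichlet_partition` is a hypothetical name.

```python
import random

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Assign sample indices to clients using per-class Dirichlet(alpha) proportions."""
    rng = random.Random(seed)
    clients = [[] for _ in range(num_clients)]
    for c in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == c]
        rng.shuffle(idx)
        # Dirichlet(alpha, ..., alpha) sample via normalized Gamma(alpha, 1) draws
        gammas = [rng.gammavariate(alpha, 1.0) for _ in range(num_clients)]
        total = sum(gammas)
        start = 0
        for k in range(num_clients):
            if k == num_clients - 1:
                end = len(idx)  # last client takes the rounding remainder
            else:
                end = min(len(idx), start + round(gammas[k] / total * len(idx)))
            clients[k].extend(idx[start:end])
            start = end
    return clients

# Example: 100 samples over 2 classes split across 4 clients, alpha = 0.1
labels = [0] * 60 + [1] * 40
parts = dirichlet_partition(labels, num_clients=4, alpha=0.1, seed=1)
print([len(p) for p in parts])
```

The per-class contiguous slices guarantee the result is an exact partition: every index is assigned to exactly one client regardless of rounding.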