LiD-FL: Towards List-Decodable Federated Learning
Authors: Hong Liu, Liren Shan, Han Bao, Ronghui You, Yuhao Yi, Jiancheng Lv
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results, including image classification tasks with both convex and nonconvex losses, demonstrate that the proposed algorithm can withstand the malicious majority under various attacks. |
| Researcher Affiliation | Academia | 1College of Computer Science, Sichuan University 2Toyota Technological Institute at Chicago 3School of Statistics and Data Science, Nankai University 4Institute of Clinical Pathology, West China Hospital, Sichuan University |
| Pseudocode | Yes | Algorithm 1 shows the pseudocode of LiD-FL. Initialization: For the server, the global model list L0, the number of global iterations T; for each client cj: the number of local training rounds τ, batch size b, learning rate ℓ. |
| Open Source Code | Yes | The implementation is based on PRIOR (https://github.com/BDeMo/pFedBreD_public). |
| Open Datasets | Yes | We conduct experiments on two datasets, FEMNIST (Caldas et al. 2019) and CIFAR-10 (Krizhevsky 2009). |
| Dataset Splits | Yes | Furthermore, we split the local data on each client into a training set, a validation set and a test set with a ratio of 36 : 9 : 5. |
| Hardware Specification | No | No specific hardware details (GPU models, CPU models, etc.) are mentioned in the paper. |
| Software Dependencies | No | The paper mentions that 'The implementation is based on LEAF' and 'The implementation is based on PRIOR' but does not provide specific version numbers for any software, libraries, or frameworks used. |
| Experiment Setup | No | Detailed model architecture and hyperparameter settings are deferred to the appendix. |
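The 36 : 9 : 5 per-client split reported above is a concrete, reproducible step. The following is a minimal sketch of how such a split could be implemented; the function name and interface are illustrative assumptions, not taken from the paper's code.

```python
# Hedged sketch: split one client's local data into train/val/test
# at the 36:9:5 ratio reported in the paper. `split_client_data`
# is a hypothetical helper, not from the authors' implementation.
def split_client_data(samples, ratio=(36, 9, 5)):
    total = sum(ratio)
    n = len(samples)
    # Integer proportions for the first two parts; the remainder
    # goes to the test set so no sample is dropped.
    n_train = n * ratio[0] // total
    n_val = n * ratio[1] // total
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, val, test

# Example: 50 local samples split 36/9/5.
data = list(range(50))
train, val, test = split_client_data(data)
print(len(train), len(val), len(test))  # 36 9 5
```

In practice the samples would typically be shuffled with a fixed seed before slicing so the split is both random and reproducible.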