FedECADO: A Dynamical System Model of Federated Learning

Authors: Aayushya Agarwal, Gauri Joshi, Lawrence Pileggi

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate FedECADO's performance by training multiple models distributed across multiple clients. The FedECADO workflow is shown in Algorithm 2 in Appendix C. We benchmark our approach against established federated learning methods designed for heterogeneous computation, including FedProx (Li et al., 2020), FedNova (Wang et al., 2020), FedExp (Jhunjhunwala et al., 2023), FedDecorr (Shi et al., 2022), and FedRS (Li and Zhan, 2021). Our experiments focus on two key challenges: non-IID data distribution and asynchronous client training.
Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, USA. Correspondence to: Aayushya Agarwal <EMAIL>.
Pseudocode | Yes | The FedECADO workflow is shown in Algorithm 2 in Appendix C. We benchmark our approach against established federated learning methods designed for heterogeneous computation, including FedProx (Li et al., 2020), FedNova (Wang et al., 2020), FedExp (Jhunjhunwala et al., 2023), FedDecorr (Shi et al., 2022), and FedRS (Li and Zhan, 2021). Algorithm 1: Adaptive Time Stepping Method. Algorithm 2: FedECADO Central Update.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide any links to a code repository.
Open Datasets | Yes | We evaluate FedECADO's performance by training a VGG model (Simonyan and Zisserman, 2014) on the non-IID CIFAR-10 (Krizhevsky et al., 2009) dataset distributed across 100 clients. We train the VGG11 model (Simonyan and Zisserman, 2014) on a CIFAR-10 dataset (Krizhevsky et al., 2009)... We evaluate FedECADO on the larger ResNet34 model trained on the CIFAR-100 dataset... Scaling to Other Datasets and Models: We demonstrate FedECADO's ability to scale across additional datasets, including Sentiment140 and TinyImageNet.
Dataset Splits | No | The paper describes how data is distributed across clients (e.g., via a non-IID Dirichlet distribution) and the number of clients, but it does not explicitly provide training, validation, or test splits for the overall datasets used (e.g., CIFAR-10, CIFAR-100) in terms of percentages or sample counts.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for running experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | To model realistic scenarios, we set an active participation ratio of 0.1, meaning only 10 clients actively participate in each communication round. The data distribution adheres to a non-IID Dirichlet distribution (Dir_16(0.1))... Using each method, we train for 100 epochs... Each client exhibits a different learning rate, lr_i, and number of epochs, e_i, whose values are sampled from a uniform distribution: lr_i ~ U[10^-4, 10^-3], e_i ~ U[1, 10]. ...training the ResNet34 model on the CIFAR-100 dataset... for 200 epochs... training a ResNet-18 on the TinyImageNet dataset... for 60 epochs... training an LSTM model... on the Sentiment140 dataset... for 10 epochs.
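The non-IID Dirichlet split mentioned in the rows above (concentration 0.1, data spread across 100 clients) can be sketched as follows. This is a minimal illustration of the standard Dirichlet-partition technique, not the authors' code; the function name and every detail beyond the concentration parameter are assumptions.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.1, seed=0):
    """Assign sample indices to clients with a per-class Dirichlet prior.

    Smaller alpha -> more skewed (more non-IID) label distributions.
    Returns a list of index lists, one per client.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.flatnonzero(labels == cls))
        # Proportion of this class that each client receives.
        props = rng.dirichlet(alpha * np.ones(num_clients))
        # Turn proportions into split points over this class's samples.
        splits = (np.cumsum(props)[:-1] * len(cls_idx)).astype(int)
        for client, part in enumerate(np.split(cls_idx, splits)):
            client_indices[client].extend(part.tolist())
    return client_indices
```

For CIFAR-10 one would pass the 10-way label array and `num_clients=100`; with alpha = 0.1 most clients end up dominated by a handful of classes, which is the heterogeneity regime the paper targets.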
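The heterogeneous-client setup in the Experiment Setup row (10% participation per round, lr_i ~ U[10^-4, 10^-3], integer local epochs e_i ~ U[1, 10]) can be reproduced with a short sampling sketch. All names below are illustrative assumptions; only the numeric ranges come from the paper's description.

```python
import numpy as np

def sample_round_config(num_clients=100, participation=0.1, seed=0):
    """Sample one communication round: which clients are active and
    their heterogeneous local hyperparameters."""
    rng = np.random.default_rng(seed)
    num_active = int(num_clients * participation)
    active = rng.choice(num_clients, size=num_active, replace=False)
    return [
        {
            "client": int(c),
            "lr": float(rng.uniform(1e-4, 1e-3)),   # lr_i ~ U[1e-4, 1e-3]
            "epochs": int(rng.integers(1, 11)),     # e_i ~ U[1, 10]
        }
        for c in active
    ]
```

With the paper's settings (100 clients, ratio 0.1) each call yields 10 distinct active clients, each with its own learning rate and local epoch count, which is the source of the asynchronous, heterogeneous computation the benchmarks stress.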