Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Momentum-Driven Adaptivity: Towards Tuning-Free Asynchronous Federated Learning

Authors: Wenjing Yan, Xiangyu Zhong, Xiaolu Wang, Ying-Jun Angela Zhang

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct comprehensive empirical evaluations on deep learning tasks using real-world datasets. The numerical results demonstrate that AdaMasFL consistently outperforms state-of-the-art AFL methods in runtime efficiency and exhibits exceptional robustness across diverse learning rate configurations and system conditions.
Researcher Affiliation | Academia | (1) Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong SAR; (2) Software Engineering Institute, East China Normal University, China. Correspondence to: Xiangyu Zhong <EMAIL>, Xiaolu Wang <EMAIL>.
Pseudocode | Yes | Algorithm 1: MasFL, Procedures at Central Server; Algorithm 2: MasFL, Procedures at Client i; Algorithm 3: AdaMasFL, Procedures at Central Server; Algorithm 4: AdaMasFL, Procedures at Client i
Open Source Code | No | The paper does not contain an explicit statement about the release of source code or a link to a code repository.
Open Datasets | Yes | We evaluate the performance of our algorithms on the image classification task using two real-world datasets: CIFAR-10 (Li et al., 2017) and FMNIST (Xiao et al., 2017).
Dataset Splits | No | The paper describes how data is distributed across clients (i.i.d., or non-i.i.d. via a Dirichlet distribution Dir(α=0.5)) but does not explicitly state the training/validation/test splits for the CIFAR-10 or FMNIST datasets themselves. It implicitly uses test accuracy but does not specify the split ratios used.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory amounts) used for running the experiments. It only describes the simulation setup for asynchronous conditions.
Software Dependencies | No | The paper mentions 'stochastic gradient descent (SGD) as the optimization algorithm' and a 'convolutional neural network (CNN)' or 'ResNet-18 architecture'. However, it does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow, or CUDA versions) that would be needed to replicate the experiment.
Experiment Setup | Yes | Our experiments consider a federated learning system with N = 100 clients cooperating to train a shared model. At each global communication round, a fraction of 0.1 clients are randomly selected to participate, resulting in S = 10 active clients per round. Each selected client trains locally for 2 epochs with a batch size of 100 using stochastic gradient descent (SGD) as the optimization algorithm. The concurrency level is set to Mc = 20, meaning that the server can aggregate results from up to 20 clients concurrently. The delay time for each client is sampled from a uniform distribution U(0, Tmax), where Tmax = 20 seconds by default. We run all experiments for a total of T = 600 global communication rounds. For FMNIST, we utilize a convolutional neural network (CNN) consisting of three convolutional layers and two fully connected layers. For CIFAR-10, we adopt a ResNet-18 architecture (He et al., 2016).
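Although no code is released, the Experiment Setup and Dataset Splits rows pin down enough parameters to sketch the simulation scaffolding. The following is a minimal, hypothetical Python sketch of that setup — the Dir(α=0.5) non-i.i.d. client partition, per-round client sampling, and uniform client delays. All names and structure are illustrative assumptions, not the authors' code; only the constants come from the paper's quoted setup.

```python
import random

# Constants quoted from the paper's experiment setup.
N_CLIENTS = 100                      # N: total clients
S = int(0.1 * N_CLIENTS)             # 10% participate per round -> S = 10
LOCAL_EPOCHS, BATCH_SIZE = 2, 100    # local SGD configuration
CONCURRENCY = 20                     # Mc: max clients aggregated concurrently
T_MAX = 20.0                         # client delay ~ U(0, T_MAX) seconds
ROUNDS = 600                         # T: global communication rounds
ALPHA = 0.5                          # Dirichlet concentration (non-i.i.d.)

def dirichlet_weights(n, alpha, rng):
    """Draw one sample from Dirichlet(alpha, ..., alpha) via Gamma draws."""
    draws = [rng.gammavariate(alpha, 1.0) for _ in range(n)]
    total = sum(draws)
    return [d / total for d in draws]

def partition_by_class(labels, n_clients, alpha, rng):
    """Assign sample indices to clients with a per-class Dirichlet prior,
    producing heterogeneous local datasets as in Dir(alpha=0.5)."""
    clients = [[] for _ in range(n_clients)]
    n_classes = max(labels) + 1
    for c in range(n_classes):
        idx = [i for i, y in enumerate(labels) if y == c]
        rng.shuffle(idx)
        weights = dirichlet_weights(n_clients, alpha, rng)
        start = 0
        for k, w in enumerate(weights):
            # Give the last client the remainder so every index is assigned.
            end = len(idx) if k == n_clients - 1 else start + round(w * len(idx))
            clients[k].extend(idx[start:end])
            start = end
    return clients

rng = random.Random(0)
labels = [rng.randrange(10) for _ in range(5000)]  # stand-in for CIFAR-10 labels
clients = partition_by_class(labels, N_CLIENTS, ALPHA, rng)

# One simulated round: sample S active clients and draw their delays.
active = rng.sample(range(N_CLIENTS), S)
delays = [rng.uniform(0.0, T_MAX) for _ in active]
```

With small α (here 0.5), each class's mass concentrates on a few clients, which is what makes the local datasets non-i.i.d.; larger α would approach the i.i.d. setting the paper also evaluates.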