TRAIL: Trust-Aware Client Scheduling for Semi-Decentralized Federated Learning

Authors: Gangqiang Hu, Jianfeng Lu, Jianmin Han, Shuqin Cao, Jing Liu, Hao Fu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, our experiments conducted on real-world datasets demonstrate that TRAIL outperforms state-of-the-art baselines, achieving an improvement of 8.7% in test accuracy and a reduction of 15.3% in training loss.
Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Zhejiang Normal University, China; (2) School of Computer Science and Technology, Wuhan University of Science and Technology, China; (3) Key Laboratory of Social Computing and Cognitive Intelligence (Dalian University of Technology), Ministry of Education, China; (4) Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System (Wuhan University of Science and Technology), China
Pseudocode | Yes | Algorithm 1: Greedy Algorithm for Solving the Optimization Problem
Open Source Code | No | The paper does not explicitly state that source code for the described methodology is publicly available, nor does it provide a link to a code repository or mention code in supplementary materials.
Open Datasets | Yes | Four standard real-world datasets, i.e. MNIST (Rana, Kabir, and Sobur 2023), EMNIST (Majeed et al. 2024), SVHN (Pradhan et al. 2024), and CIFAR-10 (Aslam and Nassif 2023), are utilized for performance evaluation.
Dataset Splits | No | The paper states that "each client assigned 1,000 local training samples" and that "10%, 30%, and 50% of the clients gradually experience degradation", but it does not provide the train/validation/test splits for the datasets themselves (MNIST, EMNIST, CIFAR-10, SVHN) that would be needed for reproduction.
Hardware Specification | No | The paper does not describe the hardware used for its experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions software components ("CNN architecture", "SGD with a momentum of 0.05", "ReLU as the activation function", "cross-entropy loss") but does not provide version numbers for any libraries, frameworks, or programming languages used.
Experiment Setup | Yes | The batch size is 32, balancing computational efficiency and model performance. Each client performs 100 local training rounds (T1 = 100) before aggregation, with 100 inter-cluster aggregations (T2 = 100) to synchronize updates across edge servers. The learning rate (η) is set to 0.01 with SGD momentum 0.05, ensuring stable and efficient optimization. The model uses ReLU as the activation function and cross-entropy loss for classification.
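The reported setup implies a two-level schedule: T1 = 100 local SGD rounds per client, then an inter-cluster aggregation, repeated T2 = 100 times. A minimal sketch of that schedule is below; the scalar toy model, the quadratic per-client objectives, and the plain-average aggregation are illustrative assumptions (the paper uses a CNN and trust-aware scheduling), while the hyperparameter values (η = 0.01, momentum 0.05, T1, T2) come from the paper.

```python
# Two-level federated schedule: T1 local rounds per client,
# then T2 rounds of averaging across clients/edge servers.
T1 = 100          # local training rounds per client (from the paper)
T2 = 100          # inter-cluster aggregations (from the paper)
ETA = 0.01        # learning rate (from the paper)
MOMENTUM = 0.05   # SGD momentum (from the paper)

def local_sgd(w, grad_fn, rounds=T1, eta=ETA, momentum=MOMENTUM):
    """Run `rounds` SGD-with-momentum steps on a scalar parameter w."""
    v = 0.0
    for _ in range(rounds):
        v = momentum * v - eta * grad_fn(w)
        w = w + v
    return w

def aggregate(weights):
    """Plain average across clients (stand-in for the paper's aggregation)."""
    return sum(weights) / len(weights)

# Toy per-client objective f_i(w) = (w - t_i)^2, gradient 2 * (w - t_i).
targets = [1.0, 2.0, 3.0]
w_global = 0.0
for _ in range(T2):
    local_models = [local_sgd(w_global, lambda w, t=t: 2 * (w - t))
                    for t in targets]
    w_global = aggregate(local_models)
# w_global converges to the average of the client optima (2.0 here).
```

The nested loop mirrors the paper's schedule only in structure; the real system clusters clients under edge servers and weights aggregation by trust, which this sketch omits.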