FedCross: Intertemporal Federated Learning Under Evolutionary Games

Authors: Jianfeng Lu, Ying Zhang, Riheng Jia, Shuqin Cao, Jing Liu, Hao Fu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, experimental results validate the theoretical soundness of FedCross and demonstrate its significant reduction in communication overhead. Numerical simulations support the theoretical analysis and verify the validity of the proposed framework, demonstrating its practical effectiveness under various conditions.
Researcher Affiliation | Academia | 1 School of Computer Science and Technology, Wuhan University of Science and Technology, China; 2 Key Laboratory of Social Computing and Cognitive Intelligence (Dalian University of Technology), Ministry of Education, China; 3 School of Computer Science and Technology, Zhejiang Normal University, China; 4 Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan University of Science and Technology, China. EMAIL, EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: Online Migrate Strategies For Cross Areas; Algorithm 2: Optimized Base Station Selection with Payment Calculation
Open Source Code | No | The paper contains no explicit statement of code release and no link to a code repository for the methodology described.
Open Datasets | No | The paper mentions 'The simulation data is shown Table 1.' and describes simulation parameters such as 'Total number of Servers', 'Total number of Areas', 'Total number of users', 'Congestion coefficient', 'Reward', and 'Momentum'. However, it does not refer to a publicly available dataset used for training or evaluation in the machine-learning sense, nor does it provide concrete access information for such a dataset.
Dataset Splits | No | The paper does not describe any training/validation/test dataset splits. The 'simulation data' in Table 1 refers to simulation parameters, not a dataset with defined splits for model evaluation.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. It only mentions using PyTorch for implementation.
Software Dependencies | No | The paper cites 'PyTorch (Paszke et al. 2019)' but does not provide a specific version number for PyTorch or for any other software dependency.
Experiment Setup | No | The 'Experimental Setups' section and 'Table 1: Simulation Parameters' list parameters such as 'Total number of Servers', 'Total number of Areas', 'Total number of users', 'Congestion coefficient', 'Reward', and 'Momentum'. These are environment parameters for the simulation framework rather than hyperparameters (e.g., learning rate, batch size, number of epochs) or system-level training settings for a machine learning model.
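To make the distinction above concrete, the parameters named in Table 1 could be collected in a single configuration object. The sketch below is purely illustrative: the field names follow the parameters listed in the paper, but every value is a hypothetical placeholder, not a setting reported by the authors.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SimulationParams:
    """Hypothetical container for the Table 1 simulation parameters.

    Field names mirror the parameters the paper lists; the defaults
    below are illustrative placeholders, not the paper's values.
    """
    num_servers: int = 10           # Total number of Servers
    num_areas: int = 5              # Total number of Areas
    num_users: int = 100            # Total number of users
    congestion_coeff: float = 0.5   # Congestion coefficient
    reward: float = 1.0             # Reward
    momentum: float = 0.9           # Momentum


# Instantiate with defaults, or override individual environment settings.
params = SimulationParams(num_users=200)
```

Note that none of these are model-training hyperparameters (learning rate, batch size, epochs), which is exactly why the row above records "No" for Experiment Setup.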