Federated Foundation Models on Heterogeneous Time Series

Authors: Shengchao Chen, Guodong Long, Jing Jiang, Chengqi Zhang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on benchmark datasets demonstrate the effectiveness of the proposed federated learning approach. The newly learned time series foundation models achieve superior generalization capabilities on cross-domain time series analysis tasks. ... The main results of pre-training are shown in Table 1. ... We conducted additional ablation experiments to analyze the impact of specific components.
Researcher Affiliation | Academia | ¹Australian Artificial Intelligence Institute, University of Technology Sydney; ²Department of Data Science and Artificial Intelligence, The Hong Kong Polytechnic University
Pseudocode | No | The paper describes the model architecture and optimization strategies using mathematical equations and descriptive text, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code: https://github.com/shengchaochen82/FFTS
Open Datasets | Yes | Extensive experiments on benchmark datasets demonstrate the effectiveness of the proposed federated learning approach. ... Specifically, we empirically selected 18 time series datasets (see Appendix for details) from different domains/sources to complete this process ... To evaluate performance, we adopt popular benchmarks and experimental settings following (Jin et al. 2023), including ETT (ETTh1, ETTh2, ETTm1, ETTm2), Weather, and Illness datasets ... We conduct experiments on five popular real-world datasets, including ETT (ETTh1, ETTh2, ETTm1, ETTm2) and Weather ... We benchmark FFTS against five widely utilized datasets: SMD, MSL, SMAP, SWaT, and PSM
Dataset Splits | No | The paper mentions few-shot scenarios using 5% or 10% of the data for fine-tuning, zero-shot scenarios, and an input length (look-back window) of 512, but it does not explicitly provide the training, validation, and test splits for the main experiments, instead referring to "popular benchmarks and experimental setting following (Jin et al. 2023)".
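Since the paper reports few-shot fine-tuning on 5% or 10% of the data with a look-back window of 512 but leaves the exact split procedure to (Jin et al. 2023), the following is a minimal sketch of how such a few-shot subset is commonly constructed: take the leading fraction of the training series and slice it into (input, target) windows. The function name, the `horizon` value, and the leading-prefix choice are assumptions for illustration, not details from the paper.

```python
import numpy as np

def few_shot_windows(train_series, fraction=0.10, lookback=512, horizon=96):
    """Take the leading `fraction` of a training series and slice it into
    (input, target) windows for few-shot fine-tuning.

    Illustrative sketch only: `horizon` and the prefix-based subsetting are
    assumptions; the paper defers split details to (Jin et al. 2023).
    """
    n = int(len(train_series) * fraction)   # keep first 5% or 10% of training data
    subset = train_series[:n]
    windows = []
    for start in range(len(subset) - lookback - horizon + 1):
        x = subset[start:start + lookback]             # look-back window (L = 512)
        y = subset[start + lookback:start + lookback + horizon]  # forecast target
        windows.append((x, y))
    return windows

series = np.arange(10_000, dtype=float)  # stand-in for one univariate channel
pairs = few_shot_windows(series)         # 10% subset -> sliding windows
```

A real pipeline would apply this per channel and per dataset after the standard train/val/test partition of the benchmark.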
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, or memory specifications) used for running the experiments.
Software Dependencies | No | The paper does not specify any software names with version numbers (e.g., programming languages, libraries, or frameworks) used for implementing the experiments.
Experiment Setup | Yes | We adopted a uniform pre-training setting of L = 512 for each client, and Lm ∈ {8, 16, 24}, rm ∈ {15%, 25%, 50%}, and k = 3 to evaluate its performance across different settings. ... During the evaluation phase, we used a uniform configuration with [Lm = 16, rm = 35%]. ... (b) Sensitivity of k. ... (d) Impact of λ.
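The reported configuration (input length L = 512, masked-segment length Lm, mask ratio rm) suggests segment-level masked modeling during pre-training. The sketch below shows one plausible way to realize the evaluation setting [Lm = 16, rm = 35%] as a random contiguous-segment mask; the function name and the non-overlapping-segment scheme are assumptions, since the paper's exact masking procedure is not quoted here.

```python
import numpy as np

def make_segment_mask(series_len=512, seg_len=16, mask_ratio=0.35, seed=None):
    """Randomly mask contiguous, non-overlapping segments of a time series.

    Illustrative only: defaults follow the evaluation configuration reported
    above (L = 512, Lm = 16, rm = 35%); the paper's actual masking strategy
    may differ.
    """
    rng = np.random.default_rng(seed)
    n_segments = series_len // seg_len                   # 512 / 16 = 32 segments
    n_masked = int(round(n_segments * mask_ratio))       # round(32 * 0.35) = 11
    masked_ids = rng.choice(n_segments, size=n_masked, replace=False)
    mask = np.zeros(series_len, dtype=bool)
    for i in masked_ids:
        mask[i * seg_len:(i + 1) * seg_len] = True       # mask whole segment
    return mask

mask = make_segment_mask(seed=0)   # boolean mask over the 512-step window
```

During pre-training, the masked positions would be reconstructed by the model, with the loss computed only on the masked segments.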