STEM-LTS: Integrating Semantic-Temporal Dynamics in LLM-driven Time Series Analysis
Authors: Zhe Zhao, Pengkun Wang, Haibin Wen, Shuang Wang, Liheng Yu, Yang Wang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on various real-world datasets, we demonstrate that STEM-LTS achieves significant improvements in prediction accuracy, robustness to noise, and interpretability. Our work not only advances LLM-based time series analysis but also offers new perspectives on handling complex temporal data. ... We rigorously evaluate STEM-LTS through extensive experiments on diverse real-world datasets, demonstrating its superiority in temporal dependency modeling and semantic pattern alignment. ... Table 1: Transfer learning of long-term forecasting results on time series benchmark datasets. ... Table 2: SMAPE results of EBITDA from TETS and GDELT. |
| Researcher Affiliation | Academia | 1University of Science and Technology of China, Hefei 230026, China 2Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou 215123, China 3City University of Hong Kong 4The Hong Kong University of Science and Technology (Guangzhou) 5Key Laboratory of Precision and Intelligent Chemistry, USTC |
| Pseudocode | No | The paper describes the methodology using mathematical equations and descriptive text, such as equations (1) through (18) and sections like "Multi-scale Time Series Decomposition and Regularization", "Prompt-based Semantic Alignment", and "Unified Loss Function with Dynamic Weighting". There are no explicitly labeled pseudocode or algorithm blocks present in the paper. |
| Open Source Code | Yes | The code is available at https://github.com/DataLabatom/STEM-LTS. |
| Open Datasets | Yes | We conduct experiments on three complementary datasets: EBITDA for corporate financial metrics with long-term trends and cyclical patterns (Lai et al. 2018), GDELT for global event dynamics with complex temporal dependencies (Zhou et al. 2021), and standard benchmarks (ECL, Traffic, Weather, Ettm1/2, Etth1/2) representing diverse domains and temporal complexities (Wu et al. 2021; Zhou et al. 2021). |
| Dataset Splits | No | The paper mentions using specific datasets (EBITDA, GDELT, ECL, Traffic, Weather, Ettm1/2, Etth1/2) and prediction lengths (O ∈ {96, 192, 336, 720}) but does not explicitly state the train/validation/test splits (e.g., percentages or sample counts) for these datasets in the main text. |
| Hardware Specification | Yes | All experiments were conducted on a server equipped with two NVIDIA Tesla V100-PCIE-16GB GPUs. |
| Software Dependencies | No | STEM-LTS was implemented using the PyTorch deep learning framework (Paszke et al. 2019), with TEMPO as the backbone network. For each dataset, we first loaded the pre-trained TEMPO model parameters into STEM-LTS and then finetuned it using the Adam optimizer (Kingma and Ba 2015) and Cosine Annealing LR scheduler (Loshchilov and Hutter 2017) for 10 epochs. While PyTorch, Adam, and Cosine Annealing LR are mentioned, no version numbers are given for any of these software components. |
| Experiment Setup | Yes | For each dataset, we first loaded the pre-trained TEMPO model parameters into STEM-LTS and then finetuned it using the Adam optimizer (Kingma and Ba 2015) and Cosine Annealing LR scheduler (Loshchilov and Hutter 2017) for 10 epochs. The initial learning rate was set to 0.001, with a maximum of 20 training iterations per epoch and a minimum learning rate of 1e-8. These hyperparameters were optimized through grid search and cross-validation. |
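The schedule reported in the Experiment Setup row (initial learning rate 0.001, Cosine Annealing down to a minimum of 1e-8, over 10 epochs of at most 20 iterations each) can be sketched with the standard cosine-annealing formula. This is a minimal illustration, not the authors' code; the helper name `cosine_annealed_lr` is ours, and the formula matches the Loshchilov and Hutter (2017) schedule as implemented by PyTorch's `CosineAnnealingLR`.

```python
import math

BASE_LR, MIN_LR = 1e-3, 1e-8       # reported initial and minimum learning rates
EPOCHS, ITERS_PER_EPOCH = 10, 20   # reported budget: 10 epochs, max 20 iterations each
T_MAX = EPOCHS * ITERS_PER_EPOCH   # total annealing steps

def cosine_annealed_lr(step: int) -> float:
    """Cosine Annealing LR: decays from BASE_LR at step 0 to MIN_LR at step T_MAX."""
    return MIN_LR + 0.5 * (BASE_LR - MIN_LR) * (1 + math.cos(math.pi * step / T_MAX))

# Learning rate at each fine-tuning step, from 1e-3 down to the 1e-8 floor.
schedule = [cosine_annealed_lr(t) for t in range(T_MAX + 1)]
```

Under this schedule the learning rate decreases monotonically across the 200 reported fine-tuning steps, reaching its 1e-8 floor exactly at the end of epoch 10.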