Amplifier: Bringing Attention to Neglected Low-Energy Components in Time Series Forecasting

Authors: Jingru Fei, Kun Yi, Wei Fan, Qi Zhang, Zhendong Niu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on eight time series forecasting benchmarks consistently demonstrate our model's superiority in both effectiveness and efficiency compared to state-of-the-art methods." (Section 5, Experiments)
Researcher Affiliation | Academia | 1 Beijing Institute of Technology, 2 State Information Center of China, 3 University of Oxford, 4 Tongji University (author emails redacted)
Pseudocode | No | The paper describes the methodology through textual explanations and mathematical formulations, but it does not include any explicitly labeled pseudocode or algorithm blocks. Figure 2 is an architectural diagram, not pseudocode.
Open Source Code | Yes | Code: https://github.com/aikunyi/Amplifier
Open Datasets | Yes | "We conduct extensive experiments on eight popular datasets, including ETT datasets (Zhou et al. 2021), Electricity (Wu et al. 2021), Exchange (Lai et al. 2018), Traffic (Sen, Yu, and Dhillon 2019) and Weather (Wu et al. 2021)."
Dataset Splits | No | The paper mentions the lookback window size and prediction lengths, but it does not explicitly provide training/validation/test splits (e.g., percentages or a splitting methodology). Appendix C is cited for further dataset details, but it is not included in the provided text.
Hardware Specification | Yes | "All experiments in this study were carried out using PyTorch on one single NVIDIA RTX 3070 GPU with 8GB."
Software Dependencies | No | The paper mentions PyTorch but does not specify its version number or any other software dependencies with version numbers.
Experiment Setup | Yes | "We use Mean Squared Error (MSE) as the loss function and report the results using both MSE and Mean Absolute Error (MAE) as evaluation metrics. We set the lookback window size L as 96 and the prediction length ω ∈ {96, 192, 336, 720}."
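
The evaluation protocol quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration of the reported metrics and settings (MSE loss, MSE/MAE evaluation, lookback L = 96, horizons ω ∈ {96, 192, 336, 720}); the batch size and the random data here are hypothetical, not taken from the paper.

```python
import numpy as np

def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean Squared Error, the paper's loss function and first metric."""
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean Absolute Error, the paper's second evaluation metric."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Settings reported in the paper:
LOOKBACK = 96                     # lookback window size L
HORIZONS = [96, 192, 336, 720]    # prediction lengths ω

# Hypothetical evaluation loop over synthetic forecasts (batch of 32 series).
rng = np.random.default_rng(0)
for h in HORIZONS:
    y_true = rng.standard_normal((32, h))
    y_pred = y_true + 0.1 * rng.standard_normal((32, h))  # stand-in model output
    print(f"ω={h}: MSE={mse(y_true, y_pred):.4f}, MAE={mae(y_true, y_pred):.4f}")
```

In practice the model would be trained by minimizing the MSE between predicted and ground-truth windows, then scored with both metrics on the held-out test horizon.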