FreDF: Learning to Forecast in the Frequency Domain
Authors: Hao Wang, Lichen Pan, Yuan Shen, Zhichao Chen, Degui Yang, Yifei Yang, Sen Zhang, Xinggao Liu, Haoxuan Li, Dacheng Tao
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To demonstrate the efficacy of FreDF, six aspects are empirically investigated: 1. Performance: Does FreDF work? Section 4.2 compares FreDF with state-of-the-art baselines on public datasets; the long-term forecasting task is investigated in Section 4.2, while the short-term forecasting and imputation tasks are explored in Appendix E.1. 2. Mechanism: How does it work? Section 4.3 offers an ablation study that dissects the contributions of FreDF's individual components, elucidating their roles in enhancing forecasting accuracy. ... Table 1: Long-term forecasting performance. |
| Researcher Affiliation | Collaboration | 1Department of Control Science and Engineering, Zhejiang University 2School of Automation, Central South University 3Department of Computer Science and Engineering, Shanghai Jiao Tong University 4Trust and Safety Team, TikTok Sydney, ByteDance Inc. 5Center for Data Science, Peking University 6Generative AI Lab, College of Computing and Data Science, Nanyang Technological University |
| Pseudocode | No | The paper describes methods and a DML approach in Appendix A.2, but does not present any formal pseudocode or algorithm blocks with structured code-like formatting. For example, the DML implementation steps are described as 'Orthogonalization. This step involves...' and 'Regression. This step involves...' |
| Open Source Code | Yes | Code is available at https://github.com/Master-PLC/FreDF. |
| Open Datasets | Yes | The datasets for long-term forecast and imputation include ETT (4 subsets), ECL, Traffic, Weather and PEMS (Liu et al., 2024). The dataset for short-term forecast is M4 following Wu et al. (2023). Each dataset is divided chronologically for training, validation and test. Detailed dataset descriptions are provided in Appendix D.1. |
| Dataset Splits | Yes | Each dataset is divided chronologically for training, validation and test. Detailed dataset descriptions are provided in Appendix D.1. ... The datasets are chronologically divided into training, validation, and test sets following the protocols outlined in (Qiu et al., 2024; Liu et al., 2024). ... Table 4: Dataset description. Train / validation / test |
| Hardware Specification | Yes | Experiments are conducted on Intel(R) Xeon(R) Platinum 8383C CPUs and NVIDIA RTX 3090 GPUs. |
| Software Dependencies | No | The paper mentions that "Models are trained using the Adam optimizer (Kingma & Ba, 2015)" and refers to the "iTransformer repository (Liu et al., 2024)" for reproducing baselines. However, it does not specify version numbers for any programming languages, libraries, or software packages used in the authors' own implementation of FreDF; Adam is an algorithm, not versioned software. |
| Experiment Setup | Yes | Models are trained using the Adam optimizer (Kingma & Ba, 2015), with learning rates selected from the set {1e-3, 5e-4, 1e-4} to minimize the MSE loss. The training is limited to a maximum of 10 epochs, incorporating an early stopping mechanism activated upon a lack of improvement in validation performance over 3 epochs. ... Finetuning the learning rate is essential to handle the different magnitudes of the temporal and frequency losses. |
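The setup row above describes two reproducible ingredients: a loss that mixes temporal and frequency-domain terms (whose differing magnitudes motivate the learning-rate tuning), and early stopping with a patience of 3 epochs. A minimal sketch of both is below; the blending weight `alpha` and the `EarlyStopping` helper are illustrative names, not the paper's actual implementation, and the exact form of FreDF's frequency loss is given in the paper itself.

```python
import numpy as np

def blended_loss(pred, target, alpha=0.5):
    """Sketch of a FreDF-style objective: blend temporal MSE with an L1
    distance between the real-FFT spectra of prediction and target.
    `alpha` (illustrative) weights the frequency term; the two terms can
    differ in magnitude, which is why the learning rate needs tuning."""
    temporal = np.mean((pred - target) ** 2)
    freq = np.mean(np.abs(np.fft.rfft(pred, axis=-1) - np.fft.rfft(target, axis=-1)))
    return (1 - alpha) * temporal + alpha * freq

class EarlyStopping:
    """Stop training after `patience` epochs without validation improvement,
    matching the paper's patience-3 protocol."""
    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training
```

In a training loop, `blended_loss` would replace the plain MSE objective and `stopper.step(val_loss)` would be checked once per epoch, breaking out of the loop (within the 10-epoch cap) when it returns `True`.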