Continuous Diffusive Prediction Network for Multi-Station Weather Prediction

Authors: Chujie Xu, Yuqing Ma, Haoyuan Deng, Yajun Gao, Yudie Wang, Kai Lv, Xianglong Liu

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on the Weather2K and Global Wind/Temp datasets demonstrate that CDPNet outperforms state-of-the-art models.
Researcher Affiliation | Academia | (1) State Key Laboratory of Complex & Critical Software Environment, Beihang University, China; (2) Institute of Artificial Intelligence, Beihang University, China; (3) School of Computer Science and Engineering, Beihang University, China; (4) School of Computer Science & Technology, Beijing Jiaotong University, China. EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: CDPNet training process
Open Source Code | Yes | The implementation code is publicly available at https://github.com/ChujieXu/CDPNet.
Open Datasets | Yes | In this paper, experiments are carried out on two real datasets, including the Weather2K dataset [Zhu et al., 2023] and the Global Wind/Temp dataset [Wu et al., 2023].
Dataset Splits | Yes | Global Wind/Temp: It is from the National Centers for Environmental Information. This dataset contains the hourly averaged wind speed and hourly temperature of 3,850 stations around the world from January 1, 2019 to December 31, 2020. Following the setup in prior work, we split the dataset into training, validation, and test sets in chronological order by a ratio of 7:1:2. The task is set to predict one day in the future based on the past 2 days, where the input length is 48 steps and the prediction length is 24 steps.
Hardware Specification | Yes | Our model is implemented using PyTorch 2.1.0 and trained on an NVIDIA GeForce RTX 2080 Ti GPU.
Software Dependencies | Yes | Our model is implemented using PyTorch 2.1.0 and trained on an NVIDIA GeForce RTX 2080 Ti GPU.
Experiment Setup | Yes | We employ the Adam optimizer with a batch size of 1 to accommodate the large number of stations in our dataset. The training process consists of two distinct phases: first, we initialize the direction information by training for one epoch with a learning rate of 0.001, followed by training the entire network with a reduced learning rate of 0.00001. The model undergoes training for up to 100 epochs, with an early stopping mechanism implemented to prevent overfitting.
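The chronological 7:1:2 split and the 48-step-input / 24-step-prediction windowing quoted above can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the names `chronological_split` and `make_windows` are assumptions.

```python
import numpy as np

def chronological_split(series, ratios=(0.7, 0.1, 0.2)):
    """Split a time series into train/val/test in chronological order (7:1:2)."""
    n = len(series)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (series[:n_train],
            series[n_train:n_train + n_val],
            series[n_train + n_val:])

def make_windows(series, input_len=48, pred_len=24):
    """Build (input, target) pairs: past 2 days (48 hourly steps)
    predict the next day (24 hourly steps)."""
    X, Y = [], []
    for start in range(len(series) - input_len - pred_len + 1):
        X.append(series[start:start + input_len])
        Y.append(series[start + input_len:start + input_len + pred_len])
    return np.stack(X), np.stack(Y)
```

Splitting before windowing (as above) avoids any window straddling a split boundary, so no test-period values leak into training inputs.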
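The two-phase schedule in the experiment setup (one epoch at lr=0.001 to initialize the direction information, then full-network training at lr=0.00001 with early stopping) can be sketched as below. This is a minimal sketch under assumed structure: `train_two_phase` and `direction_params` are hypothetical names, and the loss (MSE) and patience value are assumptions not stated in the excerpt.

```python
import torch
from torch import nn

def train_two_phase(model, direction_params, train_loader, val_loader,
                    epochs=100, patience=5):
    """Hypothetical sketch of the quoted two-phase setup:
    phase 1 trains only the direction-information parameters for one
    epoch at lr=1e-3; phase 2 trains the whole network at lr=1e-5
    with early stopping on validation loss."""
    loss_fn = nn.MSELoss()  # assumed loss; the excerpt does not name one

    # Phase 1: initialize direction information (one epoch, lr=1e-3).
    opt = torch.optim.Adam(direction_params, lr=1e-3)
    for x, y in train_loader:  # batch size 1 per the paper's setup
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    # Phase 2: entire network at lr=1e-5, up to `epochs` epochs.
    opt = torch.optim.Adam(model.parameters(), lr=1e-5)
    best, bad = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        if val < best:
            best, bad = val, 0
        else:
            bad += 1
            if bad >= patience:  # early stopping to prevent overfitting
                break
    return best
```

The large phase-1 learning rate lets the direction information settle quickly, while the much smaller phase-2 rate fine-tunes the whole network without destroying that initialization.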