PostCast: Generalizable Postprocessing for Precipitation Nowcasting via Unsupervised Blurriness Modeling
Authors: Junchao Gong, Siwei Tu, Weidong Yang, Ben Fei, Kun Chen, Wenlong Zhang, Xiaokang Yang, Wanli Ouyang, Lei Bai
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on 7 precipitation radar datasets, demonstrating the generality and superiority of our method. Our code is available at https://github.com/jasong-ovo/PostCast. |
| Researcher Affiliation | Collaboration | Junchao Gong1,2, Siwei Tu2,3, Weidong Yang2,3, Ben Fei2,4, Kun Chen2,3, Wenlong Zhang2, Xiaokang Yang1, Wanli Ouyang2, Lei Bai2; 1Shanghai Jiao Tong University, 2Shanghai Artificial Intelligence Laboratory, 3Fudan University, 4The Chinese University of Hong Kong |
| Pseudocode | Yes | Algorithm 1 Guided diffusion model with the guidance of blurry prediction y. An unconditional diffusion model ϵθ(xt, t) fine-tuned on 5 datasets is given. |
| Open Source Code | Yes | Our code is available at https://github.com/jasong-ovo/PostCast. |
| Open Datasets | Yes | Five datasets, including SEVIR (Veillette et al., 2020), HKO7 (Shi et al., 2017), TAASRAD19 (Franch et al., 2020), Shanghai (Chen et al., 2020), and SRAD2018 (SRAD, 2018), are selected to train the unconditional DDPM, while the other datasets (SCWDS CAP30 (Na et al., 2021), SCWDS CR (Na et al., 2021), MeteoNet (Larvor & Berthomier, 2021)) are prepared for out-of-distribution testing to evaluate the generalization of each method. |
| Dataset Splits | Yes | We use weather events in 2017 and 2018 for training and events in 2019 for validation and testing. ... Observations in the years 2009-2014 are split for training and validation, and data from 2015 are used for testing. ... The radar echoes from 2010-2018 are split into training part, and those of 2019 are used for validation and testing. ... We use data from 2015 to 2017 for training, and 2018 for validation and testing. ... We follow (SRAD, 2018) to split the dataset for training, validation, and testing. ... We train the models with images from 2016 to 2017 and validate or test the models with images of 2018. ... Images from 2016 and 2017 are used for training, and those from 2018 are processed for validation and testing. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for its experiments. |
| Software Dependencies | No | The paper mentions training details like "AdamW with β1 = 0.9 and β2 = 0.999" and fine-tuning an "unconditional diffusion model on ImageNet from (Nichol & Dhariwal, 2021)", but does not provide specific version numbers for software dependencies or libraries used. |
| Experiment Setup | Yes | We uniformly resize the radar images from all datasets to 256×256. ... We utilize the pre-trained unconditional diffusion model on ImageNet for better initialization and fine-tune it on SEVIR, HKO7, TAASRAD19, Shanghai, and SRAD2018 using AdamW with β1 = 0.9 and β2 = 0.999. PostCast uses a blur kernel with a size of 9×9. To recover the prediction with a distribution of real observation, we implement our method with 1000-step DDPM. The cosine learning rate policy is used with an initial learning rate of 0.0002 for PostCast, and the βt we utilize undergoes a linear increase from β1 = 10⁻⁴ to βT = 0.02. |
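The setup rows above pin down the diffusion hyperparameters (1000 steps, linear βt from 10⁻⁴ to 0.02, a 9×9 blur kernel) and the pseudocode row describes guiding an unconditional DDPM with the blurry prediction y. The sketch below illustrates how such a guided reverse step could look; it is a minimal illustration, not the authors' implementation, and the names `blur`, `guided_reverse_step`, and `guide_scale` are assumptions. The guidance here follows the common pattern of shifting the DDPM posterior mean by the gradient of a blur-consistency loss on the estimated clean image.

```python
import torch
import torch.nn.functional as F

# Linear beta schedule quoted in the paper: 1000 steps, 1e-4 -> 0.02.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative product \bar{alpha}_t

def blur(x, kernel):
    """Apply a single blur kernel (e.g. 9x9) depthwise to every channel."""
    c = x.shape[1]
    k = kernel.expand(c, 1, *kernel.shape[-2:])
    return F.conv2d(x, k, padding=kernel.shape[-1] // 2, groups=c)

def guided_reverse_step(eps_model, x_t, t, y_blurry, kernel, guide_scale=1.0):
    """One DDPM reverse step, nudged toward consistency with the blurry
    prediction y_blurry (illustrative sketch, not the paper's code)."""
    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t)
    ab = alpha_bars[t]
    # Estimate x_0 from the noise prediction, then measure how far its
    # blurred version is from the given blurry forecast.
    x0_hat = (x_t - (1.0 - ab).sqrt() * eps) / ab.sqrt()
    loss = ((blur(x0_hat, kernel) - y_blurry) ** 2).mean()
    grad = torch.autograd.grad(loss, x_t)[0]
    # Standard DDPM posterior mean, shifted by the guidance gradient.
    mean = (x_t - betas[t] / (1.0 - ab).sqrt() * eps) / alphas[t].sqrt()
    mean = mean - guide_scale * grad
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return (mean + betas[t].sqrt() * noise).detach()
```

Running the step for t = T-1, ..., 0 yields a sample whose blurred version stays close to y_blurry, which is the role the guidance plays in Algorithm 1 as quoted above.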