STLDM: Spatio-Temporal Latent Diffusion Model for Precipitation Nowcasting
Authors: Shi Quan Foo, Chi-Ho Wong, Zhihan Gao, Dit-Yan Yeung, Ka-Hing Wong, Wai-Kin Wong
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on multiple radar datasets demonstrate that STLDM achieves superior performance compared to the state of the art, while also improving inference efficiency. |
| Researcher Affiliation | Academia | Shi Quan Foo, The Hong Kong University of Science and Technology; Chi-Ho Wong, The Hong Kong University of Science and Technology; Zhihan Gao, The Hong Kong University of Science and Technology; Dit-Yan Yeung, The Hong Kong University of Science and Technology; Ka-Hing Wong, Hong Kong Observatory; Wai-Kin Wong, Hong Kong Observatory |
| Pseudocode | No | The paper describes the methodology in detail and provides architectural diagrams (e.g., Figure 2, Figure 4, Figure 5), but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/sqfoo/stldm_official. |
| Open Datasets | Yes | We evaluate the performance and effectiveness of our proposed STLDM with several deterministic models serving as baselines, together with various diffusion-based models designed for precipitation nowcasting: LDCast (Leinonen et al., 2023), PreDiff (Gao et al., 2023), and DiffCast (Yu et al., 2024), on three real-life radar datasets: SEVIR (Veillette et al., 2020), HKO-7 (Shi et al., 2015), and MeteoNet (Larvor et al., 2020). |
| Dataset Splits | Yes | SEVIR (Veillette et al., 2020) ... The data collected from June to December 2019 forms the test set, while the remainder is the training set. HKO-7 (Shi et al., 2015) ... The data collected from 2009 to 2014 is sampled as the training set, while the rest is allocated to the test set. MeteoNet (Larvor et al., 2020) ... The data collected from June to December 2018 serves as the test set, while the rest is used as the training set. |
| Hardware Specification | Yes | To judge the model efficiency during inference, we report the prediction time per sample, T_sample, on a single RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions various models and techniques used (e.g., ConvLSTM, PredRNN, SimVP, Earthformer, DDIM, Classifier-Free Guidance), but it does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or versions of specific libraries). |
| Experiment Setup | Yes | We trained the models for 200k training steps in total on all benchmarks with a batch size of 4. The learning rate is scheduled with a 2k-step warm-up period, followed by a Cosine Annealing Scheduler decaying from the peak learning rate of 1e-4. Besides that, we set the total sampling steps of STLDM to 50. During the inference process, we employ the DDIM technique (Song et al., 2021) with 20 sampling steps and Classifier-Free Guidance (Ho & Salimans, 2022) with a guidance strength of 1.0. |
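The learning-rate schedule in the experiment setup (2k-step warm-up followed by cosine annealing from a peak of 1e-4 over 200k total steps) can be sketched as below. This is a minimal illustration, not the authors' code: the paper does not specify the warm-up shape or a final learning-rate floor, so a linear warm-up from zero and a decay to zero are assumptions.

```python
import math

def lr_at_step(step: int,
               total_steps: int = 200_000,
               warmup_steps: int = 2_000,
               peak_lr: float = 1e-4) -> float:
    """Warm-up then cosine annealing, per the reported training setup.

    Assumptions (not stated in the paper): linear warm-up starting
    from 0, and cosine decay ending at 0.
    """
    if step < warmup_steps:
        # Assumed linear ramp from 0 up to the peak learning rate.
        return peak_lr * step / warmup_steps
    # Cosine annealing from peak_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))
```

For example, the schedule reaches its peak of 1e-4 exactly at step 2,000 and decays to 0 at step 200,000; halfway through the decay it sits at 5e-5.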