TC-Diffuser: Bi-Condition Multi-Modal Diffusion for Tropical Cyclone Forecasting

Authors: Shiqi Zhang, Pan Mu, Cheng Huang, Jinglin Zhang, Cong Bai

AAAI 2025 | Venue PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments were conducted using the China Meteorological Administration Tropical Cyclone Best Track Dataset (CMA-BST). Our method outperforms both the state-of-the-art deep learning model and the NWP method used by the China Central Meteorological Observatory (CMO) across all metrics.
Researcher Affiliation | Academia | Shiqi Zhang1, Pan Mu1, Cheng Huang1, Jinglin Zhang2, Cong Bai1*; 1 College of Computer Science, Zhejiang University of Technology; 2 School of Control Science and Engineering, Shandong University
Pseudocode | No | The paper describes methods and processes in paragraph text and flow diagrams (Figure 1), but no explicit pseudocode or algorithm blocks are provided.
Open Source Code | Yes | Code: https://github.com/Zjut-MultimediaPlus/TC-Diffuser
Open Datasets | Yes | We employed the dataset introduced by MGTCF (Huang et al. 2023), encompassing all 1722 TCs from 1950 to 2021 over the Western North Pacific (WP).
Dataset Splits | Yes | 80% of the TC data from 1950 to 2016 was allocated for training, 20% for validation, and the data from 2017 to 2021 were reserved for testing. (A chronological-split sketch is given after the table.)
Hardware Specification | Yes | All experiments, including the other deep learning methods used for comparison, were conducted on an NVIDIA RTX A6000 GPU.
Software Dependencies | No | We deployed TC-Diffuser on the PyTorch framework. The specific versions of PyTorch and other libraries are not mentioned.
Experiment Setup | Yes | Training was performed with the Adam optimizer at a learning rate of 0.001, a batch size of 256, and a duration of 10 hours. All experiments, including the other deep learning methods used for comparison, were run on an NVIDIA RTX A6000 GPU, and the seeds in training and testing were fixed. The epoch count was set to 270 based on model convergence. For α, four orders of magnitude (0.1, 0.01, 0.001, 0.0001) were tested, following common deep learning practice, and the best-performing value, 0.001, was selected. (A training-configuration sketch is given after the table.)
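
To make the reported split concrete, below is a minimal Python sketch of a chronological split along the lines described in the Dataset Splits row: storms from 1950 to 2016 are divided 80/20 into training and validation, and 2017 to 2021 is held out for testing. The function name `split_tc_records`, the `records_by_year` structure, and the seed value are hypothetical illustrations, not the authors' code.

```python
import random

def split_tc_records(records_by_year):
    """Hypothetical helper: records_by_year maps year -> list of TC tracks."""
    # Storms up to 2016 feed the train/validation pool; 2017-2021 forms the test set.
    pool = [tc for year, tcs in records_by_year.items() if year <= 2016 for tc in tcs]
    test = [tc for year, tcs in records_by_year.items() if year >= 2017 for tc in tcs]

    rng = random.Random(0)            # fixed seed, mirroring the fixed seeds noted in the setup
    rng.shuffle(pool)
    cut = int(0.8 * len(pool))        # 80% train / 20% validation
    return pool[:cut], pool[cut:], test
```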
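
The training configuration in the Experiment Setup row can likewise be summarised as a short PyTorch sketch. The hyperparameters (Adam, learning rate 0.001, batch size 256, 270 epochs, α = 0.001, fixed seeds) come from the quoted text; the function name, the seed value of 0, and the way α is returned are assumptions for illustration only, not TC-Diffuser's actual training script.

```python
import torch
from torch.utils.data import DataLoader

def build_training_setup(model: torch.nn.Module, train_dataset):
    """Hypothetical sketch of the reported configuration, not the authors' code."""
    torch.manual_seed(0)                                   # seeds were fixed in training and testing
    loader = DataLoader(train_dataset, batch_size=256, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    num_epochs = 270                                       # chosen by the authors from convergence behaviour
    alpha = 1e-3                                           # best of {0.1, 0.01, 0.001, 0.0001} in their sweep
    return loader, optimizer, num_epochs, alpha
```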