Generating Physical Dynamics under Priors
Authors: Zihan Zhou, Xiaoxue Wang, Tianshu Yu
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluations demonstrate that our method produces high-quality dynamics across a diverse array of physical phenomena with remarkable robustness, underscoring its potential to advance data-driven studies in AI4Physics. Our contributions signify a substantial advancement in the field of generative modeling, offering a robust solution to generate accurate and physically consistent dynamics. |
| Researcher Affiliation | Collaboration | Zihan Zhou¹, Xiaoxue Wang², Tianshu Yu¹; ¹School of Data Science, The Chinese University of Hong Kong; ²Chem Lex Technology Co., Ltd. EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods and algorithms verbally and mathematically, but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks or figures for its own methodology. It refers to 'Algorithm 2 in Lu et al. (2022)' but this is an external reference, not an algorithm presented within this paper. |
| Open Source Code | No | The paper does not contain any explicit statement about making their source code available, nor does it provide a link to a code repository. Phrases like 'We release our code...' or links to GitHub are absent. |
| Open Datasets | Yes | In Fig. 1, we provide a generated sample of the shallow water dataset (Martínez-Aranda et al., 2018). ... PDE datasets, including advection (Zang, 1991), Darcy flow (Li et al., 2024), Burgers (Rudy et al., 2017), and shallow water (Klöwer et al., 2018), are fundamental resources for studying and modeling various physical phenomena. ... We train diffusion models to simulate the dynamics of chaotic three-body systems in 3D (Zhou & Yu, 2023) and five-spring systems in 2D (Kuramoto, 1975; Kipf et al., 2018). |
| Dataset Splits | No | Appendix D.1 'DATASET INTRODUCTION' states: 'For both datasets, we generated 50k samples for training.' This indicates the total number of samples for training but does not provide specific train/test/validation splits (e.g., percentages, absolute counts, or predefined standard splits). |
| Hardware Specification | Yes | We conduct experiments of advection, Darcy flow, three-body, and five-spring datasets on NVIDIA GeForce RTX 3090 GPUs and Intel(R) Xeon(R) Silver 4210R CPUs @ 2.40GHz. For the rest of the datasets, we conduct experiments on NVIDIA A100-SXM4-80GB GPUs and Intel(R) Xeon(R) Platinum 8358P CPUs @ 2.60GHz. |
| Software Dependencies | No | The paper mentions specific optimizers and schedulers ('Adam optimizer', 'ReduceLROnPlateau') and diffusion solvers ('DPM-Solver-1', 'DPM-Solver-3'), but it does not specify versions for general software libraries or frameworks (e.g., Python, PyTorch, TensorFlow versions, CUDA). |
| Experiment Setup | Yes | We use the Adam optimizer for training, with a maximum of 1000 epochs. We set the learning rate to 1e-3 and betas to 0.95 and 0.999. The learning rate scheduler is ReduceLROnPlateau with factor=0.6 and patience=10. When the learning rate is less than 5e-7, we stop the training. As for diffusion configuration, we set the steps of the forward diffusion process to 1000, the noise scheduler to σ_t = sigmoid(linspace(−5, 5, 1000)) and α_t = √(1 − σ_t²). The loss weight w(t) is set to g²(t) = dσ_t²/dt − 2 (d log α_t/dt) σ_t² (Song et al., 2021). We generate samples using the DPM-Solver-1 (Lu et al., 2022). Table 7: Summary of the model hyperparameters lists backbone, hidden size, layers, and batch size for each dataset. |
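The diffusion configuration quoted in the Experiment Setup row can be sketched numerically. This is a minimal illustration, not the authors' code: it assumes the sigmoid noise schedule σ_t = sigmoid(linspace(−5, 5, 1000)) with the variance-preserving pairing α_t = √(1 − σ_t²), and approximates the quoted loss weight w(t) = g²(t) = dσ_t²/dt − 2 (d log α_t/dt) σ_t² with finite differences.

```python
import numpy as np

T = 1000
t = np.linspace(-5.0, 5.0, T)
sigma = 1.0 / (1.0 + np.exp(-t))      # sigmoid noise schedule sigma_t
alpha = np.sqrt(1.0 - sigma ** 2)     # variance-preserving alpha_t

# Finite-difference approximations of the derivatives in w(t) = g^2(t).
dsigma2_dt = np.gradient(sigma ** 2, t)
dlogalpha_dt = np.gradient(np.log(alpha), t)
w = dsigma2_dt - 2.0 * dlogalpha_dt * sigma ** 2

# Sanity checks: sigma rises from ~0 to ~1, alpha falls accordingly,
# and the weight stays positive over the whole trajectory.
assert sigma[0] < 0.01 and sigma[-1] > 0.99
assert alpha[0] > 0.99 and alpha[-1] < 0.15
assert np.all(w > 0)
```

Both derivative terms are positive contributions here (σ_t² is increasing and log α_t is decreasing), so the weight upweights later, noisier timesteps, consistent with the g²(t) weighting of Song et al. (2021).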