Improved Sampling Of Diffusion Models In Fluid Dynamics With Tweedie's Formula

Authors: Youssef Shehata, Benjamin Holzschuh, Nils Thuerey

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically demonstrate the efficacy of the proposed methods in reducing inference steps and improving the accuracy of diffusion models for fluid dynamics simulations through a diverse set of experiments, including compressible and incompressible turbulent flows in both time-dependent and steady-state settings.
Researcher Affiliation | Academia | Youssef Shehata, Benjamin Holzschuh, Nils Thuerey, Technical University of Munich, 85748 Garching, Germany. Correspondence to: EMAIL
Pseudocode | Yes | Algorithm 1: Truncated Ancestral Sampling for conditional TSMs; Algorithm 2: IR sampling procedure
Open Source Code | Yes | The source code is available at https://github.com/tum-pbs/tsm-ir-diffusion.
Open Datasets | Yes | We consider two-dimensional (2D) fluid flow test scenarios, including compressible transonic flow (Tra), incompressible forced turbulence (Fturb), and steady-state airfoil turbulence uncertainty (Air), as shown in Fig. 1. Details regarding all datasets can be found in Appendix A. Appendix A: For detailed information regarding the generation of these datasets, please refer to the corresponding papers for Tra (Kohl et al., 2024) and Air (Liu and Thuerey, 2024).
Dataset Splits | Yes | Forced turbulence (Fturb): testing is split into interpolation (int: Re = {1750}) and extrapolation (ext: Re = {100, 5000}) regions. Table 3: Parameter values for all datasets (includes training and test ext/int/long splits, with values for Ma, Re, sequences per parameter, total sequences, R, and total frames).
Hardware Specification | Yes | Training and sampling for all test cases were carried out using an NVIDIA GeForce RTX 2080 Ti GPU.
Software Dependencies | No | The paper mentions 'AdamW' as an optimizer and 'ΦFlow' as a framework, but does not provide specific version numbers for these or other key software components such as the programming language or libraries.
Experiment Setup | Yes | The training hyperparameters used for all test cases are presented in Table 4. Table 4: Summary of the training hyperparameters employed in all test cases (batch size, epochs, learning rate start/end, learning rate schedule, optimizer, weight decay, EMA decay). Table 5: Network architecture and diffusion-related hyperparameters used in all test cases (βstart, βend, schedule).
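The paper's title and Algorithm 1 center on using Tweedie's formula to shortcut sampling. As a minimal sketch of the underlying idea (assuming the standard DDPM ε-parameterization, not the authors' exact implementation; function and variable names here are hypothetical):

```python
import numpy as np

def tweedie_x0_estimate(x_t, eps_hat, alpha_bar_t):
    """One-step estimate of the clean sample x0 from a noisy sample x_t via
    Tweedie's formula, given the network's noise prediction eps_hat and the
    cumulative signal coefficient alpha_bar_t (DDPM convention:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps)."""
    return (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_bar_t)
```

In a truncated sampler, this estimate lets the chain jump directly to a denoised prediction instead of running all remaining ancestral steps.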
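Table 5 parameterizes the diffusion process by βstart, βend, and a schedule. As an illustrative sketch of how such a schedule is typically constructed (assuming a linear schedule for concreteness; the paper's actual schedule choice is given in Table 5):

```python
import numpy as np

def linear_beta_schedule(beta_start, beta_end, num_steps):
    """Linearly spaced per-step noise variances beta_t, plus the cumulative
    products alpha_bar_t = prod_{s<=t} (1 - beta_s) used by DDPM-style models."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    alpha_bars = np.cumprod(1.0 - betas)
    return betas, alpha_bars
```

The resulting alpha_bar_t decreases monotonically from near 1 toward 0, which is what makes early-truncated sampling progressively less noisy.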