Learning Distributions of Complex Fluid Simulations with Diffusion Graph Networks

Authors: Mario Lino, Tobias Pfaff, Nils Thuerey

ICLR 2025

Reproducibility Variable Result LLM Response
Research Type Experimental We apply this method to a range of fluid dynamics tasks, such as predicting pressure distributions on 3D wing models in turbulent flow, demonstrating both accuracy and computational efficiency in challenging scenarios. Our main findings indicate that DGNs and LDGNs can generate high-quality fields and accurately reproduce the distribution of converged states, even when trained on incomplete distributions. Both models outperformed the baselines (a vanilla GNN, a Bayesian GNN, a Gaussian regression GNN, a Gaussian mixture GNN, and a VGAE) in terms of sample accuracy and distributional accuracy. LDGNs showed improvement over DGNs, particularly in distributional accuracy and in suppressing undesired high-frequency noise.
Researcher Affiliation Collaboration Mario Lino¹, Tobias Pfaff², Nils Thuerey¹; ¹Technical University of Munich, ²Google DeepMind
Pseudocode Yes Algorithm 1: Guillard's coarsening algorithm (Guillard, 1993)
1: mask ← ones(|Vℓ|)        ▷ Vector of size |Vℓ| filled with ones
2: for node i ∈ Vℓ do       ▷ Iterate node-by-node
3:   if mask[i] = 1 then    ▷ If first visit to node i, then this node is not dropped
4:     for node j ∈ N(i) do
5:       mask[j] ← 0        ▷ The incoming neighbours are dropped
6:     end for
7:   end if
8: end for
9: Vℓ+1 ← Vℓ[mask]
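The pseudocode above can be sketched in plain Python as follows (a minimal illustration, not the paper's implementation; the function name and the adjacency-list input format are assumptions):

```python
def guillard_coarsen(num_nodes, neighbours):
    """Guillard's coarsening (Guillard, 1993), following Algorithm 1.

    Visit nodes in order; the first time a node is reached while still
    marked, it is kept and all of its neighbours are dropped.
    `neighbours[i]` is the list of nodes adjacent to node i.
    Returns the indices of the nodes kept at the coarser level V_{l+1}.
    """
    mask = [1] * num_nodes            # mask <- ones(|V_l|)
    for i in range(num_nodes):        # iterate node-by-node
        if mask[i] == 1:              # first visit: node i is not dropped
            for j in neighbours[i]:   # the incoming neighbours are dropped
                mask[j] = 0
    return [i for i in range(num_nodes) if mask[i] == 1]  # V_{l+1} <- V_l[mask]
```

On a 4-node path graph (0-1-2-3), nodes 0 and 2 survive: node 0 drops node 1, and node 2 drops nodes 1 and 3.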
Open Source Code Yes Code is available at https://github.com/tum-pbs/dgn4cfd. The implementation of our models and baselines, including their weights, and demonstration scripts are available at https://github.com/tum-pbs/dgn4cfd.
Open Datasets Yes The datasets used in our ELLIPSEFLOW and ELLIPSE experiments are available at https://huggingface.co/datasets/mariolinov/Ellipse, and the datasets used in our WING experiments are available at https://huggingface.co/datasets/mariolinov/Wing.
Dataset Splits Yes The training dataset contains 5000 simulations, while each test dataset comprises 50 simulations sampled from the original datasets in Lino et al. (2022). ... The training dataset consists of 1,000 simulations, and the test dataset consists of 16 simulations.
Hardware Specification Yes The runtimes were measured on a CPU, limited to 8 threads, and on a single RTX 3080 GPU.
Software Dependencies Yes We generated the training and test datasets for the WING systems using the PISO solver of OpenFOAM with the Spalart-Allmaras Delayed Detached Eddy Simulation turbulence model (OpenFOAM Foundation, 2022). ... OpenFOAM version 10, 2022. URL https://www.openfoam.com.
Experiment Setup Yes The initial learning rate was set to 10⁻⁴ and was reduced by a factor of 10 when the training loss plateaued for n consecutive epochs. In the ELLIPSE and ELLIPSEFLOW tasks, we used n = 50, while in the WING task, n was set to 250 due to the shorter length of the training dataset. ... Through grid search, we selected three Gaussian components for the ELLIPSE and ELLIPSEFLOW tasks, and five components for the WING task. ... For the ELLIPSE task, we set the size of the latent features to FL = 1 and performed a grid search to find the optimal weight for the KL term (Kingma et al., 2015), which was determined to be 0.001 for maximizing distributional accuracy on the ELLIPSE-INDIST dataset.
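The learning-rate schedule quoted above (start at 10⁻⁴, divide by 10 when the training loss stalls for n consecutive epochs) can be sketched as a small plateau scheduler. This is a hedged illustration of the rule as stated, not the authors' code; the class name and method signature are assumptions:

```python
class PlateauLR:
    """Reduce the learning rate by `factor` when the training loss has not
    improved for `patience` consecutive epochs (n = 50 for ELLIPSE and
    ELLIPSEFLOW, n = 250 for WING, per the quoted setup)."""

    def __init__(self, lr=1e-4, factor=0.1, patience=50):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.best = float("inf")   # best training loss seen so far
        self.bad_epochs = 0        # consecutive epochs without improvement

    def step(self, train_loss):
        """Call once per epoch with the training loss; returns the current LR."""
        if train_loss < self.best:
            self.best = train_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.factor   # reduce by a factor of 10
                self.bad_epochs = 0
        return self.lr
```

In a deep-learning framework this rule corresponds to an off-the-shelf plateau scheduler (e.g. PyTorch's `torch.optim.lr_scheduler.ReduceLROnPlateau`), though the quote does not state which implementation was used.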