Geometric and Physical Constraints Synergistically Enhance Neural PDE Surrogates

Authors: Yunfei Huang, David S. Greenberg

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Here we introduce novel input and output layers that respect physical laws and symmetries on staggered grids, and for the first time systematically investigate how these constraints, individually and in combination, affect the accuracy of PDE surrogates. We focus on two challenging problems: shallow water equations with closed boundaries and decaying incompressible turbulence. Compared to strong baselines, symmetries and physical constraints consistently improve performance across tasks, architectures, autoregressive prediction steps, accuracy measures, and network sizes. ... We compare our unconstrained and physics/symmetry-constrained SWE surrogates with their noisy variants p1/ , p1/ + ϵ (Fig. 16). ... Over 30 random initial conditions, we computed accuracy metrics for these drnet surrogates (Fig. 25).
Researcher Affiliation Academia 1Helmholtz Centre Hereon, Geesthacht, Germany 2Helmholtz AI. Correspondence to: Yunfei Huang <EMAIL>, David S. Greenberg <EMAIL>.
Pseudocode No The paper describes methods and equations but does not present any distinct pseudocode or algorithm blocks.
Open Source Code Yes Code is available at https://github.com/m-dml/double-constraint-pde-surrogates.
Open Datasets Yes The ocean current velocity data was sourced from the Global Ocean Physics Analysis and Forecast (Marullo et al., 2014), and we followed (Wang et al., 2021) for data selection and processing.
Dataset Splits Yes SWEs: We trained on 50 simulations spanning 50 h (600 time steps) each. ICs were ζ = 0 except for a 0.1 m high square-shaped elevation, and [u, v] = 0. ... Testing and validation data included 10 simulations. INS: We trained on 100 ICs consisting of filtered Gaussian noise with peak spectral density at wavenumber 10 (that is, 10 cycles across the spatial domain). We used 10 initial conditions for testing and validation.
Hardware Specification Yes We trained on 2 A100 GPUs with the ADAM optimizer (Kingma, 2014)... Table 9. Inference time per time step of p1/ and p4m/M+ρ u on CPU (Intel Xeon Platinum 8160) and GPU (Nvidia A100 40 GB) nodes for various network sizes.
Software Dependencies No The paper mentions using `escnn` for internal layers and `jax-cfd` for numerical solutions but does not provide specific version numbers for these or other software libraries (e.g., Python, PyTorch, TensorFlow).
Experiment Setup Yes We trained neural surrogates using a MSE loss $\mathcal{L} = \frac{1}{N_b} \sum \| \hat{w}_{t+1} - w_{t+1} \|_2^2$... We trained on 2 A100 GPUs with the ADAM optimizer (Kingma, 2014), batch size 32 and initial learning rate 1e-4. We employed early stopping when validation loss did not reduce for 10 epochs, and accepted network weights with the best validation loss throughout the training process. ... Table 8 describes network hyperparameters, and how these were adjusted depending on the chosen symmetry group in order to match total parameter counts.
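The symmetry constraints evaluated above hinge on equivariance: applying a symmetry transform before the network should give the same result as applying it after. A minimal NumPy sketch of such a diagnostic, using a discrete 90° rotation (one generator of p4m) and two illustrative operators of my own; this is not the paper's implementation, which uses escnn layers:

```python
import numpy as np

def is_rot90_equivariant(f, field, tol=1e-10):
    """Check discrete C4 equivariance: f(rot(w)) == rot(f(w))."""
    return np.allclose(f(np.rot90(field)), np.rot90(f(field)), atol=tol)

# A symmetric 4-neighbor average commutes with rotation; a
# directional difference along one axis does not.
avg = lambda w: 0.25 * (np.roll(w, 1, 0) + np.roll(w, -1, 0)
                        + np.roll(w, 1, 1) + np.roll(w, -1, 1))
gx = lambda w: np.roll(w, -1, 0) - w  # forward difference, axis 0

w = np.random.default_rng(1).standard_normal((8, 8))
print(is_rot90_equivariant(avg, w), is_rot90_equivariant(gx, w))  # → True False
```

A constrained surrogate passes such a check by construction; an unconstrained baseline generally does not.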
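The INS dataset row states that initial conditions are filtered Gaussian noise with peak spectral density at wavenumber 10. A sketch of one way to generate such a field, assuming a Gaussian band-pass in Fourier space; the filter shape and width are illustrative choices, not taken from the paper:

```python
import numpy as np

def filtered_noise_ic(n=64, k_peak=10.0, width=2.0, seed=0):
    """White noise shaped by a Gaussian band-pass centred on |k| = k_peak
    (cycles across the domain), normalised to unit variance."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    kx = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers
    k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
    band = np.exp(-0.5 * ((k - k_peak) / width) ** 2)
    field = np.fft.ifft2(np.fft.fft2(noise) * band).real
    return field / field.std()

# Sanity check: radially averaged spectrum should peak near wavenumber 10.
ic = filtered_noise_ic()
spec = np.abs(np.fft.fft2(ic)) ** 2
kx = np.fft.fftfreq(64, d=1.0 / 64)
k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
radial = [spec[(k >= b) & (k < b + 1)].mean() for b in range(32)]
print(int(np.argmax(radial)))  # peak bin, should land near 10
```

The divergence-free projection required for incompressible flow is omitted here; the paper's jax-cfd pipeline handles that step.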
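The experiment setup quotes a batched MSE loss and early stopping with patience 10, keeping the best-validation weights. A framework-agnostic sketch of those two pieces; the helper class and names are mine, not the authors':

```python
import numpy as np

def mse_loss(pred, target):
    """MSE loss L = (1/N_b) * sum ||w_hat_{t+1} - w_{t+1}||_2^2 over the batch."""
    per_sample = np.sum((pred - target) ** 2, axis=tuple(range(1, pred.ndim)))
    return float(np.mean(per_sample))

class EarlyStopping:
    """Stop when validation loss has not improved for `patience` epochs,
    retaining the weights with the best validation loss seen so far."""
    def __init__(self, patience=10):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0
        self.best_state = None

    def step(self, val_loss, state):
        if val_loss < self.best:
            self.best, self.best_state, self.bad_epochs = val_loss, state, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training

# Usage: improvement at epoch 1, then a plateau triggers the stop.
stopper = EarlyStopping(patience=10)
for epoch, loss in enumerate([1.0, 0.5] + [0.6] * 20):
    if stopper.step(loss, state=epoch):
        break
print(epoch, stopper.best)  # → 11 0.5
```

In the paper's setup this rule would wrap an Adam optimization loop (batch size 32, initial learning rate 1e-4) on 2 A100 GPUs; the optimizer itself is omitted here.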