PRDP: Progressively Refined Differentiable Physics

Authors: Kanishk Bhatia, Felix Koehler, Nils Thuerey

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate its performance on a variety of learning scenarios involving differentiable physics solvers, such as inverse problems, autoregressive neural emulators, and correction-based neural-hybrid solvers. In the challenging example of emulating the Navier-Stokes equations, we reduce training time by 62%. Our experiments focus on efficiently training neural networks with differentiable linear solvers in the loop. We address unrolled as well as implicit differentiation methods, showing that PRDP applies effectively to both. The approach is tested on training tasks across a range of PDE problems, including the Poisson, heat diffusion, Burgers, and Navier-Stokes equations.
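To make the "differentiable linear solvers in the loop" setting concrete, here is a minimal, self-contained sketch of an inverse problem with a truncated solver. It is not the paper's code: the 1D Poisson setup, the Jacobi iteration, and the closed-form gradient are illustrative assumptions; the target here is generated by the same truncated solver, whereas the paper measures against fine physics.

```python
import numpy as np

# Hypothetical inverse problem: recover theta in -u'' = theta * f, where the
# linear solve is truncated to K Jacobi sweeps (coarse physics in the loop).
N = 32
h = 1.0 / (N + 1)
f = np.sin(np.pi * np.arange(1, N + 1) * h)  # fixed forcing on the unit interval

def solve_jacobi(rhs, K):
    """Approximately solve (2u_i - u_{i-1} - u_{i+1}) / h^2 = rhs_i
    (homogeneous Dirichlet BCs) with K Jacobi sweeps from a zero guess."""
    u = np.zeros_like(rhs)
    for _ in range(K):
        u_pad = np.pad(u, 1)  # zero boundary values
        u = (h**2 * rhs + u_pad[:-2] + u_pad[2:]) / 2.0
    return u

# The solver is linear in the right-hand side, so u_K(theta) = theta * u_K(1)
# and the gradient of the L2 loss is available in closed form.
theta_true = 2.0
K = 25  # few solver iterations, mirroring PRDP's coarse starting level K0
u_unit = solve_jacobi(f, K)
u_target = theta_true * u_unit

theta = 5.0  # initial guess
for _ in range(200):
    grad = 2.0 * np.dot(theta * u_unit - u_target, u_unit)
    theta -= 0.5 * grad / np.dot(u_unit, u_unit)  # normalized gradient step
```

Even with a far-from-converged linear solve, the gradient points toward the true parameter, which is the intuition PRDP exploits by starting training on cheap, coarse physics.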
Researcher Affiliation | Academia | Kanishk Bhatia*, Felix Koehler*, Nils Thuerey (Technical University of Munich)
Pseudocode | Yes | The exact algorithm is detailed in pseudocode in Algorithm 4. Its implementation in a training pipeline that uses differentiable physics is represented by the should_refine function in Listing 1. Algorithm 4: Determine Whether to Refine Physics. Listing 1: A typical mixed-chain learning pipeline.
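The paper's Algorithm 4 is not reproduced in this report; the following is only a plausible sketch of what a should_refine criterion of this kind might look like, built from the control parameters quoted below (tau_step, delta). The stagnation test and the function signature are assumptions, not the paper's definition.

```python
# Hypothetical sketch of a PRDP-style refinement criterion: refine the physics
# when the validation loss has stopped improving over a window of delta steps.
def should_refine(val_losses, delta=2, tau_step=0.92):
    """val_losses: history of validation losses, most recent last.
    Returns True when the loss no longer shrinks below tau_step times the
    loss from delta steps ago (i.e., training has stagnated at this level)."""
    if len(val_losses) < delta + 1:
        return False  # not enough history to judge stagnation
    recent, past = val_losses[-1], val_losses[-1 - delta]
    return recent > tau_step * past
```

For example, a loss history still dropping quickly (1.0 → 0.5 → 0.3) would not trigger refinement, while a stagnating one (0.300 → 0.299 → 0.298) would.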
Open Source Code | Yes | Additionally, the full source code for our experiments is available at https://github.com/tum-pbs/PRDP.
Open Datasets | No | The initial conditions are generated as a truncated Fourier series. For 1D, we sum the first 5 sine and cosine modes defined on the unit interval. For 2D, we use the products of the first 5 sine and cosine modes: u0(x, y) = Σ_n [a_n sin(2nπx) sin(2nπy) + b_n cos(2nπx) cos(2nπy) + c_n sin(2nπx) cos(2nπy) + d_n cos(2nπx) sin(2nπy)]. We similarly extend this procedure to 3D. All amplitudes are randomly sampled from a uniform distribution U(−1, 1). ... 205 samples are generated for training, with a train:validation split of 200:5.
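The sampling procedure described above is straightforward to sketch. The paper's implementation uses JAX; plain NumPy is used here for illustration, and the grid resolution and seed are arbitrary choices, not values from the paper.

```python
import numpy as np

# Sketch of the quoted initial-condition sampling: sum of the first 5 Fourier
# modes on the unit interval/square, amplitudes drawn from U(-1, 1).
rng = np.random.default_rng(0)  # arbitrary seed for reproducibility
n_modes = 5

def sample_ic_1d(n_points=64):
    x = np.linspace(0.0, 1.0, n_points, endpoint=False)
    u0 = np.zeros(n_points)
    for n in range(1, n_modes + 1):
        a, b = rng.uniform(-1, 1, size=2)
        u0 += a * np.sin(2 * n * np.pi * x) + b * np.cos(2 * n * np.pi * x)
    return u0

def sample_ic_2d(n_points=64):
    x = np.linspace(0.0, 1.0, n_points, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    u0 = np.zeros((n_points, n_points))
    for n in range(1, n_modes + 1):
        a, b, c, d = rng.uniform(-1, 1, size=4)
        sx, cx = np.sin(2 * n * np.pi * X), np.cos(2 * n * np.pi * X)
        sy, cy = np.sin(2 * n * np.pi * Y), np.cos(2 * n * np.pi * Y)
        u0 += a * sx * sy + b * cx * cy + c * sx * cy + d * cx * sy
    return u0

# 205 samples with the stated 200:5 train/validation split.
samples = np.stack([sample_ic_2d() for _ in range(205)])
train, val = samples[:200], samples[200:]
```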
Dataset Splits | Yes | 205 samples are generated for training, with a train:validation split of 200:5. ... 205 trajectories are generated for training, with a train:validation split of 200:5.
Hardware Specification | No | The paper discusses computational costs and training times but does not specify the hardware (e.g., CPU or GPU models) used for running the experiments.
Software Dependencies | No | For the GMRES, we used the version of JAX. ... which can be done with JAX (Bradbury et al., 2018). ... The discretizations are implemented matrix-free in JAX (Bradbury et al., 2018). ... we use the Adam optimizer from the Optax library (DeepMind et al., 2020). ... We use our own implementation of the architecture using the Equinox library (Kidger & Garcia, 2021).
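Since no version pins are given, an environment matching the quoted dependencies would have to be assembled from the package names alone; a minimal sketch (unpinned versions are an assumption):

```shell
# Packages named in the paper's excerpts; exact versions are not specified.
pip install jax optax equinox
```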
Experiment Setup | Yes | One-dimensional parameter space: We use θ_r = 2.0 and an initial guess for gradient descent θ_init = 5.0. 170 update steps are performed with a constant learning rate of 275. For PRDP, we set the control parameter values to τ_step = 0.92, τ_stop = 0.98, δ = 2. Training was started with K_0 = 25 linear solver iterations. At every refinement, it was incremented by ΔK = 10. ... The learning rate is scheduled as exponentially decaying with an initialization of 10^-3, a decay rate of 0.94 for 1D (0.9 for 2D), and 100 transition steps; for 3D, the initialization is 10^-4, with a decay rate of 0.92 and 100 transition steps. We train in mini-batches of 25 samples per iteration and for a total of 70 epochs in the 1D case, and 100 epochs in the 2D and 3D cases.
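The decay schedule quoted above can be written out as a plain function. Treating it as the continuous form of optax.exponential_decay (initial value times decay_rate raised to step/transition_steps) is an assumption; the defaults below are the paper's 1D values.

```python
# Exponentially decaying learning rate, continuous form (assumed to match the
# quoted setup: init 1e-3, decay rate 0.94, 100 transition steps for 1D).
def lr_schedule(step, init=1e-3, decay_rate=0.94, transition_steps=100):
    return init * decay_rate ** (step / transition_steps)
```

At step 0 this yields the initial rate 1e-3, and after one transition period (step 100) it has decayed by the factor 0.94, to 9.4e-4.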