Calibrated Physics-Informed Uncertainty Quantification

Authors: Vignesh Gopakumar, Ander Gray, Lorenzo Zanisi, Timothy Nunn, Daniel Giles, Matt Kusner, Stanislas Pamela, Marc Peter Deisenroth

ICML 2025

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We further validate the efficacy of our method on neural PDE models for plasma modelling and shot design in fusion reactors. ... Section 5. Experiments" |
| Researcher Affiliation | Collaboration | 1 Centre for Artificial Intelligence, University College London; 2 Computing Division, UK Atomic Energy Authority; 3 Heudiasyc Laboratory; 4 Polytechnique Montréal; 5 Mila – Quebec AI Institute |
| Pseudocode | Yes | "C. Algorithmic Procedure: 1. Set up the Neural PDE Solver. (a) Define the PDE system of interest with its governing equations in a numerical solver. (b) Train a neural network (e.g., Fourier Neural Operator) to approximate solutions to the PDE. (c) Ensure the model can make predictions on new initial conditions / PDE coefficients." |
| Open Source Code | Yes | The code and associated utility functions can be found at: https://github.com/gitvicky/CP-PRE |
| Open Datasets | No | The paper consistently describes generating data with specific solvers (e.g., "The dataset is generated using the JOREK code"; "The solution for the Burgers equation is obtained by deploying a spectral solver") rather than providing access to pre-existing public datasets. |
| Dataset Splits | Yes | The dataset consists of 120 simulations (100 training, 20 testing) generated by solving the reduced MHD equations using JOREK with periodic boundary conditions. |
| Hardware Specification | Yes | Training was conducted on a single A100 GPU. |
| Software Dependencies | No | The paper mentions software such as PyTorch, TensorFlow, NumPy, and Python, but does not provide version numbers for these or other libraries/solvers used. |
| Experiment Setup | Yes | Each model is trained for up to 500 epochs using the Adam optimiser (Kingma & Ba, 2015) with a step-decaying learning rate: initially 0.005, halved after every 100 epochs. |
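The three-step procedure quoted in the Pseudocode row (solve the PDE numerically, train a surrogate on the solver's outputs, predict on new initial conditions) can be sketched in miniature. The snippet below is a hedged illustration only: the `solve` function is a hypothetical toy linear map standing in for the paper's JOREK / spectral solvers, and a linear least-squares fit stands in for the Fourier Neural Operator.

```python
import numpy as np

# Step (a): a stand-in "numerical solver" for a toy PDE-like system.
# (Hypothetical placeholder -- the paper uses JOREK and spectral solvers.)
def solve(u0):
    # Shift the field one grid point and damp it: a simple linear evolution.
    return 0.9 * np.roll(u0, 1)

# Step (b): train a surrogate on solver-generated input/output pairs.
# A linear least-squares model stands in for the Fourier Neural Operator.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))             # 100 training initial conditions
Y = np.stack([solve(u0) for u0 in X])      # corresponding solver outputs
W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # surrogate weights (16 x 16)

# Step (c): the trained surrogate predicts on a new initial condition.
u_new = rng.normal(size=16)
pred = u_new @ W
```

Because the toy evolution here is exactly linear, the least-squares surrogate recovers it to numerical precision; a real neural PDE surrogate would only approximate the solver, which is what motivates the paper's calibrated uncertainty quantification.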
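The step-decaying schedule quoted in the Experiment Setup row (start at 0.005, halve every 100 epochs) can be written as a one-line function. This is a minimal sketch of the stated schedule, not the authors' implementation; the parameter names are illustrative.

```python
def step_decay_lr(epoch, base_lr=0.005, step=100, gamma=0.5):
    """Learning rate at a given epoch: base_lr scaled by gamma
    once per completed block of `step` epochs."""
    return base_lr * gamma ** (epoch // step)
```

For example, epochs 0-99 train at 0.005, epochs 100-199 at 0.0025, and by the final block (epochs 400-499 of the 500-epoch budget) the rate has fallen to 0.005 / 16.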