Learning Partial Differential Equations in Reproducing Kernel Hilbert Spaces

Authors: George Stepaniants

JMLR 2023

Reproducibility Variable / Result / LLM Response
Research Type: Experimental. Finally, we demonstrate our method on several linear PDEs including the Poisson, Helmholtz, Schrödinger, Fokker-Planck, and heat equation. We highlight its robustness to noise as well as its ability to generalize to new data with varying degrees of smoothness and mesh discretization without any additional training.
Researcher Affiliation: Academia. George Stepaniants, Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139
Pseudocode: No. The paper describes the methodology using mathematical equations and prose, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code: Yes. All code and experiments can be found at https://github.com/sgstepaniants/OperatorLearning.
Open Datasets: No. The input forcings f_i are simulated using a Karhunen-Loève expansion (KLE) with a squared-exponential kernel of lengthscale ℓ = 0.01, and the solutions u_i are generated with a standard finite difference solver and corrupted with 10% Gaussian noise (see Appendix A for details).
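The data-generation recipe above can be sketched in a few lines. This is a minimal illustration, not the paper's code: it samples Gaussian-process forcings via a KLE of a squared-exponential kernel on a uniform 1D grid (the grid size, domain, and truncation are assumptions).

```python
import numpy as np

def sample_kle_forcings(n_samples, n_grid=100, lengthscale=0.01, seed=0):
    """Sample GP forcings f_i via a Karhunen-Loeve expansion of a
    squared-exponential kernel on a uniform grid over [0, 1]."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_grid)
    # Squared-exponential covariance: K(x, x') = exp(-(x - x')^2 / (2 l^2))
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * lengthscale**2))
    # KLE: eigendecompose the covariance, then f = sum_k sqrt(lam_k) z_k phi_k
    lam, phi = np.linalg.eigh(K)
    lam = np.clip(lam, 0.0, None)  # guard tiny negative eigenvalues
    z = rng.standard_normal((n_samples, n_grid))
    return z @ (phi * np.sqrt(lam)).T, x

forcings, grid = sample_kle_forcings(5)
```

With ℓ = 0.01 on a 100-point grid the kernel is nearly diagonal, so the sampled forcings are rough; corrupting finite-difference solutions with 10% Gaussian noise, as the paper describes, would be a separate step.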
Dataset Splits: Yes. We train the estimators β_w, G_W stochastically on batches of size 100 with n = 100-500 training pairs (F_i, U_i) ∈ ℝ^{m_y} × ℝ^{m_x} using between 100-1000 epochs such that the solution converges... In the bottom of Figure 4 we study how our learned Green's function estimator performs on 500 new test samples when we vary the lengthscale ℓ of the boundary condition from 0.01 to 10.0 and the mesh discretization from m = 50 to 150.
Hardware Specification: No. We acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center (Reuther et al., 2018) for providing HPC resources that have contributed to the numerical experiments reported within this paper.
Software Dependencies: No. Instead we efficiently evaluate these summations on GPUs with the KeOps Python library (Charlier et al., 2021) and obtain derivatives with respect to w, W which seamlessly integrate with the PyTorch automatic differentiation library (Paszke et al., 2019).
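The kernel summations mentioned here can be written densely in plain PyTorch to show what autodiff provides; KeOps evaluates the same reduction lazily on GPU without materializing the full kernel matrix. This sketch uses an assumed Gaussian kernel and toy grid sizes, not the paper's actual operator.

```python
import torch

def gaussian_kernel_sum(x, y, f, lengthscale):
    """u_i = sum_j K(x_i, y_j) f_j with a Gaussian kernel.
    Dense version of the reduction KeOps computes lazily on GPU."""
    sq_dists = (x[:, None] - y[None, :]) ** 2
    K = torch.exp(-sq_dists / (2.0 * lengthscale**2))
    return K @ f

x = torch.linspace(0.0, 1.0, 50)
y = torch.linspace(0.0, 1.0, 60)
f = torch.randn(60, requires_grad=True)
u = gaussian_kernel_sum(x, y, f, lengthscale=0.1)
u.sum().backward()  # gradients flow through the kernel sum via autograd
```

Because the reduction is built from ordinary tensor ops, PyTorch differentiates it automatically; in the paper's setting the same mechanism yields gradients with respect to the kernel parameters w, W.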
Experiment Setup: Yes. Optimization of these weights is performed by Adam with amsgrad, a popular gradient descent method... We train the estimators β_w, G_W stochastically on batches of size 100 with n = 100-500 training pairs (F_i, U_i) ∈ ℝ^{m_y} × ℝ^{m_x} using between 100-1000 epochs such that the solution converges. The loss function minimized by gradient descent on the training data is Loss(T_{β_w, G_W}) = MSE(T_{β_w, G_W}) + λP(β_w) + ρJ(G_W)... We set the regularization parameter to λ = 10⁻⁵ except for the noise experiments where we set λ = 10⁻³ to achieve better noise robustness.
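The reported setup (Adam with amsgrad minimizing an MSE plus two penalties λP(β_w) + ρJ(G_W)) can be sketched as follows. The parametrization here is illustrative only: β as a discretized bias field, G as a weight matrix, squared-norm penalties, random placeholder data, and the learning rate and epoch count are assumptions, not values from the paper.

```python
import torch

torch.manual_seed(0)
m_x, m_y, n = 30, 30, 100
F = torch.randn(n, m_y)   # placeholder input forcings F_i
U = torch.randn(n, m_x)   # placeholder (noisy) solutions U_i

G = torch.zeros(m_x, m_y, requires_grad=True)  # discretized Green's function
beta = torch.zeros(m_x, requires_grad=True)    # discretized bias/boundary term
lam, rho = 1e-5, 1e-5                          # lambda = 1e-5 as in the paper

opt = torch.optim.Adam([G, beta], lr=1e-2, amsgrad=True)
init_loss = ((F @ G.T / m_y + beta - U) ** 2).mean().item()  # before training

for epoch in range(200):
    opt.zero_grad()
    pred = F @ G.T / m_y + beta  # quadrature-weighted kernel integral
    # Loss = MSE + lambda * P(beta) + rho * J(G), with squared-norm penalties
    loss = ((pred - U) ** 2).mean() + lam * beta.pow(2).sum() + rho * G.pow(2).sum()
    loss.backward()
    opt.step()
```

The paper trains stochastically on batches of size 100; with only 100 placeholder pairs this sketch collapses to full-batch updates, but the loss structure and optimizer match the quoted description.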