Fourier PINNs: From Strong Boundary Conditions to Adaptive Fourier Bases

Authors: Madison Cooley, Varun Shankar, Mike Kirby, Shandian Zhe

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show the advantage of our approach through a set of systematic experiments. (...) Section 6 details our numerical experiments and findings, including assessments of computational cost and accuracy compared to baseline methods.
Researcher Affiliation | Academia | Madison Cooley (EMAIL), Scientific Computing and Imaging Institute, University of Utah; Varun Shankar (EMAIL), Scientific Computing and Imaging Institute, University of Utah; Robert M. Kirby (EMAIL), Scientific Computing and Imaging Institute, University of Utah; Shandian Zhe (EMAIL), Kahlert School of Computing, University of Utah
Pseudocode | Yes | Algorithm 1: Adaptive Basis Selection Hybrid Least Squares/Gradient Descent
Open Source Code | Yes | Code is available at https://github.com/VarShankar/KernelPack/tree/sciml
Open Datasets | No | The paper uses manufactured solutions for PDEs (e.g., u(x) = sin(kx), u(x) = sin(100x), etc.) from which boundary conditions and forcing functions are derived. It does not use external, pre-existing, publicly available datasets.
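For context on the manufactured-solution setup: the forcing function and boundary data follow directly from differentiating the chosen solution. A minimal sketch, assuming the 1D Poisson problem u''(x) = f(x) (the specific PDE and domain here are illustrative assumptions, not taken from the table):

```python
import numpy as np

def manufactured_poisson_1d(k: float, x: np.ndarray):
    """Manufactured solution u(x) = sin(kx) for the 1D Poisson problem
    u''(x) = f(x); differentiating twice gives f(x) = -k**2 * sin(k*x).
    Boundary conditions are read off by evaluating u at the endpoints."""
    u = np.sin(k * x)
    f = -k**2 * np.sin(k * x)
    return u, f

x = np.linspace(0.0, 2.0 * np.pi, 5)
u, f = manufactured_poisson_1d(1.0, x)   # for k = 1, f(x) = -u(x)
```

The high-frequency case u(x) = sin(100x) mentioned in the table is obtained the same way with k = 100.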
Dataset Splits | Yes | All networks used the Tanh activation and are trained using 10K equally spaced collocation points sampled from the domain. Relative ℓ2 errors are reported on a separate testing set of 20K points. (...) Table 5: Hyper-parameter configurations for 1D Poisson and 1D Steady-State Allen-Cahn experiments. Collocation points 10K Testing points 20K
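The split described above can be sketched as follows; the domain endpoints and the equal spacing of the test grid are assumptions (the table only states 10K equally spaced collocation points and a separate 20K-point testing set):

```python
import numpy as np

a, b = 0.0, 1.0                         # domain endpoints (assumed)
x_colloc = np.linspace(a, b, 10_000)    # 10K equally spaced collocation points
x_test = np.linspace(a, b, 20_000)      # separate 20K-point testing set (spacing assumed)

def relative_l2(u_pred: np.ndarray, u_true: np.ndarray) -> float:
    """Relative l2 error as reported: ||u_pred - u_true||_2 / ||u_true||_2."""
    return np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true)
```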
Hardware Specification | Yes | Results are averaged over five random trials and were conducted on a GeForce RTX 3090 GPU with CUDA version 12.3, running on Ubuntu 20.04.6 LTS.
Software Dependencies | Yes | All experiments used 32-bit floating-point precision (float32)... Results are averaged over five random trials and were conducted on a GeForce RTX 3090 GPU with CUDA version 12.3, running on Ubuntu 20.04.6 LTS. (...) All code is implemented using the PyTorch C++ library Paszke et al. (2019).
Experiment Setup | Yes | Each model was trained for 100K iterations using the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 10^-3, decaying exponentially by 0.9 every 1000 iterations, followed by L-BFGS optimization until convergence (tolerance 10^-9). (...) Table 3 in the Appendix provides detailed hyperparameters and experimental design information.
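The quoted Adam schedule (initial learning rate 10^-3, decayed by 0.9 every 1000 iterations) has a simple closed form, sketched below. In PyTorch this corresponds to `torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=0.9)`; mapping the L-BFGS "tolerance 10^-9" onto `torch.optim.LBFGS(..., tolerance_grad=1e-9)` is an assumption, since the report does not name the exact tolerance parameter:

```python
def adam_lr(step: int, lr0: float = 1e-3, gamma: float = 0.9, every: int = 1000) -> float:
    """Learning rate at a given Adam iteration: starts at `lr0` and decays
    exponentially by `gamma` once every `every` iterations."""
    return lr0 * gamma ** (step // every)

# Over the full 100K Adam iterations the rate is decayed 100 times:
final_lr = adam_lr(100_000)   # 1e-3 * 0.9**100
```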