Enhancing Physics-Informed Neural Networks Through Feature Engineering

Authors: Shaghayegh Fazliani, Zachary Frangella, Madeleine Udell

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical results show that SAFE-NET converges faster and typically outperforms deeper networks and more complex architectures. It consistently uses fewer parameters: on average, 53% fewer than competing feature engineering methods and 70-100x fewer than state-of-the-art large-scale architectures, while achieving comparable accuracy in under 30% of the training epochs.
Researcher Affiliation | Academia | Shaghayegh Fazliani (EMAIL), Department of Mathematics, Stanford University, Stanford, CA, USA; Zachary Frangella (EMAIL), Department of Management Science & Engineering, Stanford University, Stanford, CA, USA; Madeleine Udell (EMAIL), Department of Management Science & Engineering, Stanford University, Stanford, CA, USA
Pseudocode | No | The paper describes the methodology using prose, mathematical equations, and architectural diagrams (e.g., Figure 2), but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | No | The paper contains no explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | Depending on availability, the datasets for these tasks are either taken from PDE benchmarks such as PDEBench (Takamoto et al., 2024) and PINNacle (Hao et al., 2023) or implemented directly if unavailable online. More details on each PDE and its source are provided in Appendix B.
Dataset Splits | Yes | For data sampling, we employ the standard PINN mesh-free approach with scattered collocation points distributed throughout the computational domain. Specifically, we use 20k randomly sampled collocation points within the interior domain for PDE residual evaluation, and 2k points sampled along each boundary segment for boundary condition enforcement. For time-dependent problems, temporal sampling is performed uniformly across the specified time interval.
Hardware Specification | Yes | All experiments are implemented in PyTorch 2.0.0 and executed on an NVIDIA RTX 3090 24GB GPU.
Software Dependencies | Yes | All experiments are implemented in PyTorch 2.0.0 and executed on an NVIDIA RTX 3090 24GB GPU.
Experiment Setup | Yes | For all experiments, we maintain consistent architectural and training configurations across all methods to ensure a fair comparison. Unless stated otherwise in the method-specific sections below, all PINN-based models use a fully connected neural network architecture with 4 hidden layers, each containing 50 neurons, and employ the tanh activation function. Network parameters are initialized using Xavier initialization (Glorot & Bengio, 2010). ... The weighting parameters are set as λr = 1, λic = 100, and λbc = 100 to ensure proper enforcement of initial and boundary constraints. Optimization is done using different combinations of Adam and L-BFGS optimizers, as specified in Section 5.2 as Optimization Schedule (1) and Optimization Schedule (2).
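
The mesh-free sampling scheme quoted under Dataset Splits (20k interior collocation points, 2k points per boundary segment) can be sketched in a few lines of plain Python. The unit space-time domain and the two-boundary-segment layout below are illustrative assumptions; each PDE in the paper has its own domain and boundary geometry.

```python
import random

def sample_collocation(n_interior=20_000, n_boundary=2_000,
                       x_range=(0.0, 1.0), t_range=(0.0, 1.0), seed=0):
    """Sketch of standard PINN mesh-free sampling on an assumed unit domain.

    Returns interior points for the PDE residual term and boundary points
    (one batch per spatial boundary segment) for boundary enforcement.
    """
    rng = random.Random(seed)
    # Interior collocation points, scattered uniformly over the domain.
    interior = [(rng.uniform(*x_range), rng.uniform(*t_range))
                for _ in range(n_interior)]
    # 2k points on each boundary segment; temporal coordinate sampled
    # uniformly over the time interval, as described for time-dependent PDEs.
    boundary = [(x_range[0], rng.uniform(*t_range)) for _ in range(n_boundary)]
    boundary += [(x_range[1], rng.uniform(*t_range)) for _ in range(n_boundary)]
    return interior, boundary

interior, boundary = sample_collocation()
```

With the defaults this yields 20,000 interior points and 4,000 boundary points (2,000 per segment), matching the counts quoted from the paper.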
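
The shared configuration in the Experiment Setup row (a 4-hidden-layer, 50-neuron tanh MLP with Xavier initialization, and loss weights λr = 1, λic = 100, λbc = 100) can be made concrete with a stdlib-only sketch. This is a hedged stand-in for the authors' PyTorch 2.0.0 setup, not their code; the helper names and dimensions below are assumptions.

```python
import math
import random

HIDDEN_LAYERS, WIDTH = 4, 50
LAMBDA_R, LAMBDA_IC, LAMBDA_BC = 1.0, 100.0, 100.0

def xavier_uniform(fan_in, fan_out, rng):
    """Xavier/Glorot uniform init: entries drawn from U(-a, a),
    a = sqrt(6 / (fan_in + fan_out))."""
    a = math.sqrt(6.0 / (fan_in + fan_out))
    return [[rng.uniform(-a, a) for _ in range(fan_out)] for _ in range(fan_in)]

def build_mlp(in_dim, out_dim, seed=0):
    """Xavier-initialized weight matrices for the 4x50 fully connected net."""
    rng = random.Random(seed)
    dims = [in_dim] + [WIDTH] * HIDDEN_LAYERS + [out_dim]
    return [xavier_uniform(d_in, d_out, rng)
            for d_in, d_out in zip(dims, dims[1:])]

def total_loss(residual, ic, bc):
    """Weighted PINN objective: λr*Lr + λic*Lic + λbc*Lbc."""
    return LAMBDA_R * residual + LAMBDA_IC * ic + LAMBDA_BC * bc

# e.g. a 1D time-dependent PDE: inputs (x, t), scalar output u(x, t).
weights = build_mlp(in_dim=2, out_dim=1)
```

The heavy weighting of the initial- and boundary-condition terms (100x the residual term) reflects the stated goal of strictly enforcing those constraints during training.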