Number Theoretic Accelerated Learning of Physics-Informed Neural Networks

Authors: Takashi Matsubara, Takaharu Yaguchi

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type: Experimental. "Our experiments demonstrate that GLT requires 2–7 times fewer collocation points, resulting in lower computational cost, while achieving competitive performance compared to typical sampling methods." (Experiments and Results)
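GLT (good lattice training) places collocation points on a number-theoretic rank-1 lattice instead of sampling them uniformly at random. A minimal sketch of rank-1 lattice points follows; the 2-D Fibonacci generating vector is a classical illustrative choice, not necessarily the construction or parameters used in the paper:

```python
import numpy as np

def good_lattice_points(n, z):
    """Rank-1 lattice: x_j = frac(j * z / n) for j = 0..n-1.

    n: number of collocation points; z: integer generating vector.
    In 2-D, a Fibonacci pair (n, z) = (F_k, (1, F_{k-1})) is a
    classical choice of good lattice points (illustrative here).
    """
    j = np.arange(n)[:, None]      # shape (n, 1)
    z = np.asarray(z)[None, :]     # shape (1, d)
    return (j * z / n) % 1.0       # points in the unit cube [0, 1)^d

# Example: 2-D Fibonacci lattice with n = F_16 = 987, z = (1, F_15 = 610)
pts = good_lattice_points(987, (1, 610))
print(pts.shape)
```

The resulting points are far more evenly spread over the domain than i.i.d. uniform samples, which is the intuition behind needing fewer of them.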
Researcher Affiliation: Academia. Takashi Matsubara (Hokkaido University), Takaharu Yaguchi (Kobe University).
Pseudocode: No. The paper describes the methodology using mathematical formulations and descriptive text, but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code: No. The paper states: "We modified the code from the official repository1 of Raissi et al. (2019)" and provides a link to that repository: "1https://github.com/maziarraissi/PINNs (MIT license)". It also refers to "Supplementary Material at https://openreview.net/forum?id=z9SIj-IM7tn (MIT License)" in the context of CPINNs, which they modified. However, there is no explicit statement that the *authors' own implementation* of the proposed GLT methodology is open-sourced or provided.
Open Datasets: Yes. "We obtained the datasets of the nonlinear Schrödinger (NLS) equation, Korteweg–De Vries (KdV) equation, and Allen-Cahn (AC) equation from the repository." The repository is referred to by a footnote: "1https://github.com/maziarraissi/PINNs (MIT license)".
Dataset Splits: No. The paper states: "Following Raissi et al. (2019), we evaluated the performance using the relative error, which is the normalized squared error $L(u, \hat{u}; x_j) = \left( \sum_{j=0}^{N_e-1} \| u(x_j) - \hat{u}(x_j) \|^2 \right)^{1/2} / \left( \sum_{j=0}^{N_e-1} \| u(x_j) \|^2 \right)^{1/2}$ at predefined $N_e$ collocation points $\{ x_j \}_{j=0}^{N_e-1}$." and "for 200,000 iterations and sampled a different set of collocation points at each iteration." While it describes how collocation points are used for training and evaluation, it does not provide traditional training/test/validation splits for a fixed dataset in terms of specific percentages, counts, or predefined files.
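The relative error quoted above is a standard normalized L2 norm over the evaluation collocation points. A minimal sketch (the solution values below are synthetic, purely for illustration):

```python
import numpy as np

def relative_l2_error(u_true, u_pred):
    """Relative error as quoted: sqrt(sum ||u - u_hat||^2) / sqrt(sum ||u||^2)
    over the N_e predefined evaluation collocation points."""
    num = np.sqrt(np.sum(np.abs(u_true - u_pred) ** 2))
    den = np.sqrt(np.sum(np.abs(u_true) ** 2))
    return num / den

u = np.linspace(0.0, 1.0, 101)   # hypothetical reference solution values
u_hat = u + 0.01                 # hypothetical prediction with a constant offset
err = relative_l2_error(u, u_hat)
print(round(err, 4))
```

Using `np.abs` keeps the same formula valid for complex-valued solutions such as those of the nonlinear Schrödinger equation.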
Hardware Specification: Yes. "All experiments were conducted using Python v3.7.16 and TensorFlow v1.15.5 (Abadi et al. 2016) on servers with Intel Xeon Platinum 8368." "All experiments were conducted using Python v3.9.16 and PyTorch v1.13.1 (Paszke et al. 2017) on servers with Intel Xeon Platinum 8368 and NVIDIA A100."
Software Dependencies: Yes. "All experiments were conducted using Python v3.7.16 and TensorFlow v1.15.5 (Abadi et al. 2016) on servers with Intel Xeon Platinum 8368." "All experiments were conducted using Python v3.9.16 and PyTorch v1.13.1 (Paszke et al. 2017) on servers with Intel Xeon Platinum 8368 and NVIDIA A100."
Experiment Setup: Yes. "For Poisson's equation with s = 2, which gives the exact solutions, we followed the original learning strategy using the L-BFGS-B method preceded by the Adam optimizer (Kingma and Ba 2015) for 50,000 iterations to ensure precise convergence. For other datasets, which contain the numerical solutions, we trained PINNs using the Adam optimizer with cosine decay of a single cycle to zero (Loshchilov and Hutter 2017) for 200,000 iterations and sampled a different set of collocation points at each iteration."
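The quoted schedule (a single-cycle cosine decay of the learning rate to zero, with a fresh set of collocation points drawn each iteration) can be sketched in plain Python. The decay formula follows Loshchilov & Hutter (2017); the base learning rate, domain bounds, and uniform resampling here are illustrative assumptions, not values stated in the paper:

```python
import math
import random

def cosine_decay_lr(step, total_steps, base_lr=1e-3):
    """Single-cycle cosine decay to zero:
    lr(t) = base_lr * 0.5 * (1 + cos(pi * t / T))."""
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

def sample_collocation(n, bounds):
    """Draw a fresh uniform batch of collocation points; this is the kind
    of per-iteration resampling baseline that GLT is compared against."""
    return [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]

T = 200_000  # iteration count quoted in the setup
for it in (0, T // 2, T):
    print(it, cosine_decay_lr(it, T))
```

In a PyTorch training loop this schedule corresponds to `torch.optim.lr_scheduler.CosineAnnealingLR` with `T_max` set to the total iteration count and `eta_min=0`.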