KP-PINNs: Kernel Packet Accelerated Physics Informed Neural Networks

Authors: Siyuan Yang, Cheng Song, Zhilu Lai, Wenjia Wang

IJCAI 2025

Reproducibility assessment (Variable: Result — LLM response):
Research Type: Experimental. Numerical experiments illustrate that KP-PINNs can solve differential equations effectively and efficiently, and the framework provides a promising direction for improving the stability and accuracy of PINNs-based solvers in scientific computing. Section 5 presents four numerical examples demonstrating the performance of the KP-PINNs algorithm.
Researcher Affiliation: Academia. Siyuan Yang (1,2), Cheng Song (1), Zhilu Lai (2,3), and Wenjia Wang (1). (1) Data Science and Analytics, The Hong Kong University of Science and Technology (Guangzhou); (2) Internet of Things, The Hong Kong University of Science and Technology (Guangzhou); (3) Department of CEE, The Hong Kong University of Science and Technology. EMAIL, EMAIL, EMAIL
Pseudocode: Yes. Algorithm 1 (KP-PINNs)
Input: PDE system (1), number of iterations n_iter, known points
Output: approximated solution û(x) and equation parameters (if inverse problem)
1: Initialize neural network parameters.
2: while i < n_iter do
3:   Forward pass and compute the predicted value.
4:   Compute derivatives using automatic differentiation.
5:   Compute A and φ(x) by (12) and (11).
6:   Compute the loss of KP-PINNs based on (23).
7:   Update parameters.
8: end while
Open Source Code: Yes. The implementation details and source code are available at: https://github.com/SiyuanYang-sy/KP-PINNs.
Open Datasets: No. The paper addresses solving differential equations and inverse problems using PINNs. It generates training points from the differential equations and boundary conditions rather than using external, named public datasets.
Dataset Splits: No. For the Stiff equation: 'There is only one initial point so N_B = 1, and N_L = 50 in (7) in the forward problem. In the inverse problem, we assume that λ is unknown and set N_B = N_L = 50. In both cases N_test is equal to 2000.' Similar descriptions for the other equations specify the number of points used for each component (boundary conditions, loss calculation, testing) but no explicit train/validation/test splits of a static dataset.
Hardware Specification: No. The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies: No. The paper refers to deep learning, neural networks, the Adam and L-BFGS optimizers, and automatic differentiation, but it does not name specific software with version numbers (e.g., Python, PyTorch, TensorFlow, or CUDA versions) needed to replicate the experiments.
Experiment Setup: No. The paper specifies the number of points used for boundary conditions (N_B), loss calculation (N_L), and testing (N_test) for each experiment, and it gives the parameters of the differential equations being solved (e.g., λ = -2.0 and µ = 2.0 for the Stiff equation). However, it omits key deep learning hyperparameters such as the learning rate, batch size, detailed optimizer configurations (beyond naming Adam or L-BFGS), network architecture specifics (number of layers, neurons, activation functions), and training schedules.
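The loop in Algorithm 1 can be sketched on a toy problem. This is a heavily simplified illustration, not the paper's construction: the "network" is a cubic polynomial so the forward pass and its derivative (standing in for automatic differentiation) are closed-form, and the kernel-packet matrix A of equations (11)-(12) is replaced by a generic Gaussian Gram matrix that weights the residual, mimicking the quadratic-form loss of equation (23). The ODE u' = λu with u(0) = 1 and λ = -2.0 echoes the Stiff-equation setting quoted above; the domain [0, 1] and all numerical choices are assumptions.

```python
import numpy as np

# Toy stand-in for Algorithm 1 on u'(x) = lam * u(x), u(0) = 1
# (exact solution exp(lam * x)). The model is a cubic polynomial
# u_theta(x) = theta @ [1, x, x^2, x^3], so derivatives and loss
# gradients are available in closed form.
lam = -2.0
x = np.linspace(0.0, 1.0, 50)              # N_L = 50 collocation points
Phi = np.vander(x, 4, increasing=True)     # features [1, x, x^2, x^3]
dPhi = np.column_stack([np.zeros_like(x), np.ones_like(x), 2 * x, 3 * x**2])

# Placeholder for the kernel-derived matrix A of eq. (12): here just a
# Gaussian Gram matrix (an assumption, not the paper's kernel packets).
A = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1**2)

theta = np.zeros(4)
lr, n_iter = 2e-4, 20000
for _ in range(n_iter):                    # while i < n_iter do
    u = Phi @ theta                        # forward pass
    du = dPhi @ theta                      # derivative of the model
    r = du - lam * u                       # PDE residual
    # Gradient of the loss r^T A r / N + (u(0) - 1)^2 (A is symmetric).
    grad = 2 * (dPhi - lam * Phi).T @ (A @ r) / len(x)
    grad[0] += 2 * (theta[0] - 1.0)        # initial-condition term
    theta -= lr * grad                     # parameter update

r = dPhi @ theta - lam * (Phi @ theta)
loss = r @ A @ r / len(x) + (theta[0] - 1.0) ** 2
print(f"final loss: {loss:.3f}")
```

In the paper the polynomial is a neural network, the derivative comes from automatic differentiation, and A and φ(x) come from the kernel-packet construction; only the loop skeleton is faithful here.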