CoPINN: Cognitive Physics-Informed Neural Networks

Authors: Siyuan Duan, Wenyuan Wu, Peng Hu, Zhenwen Ren, Dezhong Peng, Yuan Sun

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our CoPINN achieves state-of-the-art performance, particularly in significantly reducing prediction errors in stubborn regions. The code is available at this repository: https://github.com/siyuancncd/CoPINN.
Researcher Affiliation | Academia | (1) College of Computer Science, Sichuan University, Chengdu, China; (2) Southwest University of Science and Technology, Mianyang, China; (3) National Key Laboratory of Fundamental Algorithms and Models for Engineering Numerical Simulation, Sichuan University, Chengdu, China. Correspondence to: Yuan Sun <sunyuan EMAIL>.
Pseudocode | Yes | Appendix A. CoPINN Training Algorithm; Algorithm 1: CoPINN algorithm.
Open Source Code | Yes | The code is available at this repository: https://github.com/siyuancncd/CoPINN.
Open Datasets | No | To show the performance of solving PDEs, we carry out experiments on six popular public datasets, including the 1D Convection Equation, 3D (i.e., Diffusion Equation, Helmholtz Equation, (2+1)-d Klein-Gordon Equation, and Flow Mixing Problem) and 4D (i.e., (3+1)-d Klein-Gordon Equation) PDE systems. During training, we perform all experiments with different numbers of collocation points Nc, i.e., 16³, 32³, 64³, 128³, and 256³. Due to space limitations, the experimental results for the 1D Convection Equation and (2+1)-d Flow Mixing Problem are shown in Appendix C.4 and Appendix C.5.
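The collocation-point counts above (16³ through 256³) correspond to uniform grids over a 3D domain. A minimal sketch of how such a grid could be built in JAX, the framework the paper reports using; the function name and unit-cube domain are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch (not from the paper): build Nc = n**3 collocation
# points as a uniform grid over an assumed [0, 1]^3 domain.
import jax.numpy as jnp

def make_collocation_grid(n, lo=0.0, hi=1.0):
    """Return an (n**3, 3) array of 3D collocation points."""
    axis = jnp.linspace(lo, hi, n)
    X, Y, Z = jnp.meshgrid(axis, axis, axis, indexing="ij")
    return jnp.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=-1)

pts = make_collocation_grid(16)  # smallest setting: Nc = 16**3 = 4096 points
```

Sweeping `n` over 16, 32, 64, 128, 256 reproduces the five Nc settings listed above.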
Dataset Splits | No | Following SPINN (Cho et al., 2023), we divide the dataset into a training set and a test set. The training set is used to train the neural network, while the test set is used to evaluate the prediction ability of the model.
Hardware Specification | Yes | All experiments are implemented in JAX/Flax and trained on a single NVIDIA 3090 GPU with 24GB of memory.
Software Dependencies | No | All experiments are implemented in JAX/Flax and trained on a single NVIDIA 3090 GPU with 24GB of memory.
Experiment Setup | Yes | For our CoPINN, the network architecture consists of five hidden layers, each with 128 hidden units. We apply the modified MLP introduced in (Wang et al., 2021a) to CoPINN. On all datasets, we exploit the Adam optimizer (Kingma & Ba, 2014) to train our model with a learning rate of 1e-3 for 50,000 epochs. We use the tanh activation function throughout CoPINN. Based on the parameter analysis of our method, we set β = 0.01 on the (2+1)-d Klein-Gordon dataset, and β = 0.001 on the Helmholtz, Diffusion, and (3+1)-d Klein-Gordon datasets. To ensure a fair comparison, we set the balance parameter of the loss terms to be equal, i.e., λ = 1 in Equation (5).