Sub-Sequential Physics-Informed Learning with State Space Model

Authors: Chenhui Xu, Dancheng Liu, Yuting Hu, Jiajie Li, Ruiyang Qin, Qingxiao Zheng, Jinjun Xiong

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "7. Experiments. We evaluate the performance of PINNMamba on three standard PDE benchmarks: convection, wave, and reaction equations... We compare PINNMamba with four baseline models... We also evaluate on the PINNacle benchmark... We train PINNMamba and all the baseline models for 1000 epochs with the L-BFGS optimizer... To evaluate the performance of the models, we take relative Mean Absolute Error (rMAE, a.k.a. ℓ1 relative error) and relative Root Mean Square Error (rRMSE, a.k.a. ℓ2 relative error)."
Researcher Affiliation | Academia | "1University at Buffalo, SUNY; 2University of Notre Dame. Correspondence to: Chenhui Xu <EMAIL>, Jinjun Xiong <EMAIL>."
Pseudocode | No | The paper describes its methodology using text, mathematical equations (e.g., Eq. 25-27 for the PINNMamba block), and diagrams (Fig. 4 for the PINNMamba overview). However, it does not include a distinct section or figure explicitly labeled "Pseudocode" or "Algorithm" with structured, code-like steps.
Open Source Code | Yes | "Our code is available at https://github.com/miniHuiHui/PINNMamba." "Our code and weights are available at https://github.com/miniHuiHui/PINNMamba."
Open Datasets | No | "We evaluate the performance of PINNMamba on three standard PDE benchmarks: convection, wave, and reaction equations... We also evaluate on the PINNacle benchmark (Hao et al., 2024) and the Navier-Stokes equation (Raissi et al., 2019)... The 2-dimensional Navier-Stokes equation doesn't have an analytical solution that can be described by existing mathematical symbols, so we take Raissi et al. (2019)'s finite-element numerical simulation as ground truth."
Dataset Splits | No | "For fair comparison, we sample 101 × 101 collocation points with uniform grid sampling, following previous work (Zhao et al., 2024; Wu et al., 2024). ... where N is the number of test points, u(x, t) is the ground-truth solution, and û(x, t) is the model's prediction."
Hardware Specification | Yes | "All experiments are implemented in PyTorch 2.1.1 and trained on an NVIDIA H100 GPU. More training details are in Appendix D." "...even on the most advanced NVIDIA H100 GPU."
Software Dependencies | Yes | "All experiments are implemented in PyTorch 2.1.1 and trained on an NVIDIA H100 GPU."
Experiment Setup | Yes | "We train PINNMamba and all the baseline models for 1000 epochs with the L-BFGS optimizer (Liu & Nocedal, 1989). We set the sub-sequence length to 7 for PINNMamba, and keep the original pseudo-sequence setup for PINNsFormer. The weights of the loss terms [λF, λI, λB] are set to [1, 1, 10] for all three equations, as we find that strengthening the boundary conditions can lead to better convergence. λalig is set to 1000 for the convection and reaction equations, and auto-adapted by λF for the wave equation. All experiments are implemented in PyTorch 2.1.1 and trained on an NVIDIA H100 GPU. More training details are in Appendix D. Our code and weights are available at https://github.com/miniHuiHui/PINNMamba."
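The evaluation metrics quoted above, rMAE (ℓ1 relative error) and rRMSE (ℓ2 relative error), follow the standard definitions used in the PINN literature. A minimal sketch of how they are typically computed (the toy arrays are illustrative, not the paper's data):

```python
import numpy as np

def r_mae(u_true, u_pred):
    # relative Mean Absolute Error (l1 relative error):
    # sum |u_hat - u| / sum |u| over the N test points
    return np.sum(np.abs(u_pred - u_true)) / np.sum(np.abs(u_true))

def r_rmse(u_true, u_pred):
    # relative Root Mean Square Error (l2 relative error):
    # sqrt( sum (u_hat - u)^2 / sum u^2 )
    return np.sqrt(np.sum((u_pred - u_true) ** 2) / np.sum(u_true ** 2))

u = np.array([1.0, 2.0, 3.0])   # toy ground-truth values u(x, t)
v = np.array([1.0, 2.0, 3.3])   # toy model predictions u_hat(x, t)
print(r_mae(u, v))   # 0.3 / 6 = 0.05
print(r_rmse(u, v))  # sqrt(0.09 / 14)
```

Because both metrics normalize by the magnitude of the ground-truth solution, they are comparable across PDEs whose solutions live on different scales.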
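The "Dataset Splits" row notes that the paper samples 101 × 101 collocation points on a uniform grid rather than using a train/validation/test split. A sketch of such sampling is below; the domain bounds [0, 2π] × [0, 1] are an assumption for illustration, since the quoted text only specifies the grid resolution:

```python
import numpy as np

# Assumed spatio-temporal domain; the paper only states the 101 x 101 grid.
x = np.linspace(0.0, 2.0 * np.pi, 101)  # 101 uniformly spaced spatial points
t = np.linspace(0.0, 1.0, 101)          # 101 uniformly spaced time points

# Cartesian product of the two axes gives the collocation grid.
X, T = np.meshgrid(x, t, indexing="ij")
points = np.stack([X.ravel(), T.ravel()], axis=-1)
print(points.shape)  # (10201, 2): 101 * 101 points, each (x, t)
```

Uniform grid sampling makes the comparison deterministic across methods, which is why the paper follows the same protocol as the cited prior work.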
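The experiment-setup row describes a weighted multi-term objective ([λF, λI, λB] = [1, 1, 10]) optimized with L-BFGS. A hedged sketch of that training pattern in PyTorch is below. The tiny MLP, the convection-like residual u_t + u_x, and the initial/boundary conditions are placeholders for illustration only, not PINNMamba or the paper's exact losses:

```python
import torch

torch.manual_seed(0)

# Placeholder network standing in for the actual PINNMamba model.
model = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

# Loss-term weights from the paper's setup: [lambda_F, lambda_I, lambda_B].
lam_F, lam_I, lam_B = 1.0, 1.0, 10.0

xt = torch.rand(256, 2, requires_grad=True)                      # interior (x, t) points
xt0 = torch.cat([torch.rand(64, 1), torch.zeros(64, 1)], dim=1)  # t = 0 (initial)
xtb = torch.cat([torch.zeros(64, 1), torch.rand(64, 1)], dim=1)  # x = 0 (boundary)

opt = torch.optim.LBFGS(model.parameters(), max_iter=20)

def closure():
    # L-BFGS re-evaluates the objective, so the loss is built inside a closure.
    opt.zero_grad()
    u = model(xt)
    # PDE residual placeholder: u_t + u_x (a convection-like operator).
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    res_F = grads[:, 1] + grads[:, 0]
    res_I = model(xt0).squeeze(-1) - torch.sin(xt0[:, 0])  # assumed initial condition
    res_B = model(xtb).squeeze(-1)                          # assumed zero boundary value
    loss = (lam_F * res_F.pow(2).mean()
            + lam_I * res_I.pow(2).mean()
            + lam_B * res_B.pow(2).mean())
    loss.backward()
    return loss

loss = opt.step(closure)
```

The heavier weight on the boundary term mirrors the paper's observation that strengthening the boundary conditions improves convergence; λalig (the sub-sequence alignment weight) is omitted here since its residual is specific to PINNMamba's architecture.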