PEINR: A Physics-enhanced Implicit Neural Representation for High-Fidelity Flow Field Reconstruction

Authors: Liming Shen, Liang Deng, Chongke Bi, Yu Wang, Xinhai Chen, Yueqing Wang, Jie Liu

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Qualitative and quantitative experiments demonstrate that PEINR outperforms state-of-the-art INR-based methods in reconstruction quality. Code and dataset are released here.
Researcher Affiliation | Academia | (1) Laboratory of Digitizing Software for Frontier Equipment, National University of Defense Technology, Changsha, China; (2) National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha, China; (3) Computational Aerodynamics Institute, China Aerodynamics Research and Development Center, Mianyang, China; (4) College of Intelligence and Computing, Tianjin University, Tianjin, China. Correspondence to: Liang Deng <EMAIL>.
Pseudocode | No | The paper describes the methodology using prose and diagrams (Figure 2) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Qualitative and quantitative experiments demonstrate that PEINR outperforms state-of-the-art INR-based methods in reconstruction quality. Code and dataset are released here.
Open Datasets | Yes | Tackling these issues, we first introduce HFR-Bench, a 5.4 TB public large-scale CFD dataset with 33,600 unsteady 2D and 3D vector fields for reconstructing high-fidelity flow fields.
Dataset Splits | Yes | For uniform Cartesian meshes, flow fields from timesteps 460 to 480 (out of 500) are used for training, with the final 20 steps as test samples. Results for uniform meshes (FFS, RM, RT, SV) are shown for step 500. For non-uniform meshes, training samples are taken from steps 400 to 500 at intervals of 5, with the remaining non-multiples of 5 used for testing.
Hardware Specification | Yes | All experiments are conducted on a single NVIDIA A100 80GB GPU.
Software Dependencies | Yes | Our model is implemented with PyTorch 1.10 (Paszke et al., 2019), and all experiments are conducted on a single NVIDIA A100 80GB GPU.
Experiment Setup | Yes | Our method and all the baseline methods are trained with the MSE loss for 2000 epochs, and every method including ours converges within 1000 epochs. The learning rate is set to 1e-5 and is decreased if the loss does not improve for 20 epochs; we adopt the AdamW optimizer. In spatial discretization, for 2D cases we consider the nearest 4 points, and for 3D cases the nearest 9 points. In temporal nonlinear encoding, we set σ to 10, with timesteps normalized to [0, 1]. During design, we set the number of residual layers of a ResuMLP to 10, and the maximum number of neurons in a ResuMLP is 64.
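The reported dataset splits can be written out explicitly. The sketch below is illustrative only; the variable names are not from the paper, and range endpoints are inferred from the stated step ranges.

```python
# Uniform Cartesian meshes: train on timesteps 460-480 (of 500),
# test on the final 20 steps (481-500).
uniform_train = list(range(460, 481))
uniform_test = list(range(481, 501))

# Non-uniform meshes: train on every 5th step from 400 to 500,
# test on the remaining (non-multiple-of-5) steps in that range.
nonuniform_train = list(range(400, 501, 5))
nonuniform_test = [t for t in range(400, 501) if t % 5 != 0]
```

This yields 21 training and 20 test steps for the uniform case, and 21 training steps for the non-uniform case.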