PIG: Physics-Informed Gaussians as Adaptive Parametric Mesh Representations
Authors: Namgyu Kang, Jaemin Oh, Youngjoon Hong, Eunbyung Park
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show the competitive performance of our model across various PDEs, demonstrating its potential as a robust tool for solving complex PDEs. We have tested the proposed method on an extensive set of challenging PDEs (Raissi et al., 2019; Wang et al., 2021; Kang et al., 2023; Wang et al., 2023; Cho et al., 2024; Wang et al., 2024b). The experimental results show that the proposed PIG achieved competitive accuracy compared to the existing methods that use large MLPs or high-resolution parametric grids. We conducted extensive numerical experiments on various challenging PDEs, including Allen-Cahn, Helmholtz, Nonlinear Diffusion, Flow Mixing, and Klein-Gordon equations (for more experiments, please refer to the Appendix). |
| Researcher Affiliation | Academia | Namgyu Kang Department of Artificial Intelligence Yonsei University Jaemin Oh Department of Mathematical Sciences KAIST Youngjoon Hong Department of Mathematical Sciences Seoul National University Eunbyung Park Department of Artificial Intelligence Yonsei University |
| Pseudocode | No | The paper describes the methodology in Section 3 and provides a diagram in Figure 3, titled "PIG as a neural network", which illustrates the architecture. However, it does not contain a distinct section or figure explicitly labeled "Pseudocode" or "Algorithm" with structured, code-like steps for a procedure. |
| Open Source Code | Yes | Our project page is available at https://namgyukang.github.io/Physics-Informed-Gaussians/. We already submitted the codes and command lines to reproduce the part of the results in Table 1 as supplementary materials. The code and datasets will be made publicly available upon publication, allowing others to validate our findings and build upon our work. |
| Open Datasets | Yes | REPRODUCIBILITY: We are committed to ensuring the reproducibility of our research. All experimental procedures, data sources, and algorithms used in this study are clearly documented in the paper. We already submitted the codes and command lines to reproduce the part of the results in Table 1 as supplementary materials. The code and datasets will be made publicly available upon publication, allowing others to validate our findings and build upon our work. |
| Dataset Splits | No | The paper focuses on solving Partial Differential Equations (PDEs) using Physics-Informed Neural Networks. The 'data' in this context typically refers to the problem definition itself, boundary/initial conditions, and collocation points for training, rather than a pre-existing dataset that is split into training, validation, and test sets in a conventional machine learning sense. The paper mentions using 'collocation points' and evaluating against a 'reference solution,' but it does not specify explicit train/validation/test dataset splits with percentages or sample counts. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It mentions 'GPU memory' in the limitations section and 'computational costs per iteration' but without naming any specific hardware. |
| Software Dependencies | No | The paper mentions several software components and optimizers, such as "Adam optimizer (Kingma & Ba, 2014)", "L-BFGS optimizer (Liu & Nocedal, 1989)", "Chebfun (Driscoll et al., 2014)", "TensorFlow (2015)", "JAX (2018)", and "PyTorch (2019)". However, it does not specify the exact version numbers of these software libraries used for the experiments, which is required for reproducible software dependency information. |
| Experiment Setup | Yes | We used the Adam optimizer (Kingma & Ba, 2014) for all equations except for the Helmholtz equation, in which the L-BFGS optimizer (Liu & Nocedal, 1989) was applied for a fair comparison to the baseline method PIXEL. For computational efficiency, we considered a diagonal covariance matrix Σ = diag(σ₁², ..., σ_d²), and we will discuss non-diagonal cases in Section 4.3.3. We used N = 4000 Gaussians for training and a diagonal covariance matrix for parameter efficiency, where the diagonal elements of the initial Σ were set to a constant value of 0.025. The µᵢ were uniformly initialized following Uniform[0, 2]². We used a shallow MLP with one hidden layer of 16 hidden units, and the dimension of the Gaussian feature was k = 1. |
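The setup quoted above pins down most of the hyperparameters. As a minimal sketch of what that initialization and forward pass might look like, assuming diagonal Gaussians evaluated at collocation points whose features feed a shallow MLP (variable names and the exact use of the 0.025 value as a per-axis scale are illustrative assumptions, not the authors' released code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameters quoted from the paper's experiment setup.
N, d, k = 4000, 2, 1     # number of Gaussians, spatial dim, Gaussian feature dim
hidden = 16              # one hidden layer with 16 units

# Per-Gaussian parameters: centers ~ Uniform[0, 2]^2, constant initial
# diagonal covariance entries of 0.025 (treated here as per-axis scales).
mu = rng.uniform(0.0, 2.0, size=(N, d))
sigma = np.full((N, d), 0.025)
features = rng.normal(size=(N, k))   # learnable per-Gaussian feature vectors

# Shallow MLP head: k -> 16 -> 1 (He-style init, an assumption).
W1 = rng.normal(size=(k, hidden)) * np.sqrt(2.0 / k)
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, 1)) * np.sqrt(2.0 / hidden)
b2 = np.zeros(1)

def gaussian_weights(x):
    """Evaluate all N diagonal Gaussians at a batch of points x of shape (B, d)."""
    diff = x[:, None, :] - mu[None, :, :]                        # (B, N, d)
    return np.exp(-0.5 * np.sum((diff / sigma) ** 2, axis=-1))   # (B, N)

def forward(x):
    """Aggregate Gaussian-weighted features, then apply the shallow MLP."""
    w = gaussian_weights(x)      # (B, N)
    feat = w @ features          # (B, k)
    h = np.tanh(feat @ W1 + b1)  # (B, hidden)
    return h @ W2 + b2           # (B, 1)

# Evaluate at a small batch of collocation points in [0, 2]^2.
out = forward(rng.uniform(0.0, 2.0, size=(8, d)))
print(out.shape)  # (8, 1)
```

In a PINN-style training loop this output would be differentiated with respect to the inputs to form the PDE residual loss; that step is omitted here since plain NumPy does not provide automatic differentiation (the paper's experiments use frameworks such as JAX that do).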