Geometry-Informed Neural Networks
Authors: Arturs Berzins, Andreas Radler, Eric Volkmann, Sebastian Sanokowski, Sepp Hochreiter, Johannes Brandstetter
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, we apply GINNs to several problems spanning physics, geometry, and engineering design, showing control over geometrical and topological properties, such as surface smoothness or the number of holes. These results demonstrate the potential of training shape-generative models without data, paving the way for new generative design approaches without large datasets. |
| Researcher Affiliation | Collaboration | 1LIT AI Lab, Institute for Machine Learning, JKU Linz, Austria [...] 3Emmi AI GmbH, Linz, Austria. Correspondence to: Arturs Berzins <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 shows the full algorithm used to train for T epochs and specifies the hyperparameters we used. |
| Open Source Code | Yes | Code is available at https://github.com/ml-jku/GINNs-Geometry-informed-Neural-Networks |
| Open Datasets | Yes | A unique aspect of GINN is the data-free shape-generative aspect. Comparison to classical TO is trivial since it is inherently limited to a single solution with null diversity. Instead, we use the simJEB (Whalen et al., 2021) dataset to give an intuitive estimate of the diversity of the produced results. The dataset is due to the design challenge on a related problem described in Section 4.2. The shapes in the dataset were produced by human experts, many of whom also used topology optimization. To compute the diversity metric, we sample 196 clean shapes from the simJEB dataset, producing a diversity of 0.099. |
| Dataset Splits | No | The core methodology of GINNs is designed for training shape-generative neural fields without data, hence no traditional dataset splits are applicable for the main approach. For the baseline comparison, the paper states: "we sample 196 clean shapes from the simJEB dataset," which describes a sampling strategy but does not specify training/test/validation splits for this dataset. |
| Hardware Specification | Yes | We run all experiments on a single GPU (one of NVIDIA RTX2080Ti, RTX3090, A40, P40, or A100-SXM). |
| Software Dependencies | No | The paper mentions "RMSprop in PyTorch" and "CRipser (Kaji et al., 2020)" but does not provide specific version numbers for these software components, which is required for reproducibility. |
| Experiment Setup | Yes | Appendix B.1 describes the Reaction-Diffusion setup: "They have two hidden layers of widths 256 and 128... we use ω0 = 3.0 to initialize SIREN... We compute the gradients and the Laplacian using finite differences on a 64 × 64 grid... The generative PINNs are trained with Adam for 20000 epochs with a 10^-3 learning rate...". Appendix B.6 details the Jet Engine Bracket setup: "We train the WIRE with 3 × 128 hidden layers with Adam (default settings) and a learning rate scheduler 0.5^(t/10000) for t = 10000 iterations for the single shape and t = 50000 for multiple shapes." |
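The quoted Reaction-Diffusion setup computes the Laplacian "using finite differences on a 64 × 64 grid". A minimal sketch of that ingredient, assuming the standard 5-point stencil on a uniform grid (the paper does not specify the stencil, so this is an illustrative choice, not the authors' exact implementation):

```python
import numpy as np

def laplacian_fd(u, h):
    """5-point finite-difference Laplacian on a uniform grid.

    Only interior points are filled; boundary rows/columns stay zero.
    """
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (
        u[2:, 1:-1] + u[:-2, 1:-1]      # neighbors along the first axis
        + u[1:-1, 2:] + u[1:-1, :-2]    # neighbors along the second axis
        - 4.0 * u[1:-1, 1:-1]
    ) / h**2
    return lap

# Sanity check on a 64 x 64 grid: for u = x^2 + y^2 the exact Laplacian is 4,
# and second differences are exact for quadratics.
n = 64
xs = np.linspace(0.0, 1.0, n)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
u = X**2 + Y**2
lap = laplacian_fd(u, h)
print(np.allclose(lap[1:-1, 1:-1], 4.0))  # True
```

In the paper's pipeline this operator would be applied to the network's output field when assembling the reaction-diffusion PDE residual; here it is shown standalone on an analytic test function.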