Enabling Automatic Differentiation with Mollified Graph Neural Operators
Authors: Ryan Y. Lin, Julius Berner, Valentin Duruisseaux, David Pitt, Daniel Leibovici, Jean Kossaifi, Kamyar Azizzadenesheli, Anima Anandkumar
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test our approach on Burgers and Navier-Stokes equations with regular grids, and on nonlinear Poisson and hyperelasticity equations with varying domain geometries. Figure 2 shows example solutions, highlighting the complexity of the geometries considered. Physics losses prove critical, emphasizing the need for accurate and efficient derivative computation. (Section 4: Numerical Experiments) |
| Researcher Affiliation | Collaboration | Ryan Y. Lin1, Julius Berner2, Valentin Duruisseaux1, David Pitt1, Daniel Leibovici2, Jean Kossaifi2, Kamyar Azizzadenesheli2, Anima Anandkumar1 1Caltech 2NVIDIA |
| Pseudocode | Yes | An example of simplified pseudocode for the mGNO layer is provided in Appendix D (mGNO Layer Pseudocode). |
| Open Source Code | Yes | The PyTorch code used for the experiments presented in this paper has been added to the open-source neural operator library from Kossaifi et al. (2025) at https://github.com/neuraloperator/neuraloperator. |
| Open Datasets | Yes | We focus on the dataset of Li et al. (2021a;b) consisting of 800 instances of the Burgers equation... We use the same dataset as Li et al. (2022), where the shape parameterization of the airfoil follows the design element approach (Farin, 2014). |
| Dataset Splits | Yes | We used 7000 samples for training and 3000 samples for testing. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) are provided in the paper for running the experiments. |
| Software Dependencies | No | The paper mentions that the models were 'trained in PyTorch' but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | For this experiment, the 2D FNO has 4 layers, each with 26 hidden channels and (24, 24) Fourier modes, and we used a Tucker factorization with rank 0.6 of the weights. The mGNO uses the half_cos weight function with a radius of 0.1, and a 2-layer MLP with [64, 64] nodes. The resulting model has 1,019,569 trainable parameters, and was trained in PyTorch for 10,000 epochs using the Adam optimizer with learning rate 0.002 and weight decay 1e-6, and the ReduceLROnPlateau scheduler with factor 0.9 and patience 50. |
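The setup above names a `half_cos` weight function with radius 0.1 for the mGNO layer. The defining idea of mollified GNOs is to replace the hard indicator of the neighborhood ball with a smooth cutoff so the kernel integral is differentiable in the query coordinates. A minimal sketch of one plausible half-cosine cutoff follows; the exact functional form used in the paper's `half_cos` is an assumption here, not taken from the report.

```python
import math

def half_cos(d, radius=0.1):
    """Illustrative smooth half-cosine cutoff (assumed form, not the
    paper's exact definition): equals 1 at d = 0 and decays
    continuously to 0 at d = radius, unlike a hard indicator that
    jumps from 1 to 0 at the ball boundary."""
    if d >= radius:
        return 0.0
    return math.cos(math.pi * d / (2.0 * radius))
```

Because the weight varies continuously with the distance `d`, automatic differentiation through the neighborhood aggregation yields well-defined gradients with respect to point positions, which is what a hard cutoff forfeits.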
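The optimizer and scheduler settings quoted in the table map directly onto standard PyTorch APIs. The sketch below wires them up with the reported hyperparameters (Adam, lr 0.002, weight decay 1e-6, ReduceLROnPlateau with factor 0.9 and patience 50); the `model` is a placeholder, since the report does not include the authors' training script.

```python
import torch

# Placeholder model; the paper trains an FNO/mGNO, not a linear layer.
model = torch.nn.Linear(2, 2)

# Hyperparameters as reported: Adam with lr 0.002 and weight decay 1e-6.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-3, weight_decay=1e-6)

# ReduceLROnPlateau with factor 0.9 and patience 50: after the validation
# loss stops improving for 50 epochs, the learning rate is multiplied by 0.9.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.9, patience=50
)

# Skeleton of the epoch loop (the real loop would compute a data/physics loss).
for epoch in range(60):
    val_loss = 1.0  # stand-in for an actual validation metric
    scheduler.step(val_loss)
```

With a flat validation loss, the scheduler triggers its first reduction once the patience window is exhausted, so the learning rate drops below its initial value within the 60 sketched epochs.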