Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Multilevel CNNs for Parametric PDEs

Authors: Cosmas Heiß, Ingo Gühring, Martin Eigel

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical Results: We conduct an in-depth numerical study of our method and, more generally, UNet-based approaches on common benchmark problems for parametric PDEs in UQ. Strikingly, the tested methods show significant improvements over the state-of-the-art NN architectures for solving parametric PDEs.
Researcher Affiliation | Academia | Cosmas Heiß EMAIL Department of Mathematics, Technical University of Berlin, Berlin, Germany; Ingo Gühring EMAIL Machine Learning Group, Technical University of Berlin, Berlin, Germany; Martin Eigel EMAIL Weierstraß Institute, Berlin, Germany. All listed institutions (Technical University of Berlin and Weierstraß Institute) are academic or public research institutions.
Pseudocode | Yes | Algorithm 3.1 (V-cycle function)
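For context on the pseudocode finding: Algorithm 3.1 in the paper is a V-cycle function, adapting the classical multigrid V-cycle to a CNN architecture. The sketch below is NOT the paper's algorithm; it is a minimal NumPy implementation of a standard geometric-multigrid V-cycle for the 1D Poisson problem (-u'' = f with zero Dirichlet boundary conditions), which is the textbook scheme the paper's multilevel construction mirrors. All function names and parameter choices here are illustrative.

```python
import numpy as np

def apply_A(u, h):
    """Matvec with the 1D Poisson stencil (-u'' with zero Dirichlet BCs)."""
    Au = 2.0 * u
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return Au / h**2

def smooth(u, f, h, sweeps=2, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing (the diagonal of A is 2/h^2)."""
    for _ in range(sweeps):
        u = u + w * (h**2 / 2.0) * (f - apply_A(u, h))
    return u

def restrict(r):
    """Full-weighting restriction; fine grid has n = 2*nc + 1 points."""
    return 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(ec, n):
    """Linear-interpolation prolongation back to n fine-grid points."""
    ec_pad = np.concatenate(([0.0], ec, [0.0]))  # zero boundary values
    e = np.zeros(n)
    e[1::2] = ec                               # coarse points sit at odd indices
    e[::2] = 0.5 * (ec_pad[:-1] + ec_pad[1:])  # interpolate in between
    return e

def v_cycle(u, f, h):
    """One multigrid V-cycle for -u'' = f on (0,1), n = 2^k - 1 grid points."""
    n = u.size
    if n == 1:
        return np.array([f[0] * h**2 / 2.0])      # direct solve on coarsest grid
    u = smooth(u, f, h)                           # pre-smoothing
    rc = restrict(f - apply_A(u, h))              # restrict the residual
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)  # coarse-grid correction
    u = u + prolong(ec, n)
    return smooth(u, f, h)                        # post-smoothing
```

Each V-cycle contracts the residual by a grid-independent factor, which is the scale-separation property that motivates multilevel CNN architectures of this kind.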
Open Source Code | No | The paper mentions using the open-source packages FEniCS and PyTorch, but does not state that the code for its own methodology (ML-Net or UNet Seq) is open source, nor does it provide any link to it.
Open Datasets | No | The paper describes generating training data using FE simulations for various problem settings (e.g., uniform smooth field, log-normal smooth field, cookie problem), but does not provide concrete access information (link, DOI, repository) for any specific pre-existing or generated dataset.
Dataset Splits | Yes | If not stated otherwise, we use a training data set with 10⁴ samples and 1024 samples for independent validation computed from i.i.d. parameter vectors. For testing, the performance of the methods is evaluated on 1024 independently generated test samples.
Hardware Specification | Yes | Training took approximately 33 GPU-hours on an NVIDIA Tesla P100 for ML-Net and 46 GPU-hours for UNet Seq (see also Remark 10).
Software Dependencies | No | We use the open source package FEniCS (Logg and Wells, 2010) with the GMRES (Saad and Schultz, 1986) solver for carrying out the FE simulations to generate training data and PyTorch (Paszke et al., 2019) for the NN models. Specific version numbers for these software components are not provided.
Experiment Setup | Yes | ML-Net and UNet Seq are trained for 200 epochs with an initial learning rate of 10⁻³ during the first 60 epochs. The learning rate is then linearly decayed to 2×10⁻⁵ over the next 100 epochs, after which it is held constant for the rest of the training. Due to memory constraints, the batch sizes were chosen to be 20 for ML-Net and 16 for UNet Seq. The Adam optimizer (Kingma and Ba, 2015) was used in the standard configuration with parameters β₁ = 0.99, β₂ = 0.999, and without weight decay.
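The quoted learning-rate schedule (constant 10⁻³ for 60 epochs, linear decay to 2×10⁻⁵ over the next 100 epochs, then held constant through epoch 200) can be expressed as a small epoch-indexed function. This is a sketch under the stated assumptions; the function name and keyword defaults are illustrative, not from the paper.

```python
def lr_at_epoch(epoch, lr0=1e-3, lr_min=2e-5, hold=60, decay=100):
    """Piecewise schedule: constant lr0, linear decay to lr_min, then constant."""
    if epoch < hold:
        return lr0
    if epoch < hold + decay:
        t = (epoch - hold) / decay  # fraction of the decay phase completed
        return lr0 + t * (lr_min - lr0)
    return lr_min
```

In a PyTorch training loop this could be wired up via `torch.optim.lr_scheduler.LambdaLR` with a multiplicative factor `lr_at_epoch(e) / lr_at_epoch(0)`, though the paper does not say how the schedule was implemented.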