WaveBench: Benchmarking Data-driven Solvers for Linear Wave Propagation PDEs
Authors: Tianlin Liu, Jose Antonio Lara Benitez, Florian Faucher, AmirEhsan Khorashadizadeh, Maarten V. de Hoop, Ivan Dokmanić
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we present WaveBench, a comprehensive collection of benchmark datasets for wave propagation PDEs. WaveBench (1) contains 24 datasets that cover a wide range of forward and inverse problems for time-harmonic and time-varying wave phenomena in 2D; (2) includes a user-friendly PyTorch environment for comparing learning-based methods; and (3) comprises reference performance and model checkpoints of popular PDE surrogates such as U-Nets and Fourier neural operators. Our evaluation on WaveBench demonstrates the impressive performance of PDE surrogates on in-distribution samples, while simultaneously unveiling their limitations on out-of-distribution (OOD) samples. This OOD-generalization limitation is noteworthy, especially since we use stylized wavespeeds and provide abundant training data to PDE surrogates. |
| Researcher Affiliation | Collaboration | Tianlin Liu, University of Basel; Jose Antonio Lara Benitez, Rice University; Florian Faucher, Team Makutu, Inria, University of Pau and Pays de l'Adour, TotalEnergies, CNRS, France; AmirEhsan Khorashadizadeh, University of Basel; Maarten V. de Hoop, Rice University; Ivan Dokmanić, University of Basel |
| Pseudocode | No | The paper describes various methods and models like FNO and U-Nets but does not present them in pseudocode or algorithm blocks. The descriptions are narrative or refer to equations and figures. |
| Open Source Code | Yes | The datasets are in the beton format of FFCV (Leclerc et al., 2023), which is open-source software that provides high-throughput data loading for model training. Our datasets are accessible on Zenodo (an open platform for dataset sharing): https://zenodo.org/records/8015145, and the benchmark code is accessible through our GitHub repository: https://github.com/wavebench/wavebench. |
| Open Datasets | Yes | To address the gap, we present WaveBench, an extensive collection of benchmark datasets designed for wave propagation PDEs. WaveBench includes 24 datasets, encompassing two categories of forward and inverse problems of acoustic waves: time-harmonic problems and time-varying problems. [...] We have made these datasets publicly accessible for researchers. Moreover, we provide a PyTorch (Paszke et al., 2019) environment that enables easy training and comparison between various PDE surrogate models. [...] Our datasets are accessible on Zenodo (an open platform for dataset sharing): https://zenodo.org/records/8015145 |
| Dataset Splits | Yes | Each dataset contains 49,000 training samples, 500 validation samples, and 500 test samples. For an overview of these time-harmonic datasets, see Table 1. [...] Each dataset based upon the initial pressures of thick lines consists of 9,000 training samples, 500 validation samples, and 500 testing samples. Each dataset based upon the initial pressures of MNIST is for out-of-distribution testing purposes, comprising 500 testing samples. |
| Hardware Specification | Yes | The benchmarking procedure involved 10 initial dry runs followed by 100 test runs conducted on an 11 GB NVIDIA GeForce RTX 2080 Ti GPU. [...] All experiments were conducted on an 11 GB NVIDIA GeForce RTX 2080 Ti GPU. |
| Software Dependencies | Yes | Moreover, we provide a PyTorch (Paszke et al., 2019) environment that enables easy training and comparison between various PDE surrogate models. [...] The datasets are in the beton format of FFCV (Leclerc et al., 2023), which is open-source software that provides high-throughput data loading for model training. [...] To simulate wave propagation for both Reverse Time Continuation (RTC) and Inverse Scattering (IS) problems, we use the open-source j-wave package (Stanziola et al., 2023). |
| Experiment Setup | Yes | We trained and tested the baseline U-Net, FNO, and UNO models using the 20 datasets described in Section 3 and summarized in Table 1 and Table 3 in the appendix. For all datasets, we trained all models for 50 epochs using the AdamW optimizer (Loshchilov & Hutter, 2019). The learning rates were initially set to 1e-3 and then annealed to 1e-5 using cosine annealing (Loshchilov & Hutter, 2017). We employed the relative L2 loss for training and evaluation in all our problems, following the approach in Li et al. (2021); de Hoop et al. (2022b). |
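The training recipe quoted above (AdamW, learning rate annealed from 1e-3 to 1e-5 with cosine annealing, relative L2 loss) can be sketched in PyTorch. This is a minimal illustration, not the authors' benchmark code: the tiny `Conv2d` model and the random tensors are hypothetical placeholders standing in for the U-Net/FNO/UNO baselines and a WaveBench data batch.

```python
import torch

def relative_l2_loss(pred, target, eps=1e-8):
    # Batch-averaged relative L2 error: ||pred - target||_2 / ||target||_2,
    # computed per sample and then averaged over the batch.
    diff = (pred - target).flatten(start_dim=1).norm(dim=1)
    denom = target.flatten(start_dim=1).norm(dim=1)
    return (diff / (denom + eps)).mean()

# Placeholder model and data (the paper uses U-Net/FNO/UNO on WaveBench datasets).
model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
x = torch.randn(4, 1, 32, 32)  # stand-in input batch
y = torch.randn(4, 1, 32, 32)  # stand-in target batch

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
num_epochs = 50
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=num_epochs, eta_min=1e-5
)

for epoch in range(num_epochs):
    optimizer.zero_grad()
    loss = relative_l2_loss(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()  # anneals the learning rate from 1e-3 toward 1e-5
```

Stepping the scheduler once per epoch with `T_max=num_epochs` drives the learning rate to the `eta_min` of 1e-5 by the final epoch, matching the schedule described in the quoted setup.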