Neural Implicit Flow: a mesh-agnostic dimensionality reduction paradigm of spatio-temporal data

Authors: Shaowu Pan, Steven L. Brunton, J. Nathan Kutz

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the utility of NIF for parametric surrogate modeling, enabling the interpretable representation and compression of complex spatio-temporal dynamics, efficient many-spatial-query tasks, and improved generalization performance for sparse reconstruction. [...] 1. NIF generalizes 40% better in terms of root-mean-square error (RMSE) than a generic mesh-agnostic MLP [...] 2. NIF outperforms both (linear) SVD and (nonlinear) CAE in terms of nonlinear dimensionality reduction [...] 3. Compared with the original implicit neural representation [...] NIF enables efficient spatial sampling with 30% less CPU time and around 26% less memory consumption [...] 4. NIF outperforms the state-of-the-art method (POD-QDEIM [...]) with 34% smaller testing error
Researcher Affiliation | Academia | Shaowu Pan (EMAIL), Department of Applied Mathematics, University of Washington, Seattle, WA 98195-4322, USA; Steven L. Brunton (EMAIL), Department of Mechanical Engineering, University of Washington, Seattle, WA 98195-4322, USA; J. Nathan Kutz (EMAIL), Department of Applied Mathematics, University of Washington, Seattle, WA 98195-4322, USA
Pseudocode | No | The paper describes methods and formulations using mathematical equations and descriptive text, such as in Section 2 "Neural Implicit Flow" and Section 2.1 "Data-fit parametric surrogate modeling for PDEs". However, it does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks with structured steps.
Open Source Code | Yes | The code and data for the following applications is available at https://github.com/pswpswpsw/paper-nif. The Python package for NIF is available at https://github.com/pswpswpsw/nif.
Open Datasets | Yes | We use the forced isotropic turbulence dataset from JHU Turbulence dataset (Li et al., 2008) with Taylor-scale Reynolds number Reλ around 433. [...] We obtain the weekly averaged sea surface temperature data since 1990 to present from NOAA website. [...] https://downloads.psl.noaa.gov/Datasets/noaa.oisst.v2/sst.wkmean.1990-present.nc
Dataset Splits | Yes | The training data consists of 20 points in the parameter µ space (i.e., 20 simulations with distinct µ). The testing data consists of 59 simulations with a finer sampling of µ. [...] We sample the flowfield uniformly in time and split such single trajectories into 84 training and 28 testing snapshots in a way that the testing snapshots fall in between the training snapshots. [...] We take snapshots from 1990 to 2006 as training data and that of the next 15 years, until 2021, as testing data.
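The interleaved single-trajectory split quoted above (84 training and 28 testing snapshots, with testing snapshots falling in between training ones) can be sketched as below. The paper does not state the exact interleaving pattern, so the "hold out every fourth snapshot" scheme here is an assumption chosen to reproduce the 84/28 counts:

```python
import numpy as np

def interleaved_split(n_snapshots: int, test_every: int = 4):
    """Hold out every `test_every`-th snapshot so that test snapshots fall
    in between training snapshots (one plausible scheme; the exact pattern
    is not specified in the paper)."""
    idx = np.arange(n_snapshots)
    # Offset the held-out positions away from the trajectory endpoints so
    # every test snapshot sits strictly between two training snapshots.
    test_mask = (idx % test_every) == test_every // 2
    return idx[~test_mask], idx[test_mask]

# 112 snapshots -> 84 training + 28 testing, matching the quoted split.
train_idx, test_idx = interleaved_split(112)
```

With `n_snapshots=112` this yields exactly 84 training and 28 testing indices, and every held-out index is bracketed by retained ones.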
Hardware Specification | Yes | For all the results shown in this paper, we have used Nvidia Tesla P100 (16 GB), Nvidia GeForce RTX 2080 GPU (12 GB), and Nvidia A6000 GPU (48 GB).
Software Dependencies | No | The paper mentions several software tools and frameworks such as "Tensorflow (Abadi et al., 2016)", "Adam optimizer (Kingma and Ba, 2014)", "L4 optimizer (Rolinek and Martius, 2018)", and the "scikit-image package (Van der Walt et al., 2014)". However, it does not specify any version numbers for these software components, which is necessary for reproducible setup.
Experiment Setup | Yes | For NIF, we take 4 layers with units for ParameterNet as 2-30-30-2-6553 and 5 layers with units 1-56-56-56-1 with ResNet-like skip connection for ShapeNet. [...] The model parameters are initialized with a truncated normal with standard deviation of 0.1 [...] We adopt the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 1e-3, batch size of 1024 and 40000 epochs. [...] The learning rate is 2e-5 for NIF and 1e-3 for CAE with a batch size of 3150 for NIF and 4 for CAE. The total learning epoch is 10,000 for CAE and 800 for NIF.
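The quoted layer sizes are internally consistent and can be cross-checked: a 1-56-56-56-1 ShapeNet with ResNet-like skip connections carries exactly 6,553 weights and biases, which matches the 6553-unit output of the ParameterNet (2-30-30-2-6553) that generates them. The NumPy forward pass below is a minimal sketch of that hypernetwork decoding step under these assumptions; it is an illustrative reconstruction, not the authors' TensorFlow implementation:

```python
import numpy as np

def shapenet_forward(x, theta, width=56, depth=3):
    """Sketch of a NIF-style ShapeNet pass (1-56-56-56-1, ResNet-like skips).
    `theta` is the flat weight/bias vector emitted by the ParameterNet."""
    i = 0
    def take(shape):
        nonlocal i  # consume the next chunk of theta as a weight or bias
        n = int(np.prod(shape))
        w = theta[i:i + n].reshape(shape)
        i += n
        return w
    h = np.tanh(x @ take((1, width)) + take((width,)))       # 1 -> 56
    for _ in range(depth - 1):                               # 56 -> 56, with skip
        h = h + np.tanh(h @ take((width, width)) + take((width,)))
    return h @ take((width, 1)) + take((1,))                 # 56 -> 1

# Parameter count of the 1-56-56-56-1 ShapeNet: 6,553, matching the paper.
n_params = (1 * 56 + 56) + 2 * (56 * 56 + 56) + (56 * 1 + 1)
theta = np.zeros(n_params)  # placeholder; in NIF, the ParameterNet outputs theta
y = shapenet_forward(np.array([[0.5]]), theta)
```

Counting the parameters this way confirms why the ParameterNet's final layer must have exactly 6,553 units: it emits every ShapeNet weight and bias for a given (t, µ).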