Advection Augmented Convolutional Neural Networks

Authors: Niloufar Zakariaei, Siddharth Rout, Eldad Haber, Moshe Eliasof

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the effectiveness of our network on a number of spatio-temporal datasets that show their merit."
Researcher Affiliation | Academia | Niloufar Zakariaei (University of British Columbia, Vancouver, Canada); Siddharth Rout (University of British Columbia, Vancouver, Canada); Eldad Haber (University of British Columbia, Vancouver, Canada); Moshe Eliasof (University of Cambridge, Cambridge, United Kingdom)
Pseudocode | Yes | "Algorithm 1 The ADR network" (a hedged sketch of one such layer follows this table)
Open Source Code | Yes | "Our code is available at https://github.com/Siddharth-Rout/deepADRnet."
Open Datasets | Yes | "We use two such datasets, CloudCast [70], and the Shallow Water Equation in PDEBench [51]." Also used: Moving MNIST, a synthetic video dataset designed to test sequence prediction models; KITTI, a widely recognized dataset extensively used in mobile robotics and autonomous driving that also serves as a computer vision benchmark; TaxiBJ [70]; and KTH [45].
Dataset Splits | No | Table 1 ("Datasets statistics. Training and testing splits, image sequences, and resolutions.") and Table 7 list Ntrain and Ntest for each dataset, but neither provides an explicit validation split, its size, or how it was derived.
Hardware Specification | Yes | "We run our codes using a single NVIDIA RTX-A6000 GPU with 48GB of memory."
Software Dependencies | No | "The advection term is implemented by using the grid_sample command in PyTorch [41]." While PyTorch is mentioned, no version number is provided for it or any other software dependency, which limits reproducibility. (A hedged sketch of this advection step follows the table.)
Experiment Setup | Yes | Table 16 ("Neural Network Hyperparameters for ADRNet Training on PDEBench-SWE") details the learning rate, batch size, number of epochs, optimizer, number of layers, hidden channels, and activation function; Table 17 provides the same details for the other datasets. (A placeholder config mirroring these categories follows the table.)
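
For the Software Dependencies row: the paper states that advection is implemented with PyTorch's grid_sample. Below is a minimal sketch of what such a semi-Lagrangian advection step might look like. Only the use of torch.nn.functional.grid_sample is taken from the paper; the advect name, the pixel-displacement convention, and the border padding mode are assumptions.

```python
import torch
import torch.nn.functional as F

def advect(u, velocity):
    """Semi-Lagrangian advection of feature maps via grid_sample (sketch).

    u:        (N, C, H, W) feature maps to advect.
    velocity: (N, 2, H, W) displacement field in pixels, (vx, vy).
    """
    n, _, h, w = u.shape
    # Base sampling grid with (x, y) coordinates normalized to [-1, 1],
    # the convention grid_sample expects.
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=u.device),
        torch.linspace(-1.0, 1.0, w, device=u.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1)  # (H, W, 2), broadcast over batch
    # Convert pixel displacements to normalized coordinates
    # (with align_corners=True, a pixel step is 2 / (size - 1)).
    disp = torch.stack(
        (velocity[:, 0] * 2.0 / max(w - 1, 1),
         velocity[:, 1] * 2.0 / max(h - 1, 1)),
        dim=-1,
    )  # (N, H, W, 2)
    # Backward warp: each output pixel reads the upstream value at p - v.
    return F.grid_sample(u, base - disp, mode="bilinear",
                         padding_mode="border", align_corners=True)
```

For the Pseudocode row ("Algorithm 1 The ADR network"): the sketch below shows one plausible way to compose an advection-diffusion-reaction layer, reusing the advect helper above. The split into three steps follows the paper's ADR framing; the ADRLayer name, the residual form, and the specific convolutions are assumptions, not the authors' Algorithm 1.

```python
import torch
import torch.nn as nn

class ADRLayer(nn.Module):
    """Hypothetical advection-diffusion-reaction layer (sketch).

    Operator splitting: advect features along a learned velocity field,
    diffuse them with a spatial convolution, then apply a pointwise
    reaction. Assumes the advect() helper from the previous sketch.
    """
    def __init__(self, channels):
        super().__init__()
        # Learned 2-component velocity field predicted from the features.
        self.vel = nn.Conv2d(channels, 2, kernel_size=3, padding=1)
        # Diffusion as a depthwise spatial convolution.
        self.diffuse = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=1, groups=channels)
        # Reaction as a pointwise (1x1) convolution plus nonlinearity.
        self.react = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.SiLU()

    def forward(self, u):
        u = advect(u, self.vel(u))           # advection step
        u = u + self.diffuse(u)              # diffusion step (residual form)
        return u + self.act(self.react(u))   # reaction step (residual form)
```

Applied to a feature tensor of shape (N, C, H, W), the layer preserves the shape, so such layers can be stacked like ordinary convolutional blocks.

Finally, for the Experiment Setup row: the categories in Table 16 map directly onto a training configuration. Every value below is an illustrative placeholder, not a reported setting; the actual values are in Tables 16-17 of the paper.

```python
# Placeholder training configuration mirroring the categories of Table 16.
# All values are illustrative stand-ins, not the paper's reported settings.
config = {
    "learning_rate": 1e-3,    # placeholder
    "batch_size": 16,         # placeholder
    "num_epochs": 100,        # placeholder
    "optimizer": "Adam",      # placeholder
    "num_layers": 4,          # placeholder
    "hidden_channels": 64,    # placeholder
    "activation": "SiLU",     # placeholder
}
```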
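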
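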