Fourier Neural Operators for Arbitrary Resolution Climate Data Downscaling

Authors: Qidong Yang, Alex Hernandez-Garcia, Paula Harder, Venkatesh Ramesh, Prasanna Sattigeri, Daniela Szwarcman, Campbell D. Watson, David Rolnick

JMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this work, we propose a downscaling method based on the Fourier neural operator. It is trained using a low upsampling factor and can then zero-shot (without additional training) downscale its input to arbitrary unseen high resolutions. Evaluated both on ERA5 climate model data and on Navier-Stokes equation solution data, our downscaling model significantly outperforms state-of-the-art convolutional and generative adversarial downscaling models, both in standard single-resolution downscaling and in zero-shot generalization to higher upsampling factors. Furthermore, we show that our method also outperforms state-of-the-art data-driven partial differential equation solvers on the Navier-Stokes equations. Overall, our work bridges the gap between simulation of a ... We evaluate our FNO downscaling model in three experiments: PDE integration, PDE solution downscaling, and observational climate quantity downscaling. The PDE involved in the first two experiments is the Navier-Stokes equations... The observational climate quantity used in this work is the total column water content, which we derived from the climate reanalysis database ERA5 (Hersbach et al., 2020).
Researcher Affiliation | Collaboration | Qidong Yang (Mila Quebec AI Institute, Montreal, Canada; New York University, New York, USA); Alex Hernandez-Garcia (Mila Quebec AI Institute, Montreal, Canada; University of Montreal, Montreal, Canada); Paula Harder (Fraunhofer ITWM, Kaiserslautern, Germany; Mila Quebec AI Institute, Montreal, Canada); Venkatesh Ramesh (Mila Quebec AI Institute, Montreal, Canada; University of Montreal, Montreal, Canada); Prasanna Sattigeri (IBM Research, New York, USA); Daniela Szwarcman (IBM Research, Brazil); Campbell D. Watson (IBM Research, New York, USA); David Rolnick (Mila Quebec AI Institute, Montreal, Canada; McGill University, Montreal, Canada)
Pseudocode | No | The paper describes the methodology using mathematical formulations and architectural diagrams (Figure 1) but does not contain a clearly labeled pseudocode block or algorithm.
Open Source Code | No | The paper does not contain any explicit statement about releasing code for the methodology described, nor does it provide a link to a code repository. The text mentions using architectures inspired by other works (e.g., SRGAN, FNO) but not the authors' implementation for this specific work.
Open Datasets | Yes | Evaluated both on ERA5 climate model data and on Navier-Stokes equation solution data, our downscaling model significantly outperforms state-of-the-art convolutional and generative adversarial downscaling models, both in standard single-resolution downscaling and in zero-shot generalization to higher upsampling factors... The observational climate quantity used in this work is the total column water content, which we derived from the climate reanalysis database ERA5 (Hersbach et al., 2020). Climate downscaling models are generally applied to PDE-based climate simulation as a post-processing tool to cheaply generate high-resolution simulation from a fast-running low-resolution numerical climate simulation model. Our FNO downscaling model fits this application well since smooth simulation data have a succinct representation in the Fourier basis, making them easier to model by FNO with a truncated Fourier series. Evaluation on ERA5 water content data intends to examine to what extent our model can capture less smooth and noisy observational data. ... we used a dataset solving the 2D Navier-Stokes equation for a viscous and incompressible fluid in vorticity form (Li et al., 2021, Section 5.3).
Dataset Splits | Yes | Out of 10,000 solutions, 7,000, 2,000, and 1,000 solutions were sampled as train, validation, and test sets, respectively. ... From these, 40,000 patches are randomly sampled for training and 10,000 each for validation and testing.
Hardware Specification | No | This work was supported in part by the Québec Ministère de l'Économie et de l'Innovation, IBM, and the Canada CIFAR AI Chairs Program. The authors also acknowledge material support from NVIDIA in the form of computational resources, and are grateful for technical support from the Mila IDT team in maintaining the Mila Compute Cluster.
Software Dependencies | No | The paper mentions using Matlab for numerical solutions and various neural network architectures (CNN, GAN, Swin Transformer, FNO). It does not provide specific version numbers for any software libraries, frameworks, or Matlab itself.
Experiment Setup | No | The paper mentions training the DSFNO model on 2x downscaling data and evaluating it at higher upsampling factors. It also discusses the application of a softmax constraint layer. However, it does not explicitly detail specific hyperparameters such as the learning rate, batch size, number of epochs, or optimizer settings for its models.
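The zero-shot capability noted under Research Type follows from how Fourier neural operators parameterize convolution: learned weights act on a fixed number of low Fourier modes, so the same layer applies unchanged to inputs on any grid size. A minimal 1D numpy sketch of that mechanism (function name, shapes, and per-mode weights are our illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def spectral_conv1d(x, weights, n_modes):
    """Apply a learned linear map to the lowest Fourier modes of x.

    x: real signal of length n (n may differ between calls).
    weights: complex array of shape (n_modes,), standing in for
        FNO's learned per-mode weight tensors (hypothetical).
    """
    x_hat = np.fft.rfft(x)
    out_hat = np.zeros_like(x_hat)
    out_hat[:n_modes] = x_hat[:n_modes] * weights  # truncate + multiply
    return np.fft.irfft(out_hat, n=len(x))        # back to grid space

rng = np.random.default_rng(0)
w = rng.standard_normal(8) + 1j * rng.standard_normal(8)

coarse = rng.standard_normal(64)   # low-resolution input
fine = rng.standard_normal(256)    # higher-resolution input, same weights

print(spectral_conv1d(coarse, w, 8).shape)  # (64,)
print(spectral_conv1d(fine, w, 8).shape)    # (256,)
```

Because only the retained modes carry parameters, the layer is resolution-agnostic, which is what lets a model trained at a 2x upsampling factor be queried at higher factors without retraining.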
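The softmax constraint layer mentioned under Experiment Setup is, in constrained-downscaling work generally, a layer that redistributes each low-resolution value over its high-resolution sub-cells with softmax weights so the block mean is conserved. A hedged numpy sketch of that general idea (all names are ours; the paper does not give its exact formulation):

```python
import numpy as np

def softmax_constraint(raw, y_low, factor):
    """Rescale raw high-res outputs so each factor-by-factor block
    averages exactly to the corresponding low-res value.

    raw: (factor, factor) unnormalized scores from the network.
    y_low: scalar low-res value the block must conserve.
    """
    w = np.exp(raw - raw.max())         # numerically stable softmax
    w = w / w.sum()                     # weights sum to 1 over the block
    return w * y_low * factor * factor  # block mean equals y_low

rng = np.random.default_rng(1)
block = softmax_constraint(rng.standard_normal((2, 2)), y_low=3.5, factor=2)
print(np.isclose(block.mean(), 3.5))  # True
```

Conserving the block mean by construction is what makes the constraint "hard": the downscaled field is consistent with its low-resolution input regardless of the network's raw outputs.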