Dynamic Schwartz-Fourier Neural Operator for Enhanced Expressive Power
Authors: Wenhan Gao, Jian Luo, Ruichen Xu, Yi Liu
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experiments, we demonstrate that DSFNOs can improve FNOs on a range of tasks, highlighting the effectiveness of our proposed approach. The code is available at https://github.com/wenhangao21/TMLR25_DSFNO. |
| Researcher Affiliation | Academia | Wenhan Gao EMAIL Stony Brook University Jian Luo EMAIL Stony Brook University Ruichen Xu EMAIL Stony Brook University Yi Liu EMAIL Stony Brook University |
| Pseudocode | No | The paper describes mathematical formulations and conceptual diagrams but does not contain explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/wenhangao21/TMLR25_DSFNO. |
| Open Datasets | Yes | We follow the exact same setup as in Li et al. (2021) and directly present their autoregressive baseline results for reference... Further details about these experiments and the dataset are provided in Appendix F. We use the dataset provided by Takamoto et al. (2023), details of which can be found in the Appendix F.1.3. We use the data generator provided by Li et al. (2021). |
| Dataset Splits | Yes | Following the setups in Li et al. (2021), the training dataset contains 1000 and 10000 input-output function pairs for ν = 1e-3 and ν = 1e-4 respectively, whereas the testing dataset contains 200 input-output function pairs. |
| Hardware Specification | Yes | The training time per epoch is reported on NVIDIA Tesla V100 Volta GPU Accelerator 32GB Graphics Card. |
| Software Dependencies | No | The paper mentions using the PyClaw (Ketcheson et al., 2012) Python package for data generation, but it does not specify a version number for this or any other key software component used for the main experiments. |
| Experiment Setup | Yes | Table 7: Architectural hyper-parameters for FNO and DSFNO (identical for a fair comparison). Navier-Stokes: width 20, modes 12, 4 Fourier layers; Darcy Flow: width 32, modes 12, 4 Fourier layers; Shallow Water: width 20, modes 16, 4 Fourier layers. Table 8: Training hyper-parameters for all models. Learning Rate 0.001 (0.0001 for GNO), Weight Decay 1e-4, Scheduler Cosine Annealing LR, Epochs 500 (2000 for DeepONet), Batch Size 20. |
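The reported hyper-parameters can be collected into a configuration sketch for anyone attempting a reproduction. The dictionary layout and key names below are illustrative assumptions of mine, not taken from the authors' repository; the values are transcribed from Tables 7 and 8 as quoted above.

```python
# Illustrative reproduction config. Structure and key names are assumptions;
# values come from Tables 7 and 8 of the paper.
ARCH_CONFIG = {
    # task: width, Fourier modes, number of Fourier layers
    "navier_stokes": {"width": 20, "modes": 12, "fourier_layers": 4},
    "darcy_flow":    {"width": 32, "modes": 12, "fourier_layers": 4},
    "shallow_water": {"width": 20, "modes": 16, "fourier_layers": 4},
}

TRAIN_CONFIG = {
    "learning_rate": 1e-3,            # paper uses 1e-4 for GNO
    "weight_decay": 1e-4,
    "scheduler": "CosineAnnealingLR",
    "epochs": 500,                    # paper uses 2000 for DeepONet
    "batch_size": 20,
}

def get_config(task: str) -> dict:
    """Merge the per-task architecture settings with the shared training settings."""
    return {**ARCH_CONFIG[task], **TRAIN_CONFIG}
```

With this shape, `get_config("darcy_flow")` yields a single flat dictionary that could be passed to a training script; exact model construction would still need the authors' code at https://github.com/wenhangao21/TMLR25_DSFNO.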