Deep Nonparametric Estimation of Operators between Infinite Dimensional Spaces

Authors: Hao Liu, Haizhao Yang, Minshuo Chen, Tuo Zhao, Wenjing Liao

JMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper studies the nonparametric estimation of Lipschitz operators using deep neural networks. Non-asymptotic upper bounds are derived for the generalization error of the empirical risk minimizer over a properly chosen network class... "Our contributions are summarized as follows: 1. We derive an upper bound on the generalization error... The proofs of all results are given in Section 7. We conclude this paper in Section 8."
Researcher Affiliation | Academia | 1. Department of Mathematics, Hong Kong Baptist University, Hong Kong; 2. Department of Mathematics and Department of Computer Science, University of Maryland, College Park, USA; 3. Department of Electrical and Computer Engineering, Princeton University, USA; 4. School of Industrial and Systems Engineering, Georgia Institute of Technology, USA; 5. School of Mathematics, Georgia Institute of Technology, USA
Pseudocode | No | The paper describes methods and theoretical derivations in prose and mathematical notation but does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. It only mentions the license for the paper itself: "License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v25/22-0719.html."
Open Datasets | No | The paper is theoretical and analyzes statistical properties and error bounds for learning operators. It discusses "training sample size" and "samples" in a theoretical context but does not use or refer to any specific named publicly available datasets for experimental evaluation.
Dataset Splits | No | The paper describes a theoretical data splitting strategy for its analysis: "Given the training data S = {(u_i, v_i)}_{i=1}^{2n}, we split the data into two subsets S1 = {(u_i, v_i)}_{i=1}^{n} and S2 = {(u_i, v_i)}_{i=n+1}^{2n}, where S1 is used to compute the encoders and decoders and S2 is used to learn the transformation Γ between the encoded vectors." However, this is part of the theoretical framework and not a specification for reproducing experiments with real datasets.
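As a minimal illustration of the split quoted above (not code from the paper, which releases none), the half-and-half partition of the 2n training pairs might be sketched as follows; the function name and list-of-pairs representation are assumptions for this sketch:

```python
def split_training_data(pairs):
    """Split 2n sample pairs (u_i, v_i) into the two halves described in the
    paper's theoretical framework: S1 (the first n pairs) is used to compute
    the encoders and decoders, and S2 (the last n pairs) is used to learn the
    transformation Gamma between the encoded vectors.

    Hypothetical helper for illustration only.
    """
    if len(pairs) % 2 != 0:
        raise ValueError("expected an even number of samples (2n)")
    n = len(pairs) // 2
    s1 = pairs[:n]   # S1 = {(u_i, v_i)}_{i=1}^{n}
    s2 = pairs[n:]   # S2 = {(u_i, v_i)}_{i=n+1}^{2n}
    return s1, s2
```

The even split into disjoint halves matters for the analysis: because S1 and S2 are independent samples, the error of the encoders/decoders fitted on S1 can be bounded separately from the error of the transformation learned on S2.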
Hardware Specification | No | This is a theoretical paper. No experiments are conducted that would require specific hardware, so no hardware specifications are mentioned.
Software Dependencies | No | This is a theoretical paper. No experiments are conducted that would require specific software, so no software dependencies with version numbers are mentioned.
Experiment Setup | No | This is a theoretical paper. No experiments are conducted, and therefore no experimental setup details, hyperparameters, or training configurations are provided.