Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1]. (A toy sketch of the label-validation comparison appears after the table below.)

Limitations on approximation by deep and shallow neural networks

Authors: Guergana Petrova, Przemysław Wojtaszczyk

JMLR 2023 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We prove Carl's type inequalities for the error of approximation of compact sets K by deep and shallow neural networks. This in turn gives estimates from below on how well we can approximate the functions in K when requiring the approximants to come from outputs of such networks. Our results are obtained as a byproduct of the study of the recently introduced Lipschitz widths. ... In our analysis of neural network approximation (NNA), we are not concerned with the numerical aspect of the construction of the corresponding DNN or SNN and its stability, but rather with the theoretical bounds from below of the performance of such an approximation. (For context, the classical form of Carl's inequality is sketched after this table.)
Researcher Affiliation | Academia | Guergana Petrova EMAIL, Department of Mathematics, Texas A&M University, College Station, TX 77843, USA. Przemysław Wojtaszczyk EMAIL, Institute of Mathematics, Polish Academy of Sciences, ul. Śniadeckich 8, 00-656 Warszawa, Poland.
Pseudocode | No | The paper describes mathematical proofs and theoretical bounds. There are no sections or figures explicitly labeled 'Pseudocode' or 'Algorithm', nor are there structured, step-by-step procedures formatted like code.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide links to code repositories or mention code in supplementary materials.
Open Datasets | No | The paper is theoretical and focuses on mathematical properties of compact sets K and classes of functions. It does not describe or use any experimental datasets, nor does it provide information about their availability.
Dataset Splits | No | As this is a theoretical paper focusing on mathematical bounds and properties, it does not involve experimental datasets or their splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not report any experimental results that would require specific hardware for execution or training.
Software Dependencies | No | The paper is theoretical and does not describe any software implementations or their dependencies with version numbers.
Experiment Setup | No | This is a theoretical paper providing mathematical proofs and analysis; therefore, it does not describe an experimental setup, hyperparameters, or training configurations.
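For context on the "Carl's type inequalities" mentioned in the Research Type row, below is a minimal LaTeX sketch of the classical Carl inequality for operators. This is the textbook form, not the paper's exact statement; the entropy numbers e_k(T), approximation numbers a_k(T), and constant c_alpha are the standard objects from the classical literature, not notation taken from the paper.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Classical Carl inequality (context only; not the paper's exact statement).
% For a compact operator $T$ between Banach spaces and any $\alpha > 0$,
% the entropy numbers $e_k(T)$ are controlled by the approximation numbers
% $a_k(T)$ up to a constant $c_\alpha$ depending only on $\alpha$:
\[
  \sup_{1 \le k \le n} k^{\alpha}\, e_k(T)
  \;\le\; c_{\alpha}\, \sup_{1 \le k \le n} k^{\alpha}\, a_k(T),
  \qquad n = 1, 2, \dots
\]
% The paper proves inequalities of this type with the approximants drawn
% from outputs of deep or shallow neural networks, which yields lower
% bounds on how well the functions in a compact set $K$ can be approximated.
\end{document}
```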
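The notice at the top describes validating LLM-assigned labels against a manually labeled dataset; the actual metrics and methodology are in [1]. As a rough illustration only, here is a minimal Python sketch of such a comparison using per-variable exact-match accuracy; the record format and function name are hypothetical and do not reflect the pipeline's actual interface.

```python
# Hypothetical sketch of the validation step described in the notice:
# comparing LLM-assigned labels against manual labels, per variable.
from collections import defaultdict

def per_variable_accuracy(records):
    """records: iterable of (variable, llm_label, manual_label) triples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for variable, llm_label, manual_label in records:
        totals[variable] += 1
        if llm_label == manual_label:
            hits[variable] += 1
    # Fraction of exact matches for each reproducibility variable.
    return {v: hits[v] / totals[v] for v in totals}

# Toy example (illustrative data, not real validation results):
records = [
    ("Open Source Code", "No", "No"),
    ("Open Datasets", "No", "No"),
    ("Research Type", "Theoretical", "Theoretical"),
    ("Pseudocode", "Yes", "No"),  # an LLM misclassification
]
print(per_variable_accuracy(records))
```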