Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
How Well Generative Adversarial Networks Learn Distributions
Authors: Tengyuan Liang
JMLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This paper studies the rates of convergence for learning distributions implicitly with the adversarial framework and Generative Adversarial Networks (GANs)... On the nonparametric end, we derive the optimal minimax rates for distribution estimation under the adversarial framework. On the parametric end, we establish a theory for general neural network classes (including deep leaky ReLU networks) that characterizes the interplay on the choice of generator and discriminator pair. We develop novel oracle inequalities as the main technical tools for analyzing GANs, which are of independent interest. |
| Researcher Affiliation | Academia | Tengyuan Liang, Econometrics and Statistics, University of Chicago, Booth School of Business, Chicago, IL 60637, USA |
| Pseudocode | No | The paper describes methods and procedures using mathematical notation and conceptual explanations, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps formatted like code. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide links to code repositories or supplementary materials containing code for the described methodology. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments on specific datasets. While it discusses learning distributions and mentions "finite samples of the target distribution ν" in a general theoretical context, it does not refer to or provide access information for any specific publicly available datasets used for empirical validation. |
| Dataset Splits | No | The paper is theoretical and does not describe experimental evaluations involving specific datasets; therefore, it does not provide information on training/test/validation dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any experimental evaluations; therefore, no hardware specifications for running experiments are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe any experimental evaluations; therefore, no specific software dependencies with version numbers are mentioned. |
| Experiment Setup | No | The paper is theoretical and does not describe any experimental evaluations. It does not provide specific details on experimental setup, hyperparameters, or system-level training settings. |