Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Functional Generalized Empirical Likelihood Estimation for Conditional Moment Restrictions
Authors: Heiner Kremer, Jia-Jie Zhu, Krikamol Muandet, Bernhard Schölkopf
ICML 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we provide kernel- and neural network-based implementations of the estimator, which achieve state-of-the-art empirical performance on two conditional moment restriction problems. |
| Researcher Affiliation | Academia | 1Max Planck Institute for Intelligent Systems, Tübingen, Germany 2Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany. |
| Pseudocode | Yes | Algorithm 1 Kernel-FGEL Algorithm 2 Neural-FGEL |
| Open Source Code | Yes | Code for reproducing our experimental results is available at https://github.com/HeinerKremer/Functional-GEL. |
| Open Datasets | No | The paper describes generating synthetic data for its experiments ('simple data generating process', 'modified version of the IV regression experiment of Lewis and Syrgkanis (2018)') but does not provide specific access information (e.g., URL, DOI, or a citation to a publicly available dataset with author/year) for these datasets or the generated data. |
| Dataset Splits | Yes | We use training and validation sets of size n = 2000 and evaluate the prediction error on a test set of 20000 samples. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments, such as specific GPU or CPU models, or details about cloud computing resources. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments (e.g., Python 3.x, PyTorch 1.x, or other libraries). |
| Experiment Setup | Yes | We approximate f0 by a shallow neural network fθ(x) with 2 layers of [20, 3] units and leaky ReLU activation functions... We use training and validation sets of size n = 2000 and evaluate the prediction error on a test set of 20000 samples. |
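The experiment-setup excerpt above specifies the network shape (two hidden layers of [20, 3] units with leaky ReLU). A minimal sketch of such a network is shown below; the scalar output, weight initialization, and leaky-ReLU slope are assumptions for illustration, not details taken from the paper or its code.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU; the slope alpha=0.01 is an assumed default.
    return np.where(x > 0, x, alpha * x)

def init_mlp(input_dim, hidden_sizes=(20, 3), seed=0):
    # Two hidden layers of [20, 3] units as quoted from the paper.
    # He-style initialization and a scalar output are assumptions.
    rng = np.random.default_rng(seed)
    sizes = [input_dim, *hidden_sizes, 1]
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    # Apply leaky ReLU after each hidden layer; final layer is linear.
    h = x
    for W, b in params[:-1]:
        h = leaky_relu(h @ W + b)
    W, b = params[-1]
    return h @ W + b
```

For n = 2000 training samples of a one-dimensional input, `forward(init_mlp(1), X)` maps an `(n, 1)` array to an `(n, 1)` array of predictions.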