Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
An Exponential Tail Bound for the Deleted Estimate
Authors: Karim Abou-Moustafa, Csaba Szepesvári (pp. 3143–3150)
AAAI 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Using recent advances in concentration inequalities, and using a notion of stability that is weaker than uniform stability but distribution dependent and amenable to computation, we derive an exponential tail bound for the concentration of the estimated risk of a hypothesis returned by a general learning rule, where the estimated risk is expressed in terms of the deleted estimate. |
| Researcher Affiliation | Collaboration | Dept. of Computing Science, University of Alberta, Edmonton, Alberta T6G 2E8, Canada (EMAIL, EMAIL). Currently with SAS Inst. Inc., Cary, North Carolina, USA. Currently with Google DeepMind, London, UK. |
| Pseudocode | No | The paper is theoretical and mathematical, and it does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not mention providing any open-source code for the described methodology. |
| Open Datasets | No | The paper is theoretical and does not use or mention any specific datasets for empirical evaluation. |
| Dataset Splits | No | The paper is theoretical and does not involve training, validation, or test data splits for experiments. |
| Hardware Specification | No | The paper is theoretical and does not mention any specific hardware used for running experiments. |
| Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or training settings. |
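For context, the "deleted estimate" named in the Research Type summary above is the classical leave-one-out risk estimate. A sketch in standard notation (symbols are ours, not copied from the paper):

```latex
% Deleted (leave-one-out) estimate of the risk of a learning rule A
% on a sample S_n = (z_1, ..., z_n), where S_n^{\setminus i} denotes
% the sample with the i-th point removed and \ell is the loss.
\[
  \widehat{R}_{\mathrm{del}}(A, S_n)
    \;=\; \frac{1}{n} \sum_{i=1}^{n}
      \ell\bigl( A(S_n^{\setminus i}),\, z_i \bigr)
\]
```

The paper's contribution, per the summary, is an exponential tail bound on the deviation of this estimate from the true risk under a distribution-dependent stability notion weaker than uniform stability.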