Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Recursive Quantile Estimation: Non-Asymptotic Confidence Bounds

Authors: Likai Chen, Georg Keilbar, Wei Biao Wu

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Figure 2 visualizes the simulation results for the tail probabilities for the averaged algorithm. The simulation confirms the statements of exponential decay bounds, and the asymptotic algebraic bounds (7) and (13) can be too crude. We analyze the performance of the algorithms in a short simulation study. We set p = 16 and τ = 0.5; the data is drawn i.i.d. from a t-distribution with 10 degrees of freedom. For the SGD estimation, we set β = 0.7 and cγ = 1, and the algorithm is initialized with a random draw from a uniform distribution on [-1, 1]. We consider three settings. ... Table 1 shows the performance of the three algorithms in terms of the regret for different choices for the budget n, based on 1000 Monte-Carlo iterations.
Researcher Affiliation | Academia | Likai Chen (EMAIL), Department of Mathematics and Statistics, Washington University in St. Louis, St. Louis, MO, USA; Georg Keilbar (EMAIL), Department of Statistics and Operations Research, University of Vienna, Vienna, Austria; Wei Biao Wu (EMAIL), Department of Statistics, University of Chicago, Chicago, IL, USA
Pseudocode | Yes | Algorithm 1: Uniform Exploration Algorithm for Quantiles ... Algorithm 2: Successive Reject Algorithm for Quantiles ... Algorithm 3: Sequential Halving Algorithm for Quantiles
Open Source Code | No | The paper does not provide explicit statements about releasing source code or links to a code repository. The license mentioned applies to the paper itself, not the code.
Open Datasets | No | The data is drawn i.i.d. from a t-distribution with 10 degrees of freedom, τ = 0.5, while the SGD algorithm is initialized with a random draw from a uniform distribution on [-1, 1] and β = 0.7.
Dataset Splits | No | The paper describes a generative process for the data (i.i.d. simulated draws), so no train/validation/test splits are defined.
Hardware Specification | No | The paper mentions running simulations and Monte-Carlo iterations but does not specify any hardware (CPU, GPU, etc.) used for these computations.
Software Dependencies | No | The paper describes algorithms and simulations but does not specify any particular software, libraries, or their version numbers used in the implementation.
Experiment Setup | Yes | We set p = 16 and τ = 0.5; the data is drawn i.i.d. from a t-distribution with 10 degrees of freedom. For the SGD estimation, we set β = 0.7 and cγ = 1, and the algorithm is initialized with a random draw from a uniform distribution on [-1, 1].
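The experiment setup above can be sketched in code. The following is a minimal, hypothetical reconstruction of the averaged SGD quantile recursion under the stated settings (τ = 0.5, β = 0.7, cγ = 1, t-distributed data with 10 degrees of freedom, initialization uniform on [-1, 1]); the function names and the stdlib-only t sampler are our own assumptions, not the authors' implementation.

```python
import math
import random

def t_sample(df, rng):
    """Draw from a Student-t distribution as Z / sqrt(V/df), stdlib only."""
    z = rng.gauss(0.0, 1.0)
    v = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))  # chi-squared(df)
    return z / math.sqrt(v / df)

def averaged_sgd_quantile(n, tau=0.5, beta=0.7, c_gamma=1.0, df=10, seed=0):
    """Robbins-Monro quantile recursion with Polyak-Ruppert averaging:
    theta_i = theta_{i-1} - gamma_i * (1{X_i <= theta_{i-1}} - tau),
    where gamma_i = c_gamma * i**(-beta)."""
    rng = random.Random(seed)
    theta = rng.uniform(-1.0, 1.0)  # random initialization on [-1, 1]
    avg = 0.0
    for i in range(1, n + 1):
        x = t_sample(df, rng)
        gamma = c_gamma * i ** (-beta)
        theta -= gamma * ((x <= theta) - tau)  # stochastic subgradient of the check loss
        avg += (theta - avg) / i               # running average of the iterates
    return avg
```

For τ = 0.5 the true quantile of the t(10) distribution is 0 by symmetry, so the averaged estimate should concentrate near 0 as the sample size n grows.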
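The pseudocode row lists a Sequential Halving Algorithm for Quantiles; here is a generic sequential-halving sketch in that spirit, selecting the arm with the largest empirical τ-quantile from a fixed sampling budget. This is a hypothetical adaptation, not the paper's exact Algorithm 3: the arm interface (zero-argument samplers), the per-round budget split, and the quantile index convention are our assumptions.

```python
import math
import random

def sequential_halving_quantile(arms, budget, tau=0.5):
    """Eliminate the worse half of the arms each round, ranking arms by
    their empirical tau-quantile; return the index of the surviving arm."""
    survivors = list(range(len(arms)))
    rounds = max(1, math.ceil(math.log2(len(arms))))
    for _ in range(rounds):
        if len(survivors) == 1:
            break
        m = max(1, budget // (len(survivors) * rounds))  # samples per arm this round
        def empirical_quantile(i):
            xs = sorted(arms[i]() for _ in range(m))
            return xs[min(len(xs) - 1, int(tau * len(xs)))]
        survivors.sort(key=empirical_quantile, reverse=True)
        survivors = survivors[: max(1, len(survivors) // 2)]
    return survivors[0]
```

Usage: with four Gaussian arms of means (0, 0.5, 2, 0.2) and unit variance, the procedure should return the arm with the largest median (index 2) for any reasonable budget.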