Adaptive Learn-then-Test: Statistically Valid and Efficient Hyperparameter Selection
Authors: Matteo Zecchin, Sangwoo Park, Osvaldo Simeone
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We study two practical scenarios requiring hyperparameter selection, namely online policy selection for offline reinforcement learning (Fujimoto & Gu, 2021) and automated prompt engineering (Zhou et al., 2023). In both cases, aLTT is shown to deliver reliable and effective hyperparameters using only a small fraction of the testing rounds required... In Figure 2, we compare the TPR of LTT and aLTT as a function of the calibration round t. We target FWER control on the left and FDR control on the right. |
| Researcher Affiliation | Academia | 1Centre for Intelligent Information Processing Systems, Department of Engineering, King's College London, London, United Kingdom. Correspondence to: Sangwoo Park, Matteo Zecchin <EMAIL, EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Adaptive Learn-Then-Test (aLTT) |
| Open Source Code | No | The paper does not provide concrete access to the source code for the methodology. It only mentions: "A list of the tested prompts together with their accuracy levels can be found at: https://github.com/kclip/aLTT." This link appears to be for data or prompts, not explicitly the implementation code for the aLTT method itself. |
| Open Datasets | Yes | In our experiments, we consider the HalfCheetah control problem from the OpenAI Gym MuJoCo tasks (Todorov et al., 2012)... Focusing on tasks from the instruction induction data set (Honovich et al., 2022) |
| Dataset Splits | No | The paper mentions using "held-out data" or "real-world testing" and evaluating on "test datum," but does not provide specific details on how the datasets were split into training, validation, or test sets (e.g., percentages, sample counts, or references to predefined splits). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions several tools and models used in the applications, such as "OpenAI Gym MuJoCo tasks" and "Llama3-8B-Instruct LLM," but it does not specify software dependencies (e.g., programming languages, libraries, or frameworks) with version numbers that would be required to reproduce the methodology itself. |
| Experiment Setup | Yes | Unless stated otherwise, we consider a target reliability α = 0.57 and a target FDR requirement δ = 0.1. We evaluate aLTT with an ϵ-greedy acquisition policy Q_t that, at every calibration round t, with probability 1 − ϵ, selects the hyperparameter λ_i not included in Λ̂_aLTT,t that is associated with the largest e-process value; otherwise, it picks uniformly at random a hyperparameter not in Λ̂_aLTT,t. For reference, we also consider aLTT with a non-adaptive acquisition policy that, at each round t, picks uniformly at random the hyperparameter to be tested regardless of the prediction outcome Λ̂_aLTT,t and the e-process values. Finally, the value of the parameter µ_i in aLTT is set by following the approximate growth rate adaptive to the particular alternative (aGRAPA) betting strategy in (Waudby-Smith & Ramdas, 2024), with other adaptive and non-adaptive betting strategies evaluated in the Supplementary Material. |
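The quoted setup describes the ϵ-greedy acquisition rule procedurally: with probability 1 − ϵ pick the not-yet-selected hyperparameter with the largest e-process value, otherwise pick one uniformly at random. A minimal sketch of that rule is below; all names (`egreedy_acquire`, `e_values`, `selected`) are illustrative and not taken from the paper's code, and the handling of ties and the empty case are our own assumptions.

```python
import random

def egreedy_acquire(e_values, selected, epsilon, rng=random):
    """Return the index of the next hyperparameter to test.

    Sketch of the epsilon-greedy acquisition policy described above:
    - e_values: dict mapping hyperparameter index -> current e-process value
    - selected: set of indices already in the predicted set (Lambda_hat)
    - epsilon: exploration probability in [0, 1]
    Assumed behavior: return None once every hyperparameter is selected.
    """
    candidates = [i for i in e_values if i not in selected]
    if not candidates:
        return None  # nothing left to test
    if rng.random() < epsilon:
        # explore: uniform choice among remaining candidates
        return rng.choice(candidates)
    # exploit: candidate with the largest e-process value
    return max(candidates, key=lambda i: e_values[i])
```

With ϵ = 0 the rule is purely greedy, which makes its behavior easy to check; the non-adaptive baseline in the quote corresponds to ϵ = 1 (always uniform), ignoring the e-process values entirely.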