A Theoretical Framework for Zeroth-Order Budget Convex Optimization

Authors: François Bachoc, Tommaso Cesari, Roberto Colomboni, Andrea Paudice

TMLR 2024

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | "Finally, we present illustrative experiments supporting our theory in the univariate stochastic and uni/multivariate budget settings."
Researcher Affiliation | Academia | François Bachoc (EMAIL), Institut de Mathématiques de Toulouse, Université Paul Sabatier; Institut universitaire de France (IUF). Tommaso Cesari (EMAIL), School of Electrical Engineering and Computer Science, University of Ottawa. Roberto Colomboni (EMAIL), Università degli Studi di Milano; Politecnico di Milano. Andrea Paudice (EMAIL), Department of Computer Science, Aarhus University (AU).
Pseudocode | Yes | Optimization Protocol 1. Input: a non-empty bounded convex set I ⊆ R^d (the domain of the unknown objective f). 1: for t = 1, 2, ... do 2: the optimizer selects a query point X_t ∈ I where to invest the next budget; 3: the environment picks and reveals a budget b_t > 0 and an interval J_t ⊆ R such that f(X_t) ∈ J_t; 4: the optimizer recommends a point R_t ∈ I.
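The protocol above can be sketched as a simulation loop. The uniform-grid query rule, the interval widths shrinking as 1/b_t, and the best-midpoint recommendation below are illustrative assumptions for a minimal sketch, not the paper's algorithm.

```python
import random

def protocol(f, T, rng):
    """Simulate T rounds of the budget protocol on the domain I = [0, 1].

    Illustrative choices (not from the paper): queries on a uniform grid,
    interval half-widths of 1 / (2 * b_t), and recommending the query whose
    interval midpoint is smallest so far.
    """
    history = []  # observed (X_t, J_t) pairs
    r_t = None
    for t in range(1, T + 1):
        x_t = (t - 0.5) / T                 # query point X_t in I
        b_t = rng.uniform(1.0, 10.0)        # budget b_t > 0 picked by the environment
        half_width = 1.0 / (2.0 * b_t)      # larger budget -> tighter interval
        center = f(x_t) + rng.uniform(-half_width, half_width)
        j_t = (center - half_width, center + half_width)  # guarantees f(X_t) in J_t
        history.append((x_t, j_t))
        # recommendation R_t: the past query whose interval midpoint is smallest
        r_t = min(history, key=lambda h: (h[1][0] + h[1][1]) / 2.0)[0]
    return r_t

recommendation = protocol(lambda x: 0.5 * x * x, T=1000, rng=random.Random(0))
```

The recommendation always lies in the domain; how close it gets to the minimizer depends on the budgets the environment grants.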
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. It contains no repository links, explicit statements of code release, or mention of code in supplementary materials.
Open Datasets | No | The experiments are conducted on synthetic functions, such as f(x) = (1/2)x^2 and f(x) = (1/2)‖x‖_2^2. The paper does not mention or provide access information for any publicly available datasets.
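For concreteness, the two synthetic objectives named above are simple enough to write directly; both are minimized at the origin with value 0.

```python
def f_uni(x: float) -> float:
    """Univariate objective f(x) = (1/2) x^2."""
    return 0.5 * x * x

def f_multi(x) -> float:
    """Multivariate objective f(x) = (1/2) ||x||_2^2."""
    return 0.5 * sum(xi * xi for xi in x)
```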
Dataset Splits | No | The experiments use synthetic functions, so traditional train/validation/test splits are not applicable, and none are mentioned.
Hardware Specification | No | The paper evaluates the algorithm's runtime against the problem dimension and states that "the execution for d = 100 takes less than a second", but it does not report any hardware details such as CPU or GPU models or memory configurations.
Software Dependencies | No | The paper describes algorithms, theoretical guarantees, and some experiments, but it does not specify any software dependencies with version numbers (e.g., Python, specific libraries, or solvers).
Experiment Setup | Yes | "In addition to using the threshold η_T recommended by the tuning in Appendix D and the corresponding theoretical upper bound, we also run Algorithm 5 using η ∈ {1, 10^-1, 10^-2} for T ∈ {10^2, 10^3, ..., 10^6}. We repeat each run 10 times and report the average optimization error and the standard deviation (note the high concentration: paths have little to no variance)."
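The sweep described in that setup can be organized as below. Here `run_once` is a hypothetical stand-in for the paper's Algorithm 5 (a toy two-point zeroth-order descent on f(x) = (1/2)x^2), used only to show the grid over η and T with 10 repetitions, averaging, and standard deviations; the T grid is shortened to keep the sketch fast.

```python
import random
import statistics

def run_once(eta, T, seed):
    """Hypothetical stand-in for Algorithm 5: two-point zeroth-order descent
    on f(x) = 0.5 * x**2 over [-1, 1]; returns the final optimization error."""
    rng = random.Random(seed)
    f = lambda x: 0.5 * x * x
    x, delta = 1.0, 1e-2
    for _ in range(T):
        # noisy two-point gradient estimate
        grad_est = (f(x + delta) - f(x - delta)) / (2 * delta) + rng.gauss(0.0, 1e-3)
        x = max(-1.0, min(1.0, x - eta * grad_est))  # projected descent step
    return f(x)  # error relative to the minimum value f(0) = 0

results = {}
for eta in (1.0, 1e-1, 1e-2):      # eta in {1, 10^-1, 10^-2}
    for T in (10**2, 10**3):       # shortened grid; the paper goes up to 10^6
        errors = [run_once(eta, T, seed) for seed in range(10)]  # 10 repetitions
        results[(eta, T)] = (statistics.mean(errors), statistics.stdev(errors))
```

Reporting both the mean and the standard deviation over the 10 seeded runs mirrors the paper's averaging procedure and makes the claimed concentration of the paths checkable.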