Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization

Authors: Ben Letham, Roberto Calandra, Akshara Rai, Eytan Bakshy

NeurIPS 2020 | Venue PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show empirically that properly addressing these issues significantly improves the efficacy of linear embeddings for BO on a range of problems, including learning a gait policy for robot locomotion."
Researcher Affiliation | Industry | Benjamin Letham (Facebook, Menlo Park, CA, EMAIL); Roberto Calandra (Facebook AI Research, Menlo Park, CA, EMAIL); Akshara Rai (Facebook AI Research, Menlo Park, CA, EMAIL); Eytan Bakshy (Facebook, Menlo Park, CA, EMAIL)
Pseudocode | Yes | Algorithm 1: ALEBO for linear embedding BO.
Open Source Code | Yes | "Code to reproduce the results of this paper is available at github.com/facebookresearch/alebo."
Open Datasets | Yes | "Constrained Neural Architecture Search: We evaluated ALEBO performance on constrained neural architecture search (NAS) for convolutional neural networks using models from NAS-Bench-101 [53]. The NAS problem was to design a cell topology defined by a DAG with 7 nodes and up to 9 edges, which includes designs like ResNet [20] and Inception [47]. We created a D = 36 parameterization, producing a HDBO problem. The objective was to maximize CIFAR-10 test-set accuracy, subject to a constraint that training time was less than 30 mins; see Sec. S9 for full details."
Dataset Splits | No | No specific dataset split percentages or absolute sample counts for train/validation/test sets are provided for the main experiments, nor clear citations to predefined splits. The text mentions "100 training and 50 test points were randomly sampled" for a specific figure (Fig. 3), but not for the overall experimental setup.
Hardware Specification | No | No specific hardware details (GPU/CPU models, memory, etc.) are provided for running the experiments.
Software Dependencies | No | The paper mentions PyBullet [8] and SciPy's SLSQP but does not provide version numbers for these or other software dependencies.
Experiment Setup | No | While Algorithm 1 mentions `ninit` and `nBO`, their specific values for the experiments are not provided in the main text. Hyperparameters like learning rates, batch sizes, and optimizer settings are not detailed.
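The Pseudocode row above points to Algorithm 1, ALEBO's linear-embedding BO loop. As a rough illustration of the general linear-embedding technique the paper re-examines (not the paper's actual implementation), the sketch below draws a random Gaussian projection, maps low-dimensional candidates into the [-1, 1]^D box with clipping (one of the distortions the paper discusses), and runs a simple search over the embedding. Random search stands in for the GP model and acquisition step, and `sphere`, `embedded_random_search`, and all parameter values are hypothetical stand-ins.

```python
import math
import random

def random_embedding(D, d, seed=0):
    """Draw a D x d projection matrix with i.i.d. standard normal
    entries, as in REMBO-style linear embeddings."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(D)]

def embed(B, y):
    """Map a low-dimensional point y to the box [-1, 1]^D via
    x = clip(B y). Clipping keeps x feasible but distorts the
    embedded objective, which is one issue the paper addresses."""
    D, d = len(B), len(B[0])
    x = [sum(B[i][j] * y[j] for j in range(d)) for i in range(D)]
    return [max(-1.0, min(1.0, xi)) for xi in x]

def sphere(x, effective_dims=2):
    """Toy objective that depends on only a few of the D coordinates,
    the setting where linear embeddings are expected to help."""
    return sum(x[i] ** 2 for i in range(effective_dims))

def embedded_random_search(f, D=36, d=4, n_init=5, n_bo=20, seed=0):
    """Skeleton of a linear-embedding BO outer loop: all search happens
    in the d-dim embedding; random search replaces the GP acquisition."""
    rng = random.Random(seed)
    B = random_embedding(D, d, seed)
    best_y, best_val = None, math.inf
    for _ in range(n_init + n_bo):
        y = [rng.uniform(-math.sqrt(d), math.sqrt(d)) for _ in range(d)]
        val = f(embed(B, y))
        if val < best_val:
            best_y, best_val = y, val
    return best_y, best_val
```

The point of the sketch is the shape of the loop: the expensive function is only ever queried at `embed(B, y)`, so the model and acquisition optimization run in d dimensions rather than D.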
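The Open Datasets row quotes the D = 36 NAS-Bench-101 parameterization (a 7-node DAG with up to 9 edges). The paper's Sec. S9 gives the actual encoding; the sketch below only illustrates how such a continuous parameterization can decode into a cell, under the assumed split of 21 dimensions for the upper-triangular adjacency scores plus 5 x 3 scores for the intermediate-node operations. The `decode` function, the 0.5 edge threshold, and the edge-ranking rule are all hypothetical choices, not the paper's.

```python
NUM_NODES = 7   # input node, 5 intermediate op nodes, output node
MAX_EDGES = 9   # NAS-Bench-101 cells allow at most 9 edges
OPS = ["conv3x3-bn-relu", "conv1x1-bn-relu", "maxpool3x3"]

def decode(z):
    """Decode a point z in [0, 1]^36 into a cell: 21 scores for the
    upper triangle of the 7 x 7 adjacency matrix (assumed layout),
    then 5 groups of 3 scores for the intermediate-node ops."""
    assert len(z) == 36
    adj = [[0] * NUM_NODES for _ in range(NUM_NODES)]
    # Rank all 21 candidate edges by score; keep at most MAX_EDGES
    # of them, and only those whose score clears the 0.5 threshold.
    pairs = [(i, j) for i in range(NUM_NODES)
             for j in range(i + 1, NUM_NODES)]
    scored = sorted(zip(z[:21], pairs), reverse=True)
    for score, (i, j) in scored[:MAX_EDGES]:
        if score > 0.5:
            adj[i][j] = 1
    # Argmax over each intermediate node's 3 operation scores.
    ops = []
    for k in range(5):
        scores = z[21 + 3 * k : 21 + 3 * (k + 1)]
        ops.append(OPS[scores.index(max(scores))])
    return adj, ops
```

A decoding like this turns the constrained NAS benchmark into a box-constrained continuous problem, which is what makes it usable as an HDBO test case.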