Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Distributionally Robust Bayesian Optimization with $\varphi$-divergences
Authors: Hisham Husain, Vu Nguyen, Anton van den Hengel
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We then show experimentally that our method surpasses existing methods, attesting to the theoretical results. 5 Experiments Experimental setting. The experiments are repeated using 30 independent runs. |
| Researcher Affiliation | Industry | Hisham Husain Amazon EMAIL Vu Nguyen Amazon EMAIL Anton van den Hengel Amazon EMAIL |
| Pseudocode | Yes | Algorithm 1 DRBO with $\varphi$-divergence |
| Open Source Code | No | We will release the Python implementation code in the final version. |
| Open Datasets | Yes | We consider the popular benchmark functions with different dimensions d. ... We perform an experiment on Wind Power dataset [8] and vary the context dimensions |C| ∈ {30, 100, 500} in Fig. 4. |
| Dataset Splits | No | No specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning was found. |
| Hardware Specification | No | No specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) were found for running the experiments. |
| Software Dependencies | No | The paper mentions 'Python implementation code' but does not provide specific software dependencies or version numbers (e.g., library names with version numbers). |
| Experiment Setup | Yes | Experimental setting. The experiments are repeated using 30 independent runs. We set |C| = 30 which should be sufficient to draw c i.i.d. from q in one-dimensional space to compute Eqs. (4,5). We optimize the GP hyperparameter (e.g., learning rate) by maximizing the GP log marginal likelihood [43]. |
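The setup row above mentions fitting GP hyperparameters by maximizing the log marginal likelihood. Since the paper's own code is not released (see the Open Source Code row), the following is a minimal illustrative sketch of that fitting step using scikit-learn; the toy objective and kernel choice are assumptions, not the paper's actual benchmark functions or model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy 1-D objective standing in for the paper's benchmark functions,
# which are not reproduced here.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(30, 1))
y = np.sin(6.0 * X).ravel() + 0.1 * rng.standard_normal(30)

# scikit-learn tunes kernel hyperparameters by maximizing the GP log
# marginal likelihood, the same criterion the setup row describes.
kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, n_restarts_optimizer=5)
gp.fit(X, y)

print(gp.kernel_)                         # optimized hyperparameters
print(gp.log_marginal_likelihood_value_)  # maximized log marginal likelihood
```

Running this prints the optimized kernel and its log marginal likelihood; repeating such a fit over 30 independent runs with fresh seeds would mirror the repetition protocol described above.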