OpenBox: A Python Toolkit for Generalized Black-box Optimization

Authors: Huaijun Jiang, Yu Shen, Yang Li, Beicheng Xu, Sixian Du, Wentao Zhang, Ce Zhang, Bin Cui

JMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate the effectiveness and efficiency of OpenBox over existing systems. Experiments are conducted on the constrained multi-objective benchmark CONSTR and on a LightGBM (Ke et al., 2017) tuning task over 24 OpenML datasets (Feurer et al., 2021). In Figure 3(a), OpenBox outperforms the other baselines in convergence speed and stability. In Figure 3(b), OpenBox outperforms the other competitive systems, achieving a median rank of 1.25 and ranking first on 12 of 24 datasets.
Researcher Affiliation | Collaboration | 1 Key Lab of High Confidence Software Technologies (MOE), School of CS, Peking University, China; 2 Department of Data Platform, TEG, Tencent Inc., China; 3 Department of Computer Science, ETH Zürich, Switzerland
Pseudocode | No | The paper provides an example code snippet illustrating usage of the OpenBox API (Figure 2, right), but it presents no structured pseudocode or algorithm block for the methodology.
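For readers unfamiliar with the interface the paper's Figure 2 exemplifies, the generic black-box optimization loop such an API wraps can be sketched as follows. This is a minimal random-search baseline written for illustration only; the function names and the `config` dictionary convention here are assumptions, not OpenBox's actual API or implementation.

```python
import random

def objective(config):
    # Toy black-box function: the optimizer sees only config -> value,
    # never the formula itself (that is what makes it "black-box").
    x = config["x"]
    return (x - 2.0) ** 2

def random_search(objective, bounds, n_trials=100, seed=0):
    """Minimal black-box loop: sample a configuration uniformly from the
    search space, evaluate the objective, and keep the best result."""
    rng = random.Random(seed)
    best_config, best_value = None, float("inf")
    for _ in range(n_trials):
        config = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        value = objective(config)
        if value < best_value:
            best_config, best_value = config, value
    return best_config, best_value

best_config, best_value = random_search(objective, {"x": (-5.0, 10.0)})
```

Bayesian optimization toolkits such as OpenBox replace the uniform sampler with a surrogate-model-guided proposal, but the outer evaluate-and-update loop has the same shape.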
Open Source Code | Yes | The source code of OpenBox is available at https://github.com/PKU-DAIR/open-box. Keywords: Python, Black-box Optimization, Bayesian Optimization, Hyper-parameter Optimization.
Open Datasets | Yes | To demonstrate the generality and efficiency of OpenBox, experiments are conducted on the constrained multi-objective benchmark CONSTR and on a LightGBM (Ke et al., 2017) tuning task over 24 OpenML datasets (Feurer et al., 2021).
Dataset Splits | No | The paper mentions using 24 OpenML datasets but gives no details on how they were split into training, validation, or test sets, nor does it refer to standard splits for these datasets.
Hardware Specification | No | The paper acknowledges support from the High-performance Computing Platform of Peking University but gives no specifics about the hardware used for experiments, such as CPU/GPU models or memory amounts.
Software Dependencies | No | The paper states that OpenBox can be installed via PyPI with pip install openbox and mentions version 0.8.3, but it gives no version numbers for other key software, such as the Python interpreter or dependent libraries like PyTorch or scikit-learn, which would be needed to reproduce the experimental environment.
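The only version the paper pins is the toolkit itself, so a reproduction environment can at most fix that one dependency. A requirements fragment reflecting this (any Python or library versions beyond the openbox line would be assumptions the paper does not support):

```
# requirements.txt fragment -- only openbox's version is stated in the paper
openbox==0.8.3
```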
Experiment Setup | No | The paper describes the benchmarks and tasks used (CONSTR, LightGBM tuning on OpenML datasets) and reports performance metrics, but it provides no specific hyperparameters, training configurations, or system-level settings (e.g., learning rates, batch sizes, optimizer details) for the experiments.