Kernel Learning for Sample Constrained Black-Box Optimization

Authors: Rajalaxmi Rajagopalan, Yu-Lin Wei, Romit Roy Choudhury

AAAI 2025

Reproducibility variables, each listed as "Variable: Result", followed by the LLM response:
Research Type: Experimental. Results show that the proposed method, Kernel Optimized Blackbox Optimization (KOBO), outperforms the state of the art by estimating the optimum at considerably lower sample budgets. Results hold not only across synthetic benchmark functions but also in real applications: a hearing aid may be personalized with fewer audio queries to the user, and a generative model can converge to desirable images from limited user ratings. Experiments are reported on synthetic benchmark functions and on real-world audio experiments with U = 6 users.
Researcher Affiliation: Academia. Rajalaxmi Rajagopalan, Yu-Lin Wei, and Romit Roy Choudhury, Department of Electrical & Computer Engineering, University of Illinois Urbana-Champaign.
Pseudocode: No. The paper describes the methodology in prose and uses diagrams (e.g., Figure 3) to illustrate the system flow, but it does not contain explicit pseudocode or algorithm blocks.
Open Source Code: No. Audio demos at various stages of the optimization are made public at https://keroptbo.github.io/, but there is no explicit statement about the release of source code for the described methodology.
Open Datasets: Yes. We report results from 3 types of synthetic functions f(x) that are popular benchmarks (Kim 2020) for black-box optimization: staircase functions in N = 2000 dimensions, which exhibit non-smooth structures (Al-Roomi 2015); smooth benchmark functions such as BRANIN, commonly used in Bayesian optimization research (Surjanovic and Bingham 2013); and periodic functions such as MICHALEWICZ that exhibit repetitions in their shape (Surjanovic and Bingham 2013). Learning real-world CO2 emission data: Figure 6's blue curve plots real CO2 emissions data over time (Thoning, Tans, and Komhyr 1989). We apply KOBO to audio personalization for real volunteers... corrupted audio is played to the user with the aim of helping the user pick a filter h that cancels the effect of the corruption (equalization)... with a hearing loss audiogram (CDC 2011).
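For context, the BRANIN and MICHALEWICZ benchmarks named above have standard closed forms in the Surjanovic and Bingham test-function library. The sketch below is not taken from the paper; constants follow the commonly used defaults (e.g., steepness m = 10 for Michalewicz):

```python
import math

def branin(x1, x2):
    # Branin function with the standard constants; its global minimum
    # value is about 0.397887, attained e.g. at (-pi, 12.275).
    a = 1.0
    b = 5.1 / (4 * math.pi ** 2)
    c = 5 / math.pi
    r, s, t = 6.0, 10.0, 1 / (8 * math.pi)
    return (a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2
            + s * (1 - t) * math.cos(x1) + s)

def michalewicz(x, m=10):
    # Michalewicz function: periodic with steep, narrow valleys;
    # larger m makes the valleys steeper (m = 10 is customary).
    return -sum(math.sin(xi) * math.sin((i + 1) * xi ** 2 / math.pi) ** (2 * m)
                for i, xi in enumerate(x))
```

Both are typically minimized; their narrow valleys (Michalewicz) and multiple global minima (Branin) are what make them useful stress tests for sample-constrained optimizers.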
Dataset Splits: Yes. Figure 6(a,b,c) show results when KOBO has observed the first 20%, 40%, and 60% of the data, respectively. Table 3 displays x_start and the images recommended after Q = 5, 15, 25 queries.
Hardware Specification: No. The paper does not provide specific hardware details, such as the GPU or CPU models used to run its experiments.
Software Dependencies: No. The paper mentions general techniques like Gaussian Process Regression (GPR) and Variational Autoencoders (VAE), but it does not provide version numbers for any software dependencies used in its experiments or implementation.
Experiment Setup: No. The paper states that "All KOBO experiments are initialized with the SE kernel," but it does not provide specific details such as VAE training hyperparameters, learning rates, batch sizes, or other training configurations in the main text.
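For reference, the SE (squared-exponential) kernel mentioned above is the standard RBF covariance used in GP regression. The following is a generic sketch of GPR with an SE kernel, not the paper's implementation; the lengthscale, variance, and noise values here are illustrative placeholders, since the paper specifies none of them:

```python
import numpy as np

def se_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: k(x, x') = s^2 * exp(-||x - x'||^2 / (2 l^2)).
    # X1: (n, d) array, X2: (m, d) array; returns the (n, m) Gram matrix.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior_mean(X, y, Xstar, noise=1e-6, **kernel_kw):
    # GPR posterior mean at test points Xstar, given observations (X, y).
    # A small noise term on the diagonal keeps the solve well conditioned.
    K = se_kernel(X, X, **kernel_kw) + noise * np.eye(len(X))
    Ks = se_kernel(Xstar, X, **kernel_kw)
    return Ks @ np.linalg.solve(K, y)
```

With near-zero noise the posterior mean passes close to the observed values, which is why the SE kernel is a common default surrogate when no structural prior on f(x) is available.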