Distributionally Robust Active Learning for Gaussian Process Regression

Authors: Shion Takeno, Yoshito Okura, Yu Inatsu, Tatsuya Aoyama, Tomonari Tanaka, Satoshi Akahane, Hiroyuki Hanada, Noriaki Hashimoto, Taro Murayama, Hanju Lee, Shinya Kojima, Ichiro Takeuchi

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we demonstrate the effectiveness of the proposed methods through synthetic and real-world datasets. 6. Experiments: In this section, we demonstrate the effectiveness of the proposed methods via synthetic and real-world datasets. We employ RS, US, variance reduction (Yu et al., 2006), and expected predictive information gain (EPIG) (Bickford Smith et al., 2023) as the baseline. [...] Figure 1 shows the result. [...] Figure 2 shows the result of the expected squared error E_T in the real-world data experiments with η = 0, 0.001, 0.01, 0.1.
Researcher Affiliation | Collaboration | (1) Department of Mechanical Engineering, Nagoya University, Aichi, Japan; (2) Department of Computer Science, Nagoya Institute of Technology, Aichi, Japan; (3) RIKEN AIP, Tokyo, Japan; (4) DENSO CORPORATION, Aichi, Japan.
Pseudocode | Yes | Algorithm 1: Proposed DRAL methods
Require: Domain X, GP prior µ and k, ambiguity set P
1: D_0 ← ∅
2: for t = 1, ..., T do
3:   Update σ²_{t−1}(·) according to Eq. (1)
4:   Compute x_t according to Eq. (3) or Eq. (4)
5: end for
6: Observe y_1, ..., y_T
7: Update µ_T(·) and σ²_T(·) according to Eq. (1)
8: return µ_T(·) and σ²_T(·)
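The loop above can be sketched in Python. Note the assumptions: Eq. (1) is taken to be the standard GP posterior-variance update, and since the paper's acquisition rules (Eq. (3)/(4)) are not quoted in this report, plain posterior-variance maximization stands in as a placeholder; the SE-kernel hyperparameters (ℓ = 1, unit signal variance) are likewise assumed, not taken from the paper.

```python
import numpy as np

def se_kernel(A, B, ell=1.0, sigma_f=1.0):
    # Squared-exponential kernel matrix between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

def posterior_variance(X_train, X_cand, noise=1e-4):
    # Standard GP posterior variance at candidate points; queries only
    # depend on inputs, matching Algorithm 1 (labels observed at the end).
    K = se_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = se_kernel(X_cand, X_train)
    prior = np.diag(se_kernel(X_cand, X_cand)).copy()
    sol = np.linalg.solve(K, Ks.T)              # K^{-1} Ks^T
    return prior - np.einsum('ij,ji->i', Ks, sol)

# Candidate grid {-1, -0.8, ..., 1}^3 as in the paper's synthetic setup.
rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 11)] * 3), -1).reshape(-1, 3)
chosen = [int(rng.integers(len(grid)))]         # first input uniformly at random
for t in range(5):                              # T = 5 for illustration (paper: 400)
    var = posterior_variance(grid[chosen], grid)
    chosen.append(int(np.argmax(var)))          # placeholder for Eq. (3)/(4)
```

The selected indices `chosen` would then be labeled (step 6) and the final posterior computed from all T observations (steps 7 and 8).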
Open Source Code | No | The paper does not contain any explicit statement about open-sourcing their code or provide a link to a code repository for the methodology described.
Open Datasets | Yes | 6.2. Real-World Dataset Experiments: We use the King County house sales [2], the red wine quality (Cortez & Reis, 2009), and the auto MPG datasets (Quinlan, 1993) (see Appendix C.3 for details). For all experiments, we used SE kernels, where the hyperparameters ℓ and σ² are adaptively determined by marginal likelihood maximization (Rasmussen & Williams, 2005) per 10 iterations. The first input is selected uniformly at random. Furthermore, we normalize the inputs and outputs of all datasets before the experiments and set p_ref = N(0, 0.3 I_d). [2] https://www.kaggle.com/datasets/harlfoxem/housesalesprediction
Dataset Splits | No | The paper mentions using a "random sample of 1000 data points" for King County and states that "The first input x_1 is selected uniformly at random" for both synthetic and real-world experiments. However, it does not specify explicit training, validation, and test splits (e.g., an 80/10/10 split or specific counts) for evaluating the models.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions "CVXPY (Diamond & Boyd, 2016; Agrawal et al., 2018)" as a tool used, but it does not specify a version number for CVXPY or any other software dependency.
Experiment Setup | Yes | 6.1. Synthetic Data Experiments: We set X = {−1, −0.8, ..., 1}³, where |X| = 11³ = 1331. The target function f is a sample path from GPs, where we use SE and Matérn-ν kernels with ν = 5/2. We use the fixed hyperparameters of the kernel function in the GPR model, which is used to generate f, and fix σ² = 10⁻⁴. The first input x_1 is selected uniformly at random, and T is set to 400. Furthermore, we set p_ref = N(0, 0.2 I_3). 6.2. Real-World Dataset Experiments: For all experiments, we used SE kernels, where the hyperparameters ℓ and σ² are adaptively determined by marginal likelihood maximization (Rasmussen & Williams, 2005) per 10 iterations. The first input is selected uniformly at random. Furthermore, we normalize the inputs and outputs of all datasets before the experiments and set p_ref = N(0, 0.3 I_d).
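The synthetic setup in Sec. 6.1 can be reproduced in a few lines: build the grid {−1, −0.8, ..., 1}³ (confirming |X| = 11³ = 1331) and draw f as one sample path of a zero-mean GP with an SE kernel, jittered by the paper's noise σ² = 10⁻⁴. The lengthscale ℓ = 1 is an assumption here, since the paper's fixed hyperparameter values are not quoted in this report.

```python
import numpy as np

# Grid X = {-1, -0.8, ..., 1}^3; |X| = 11^3 = 1331.
axis = np.linspace(-1.0, 1.0, 11)
X = np.stack(np.meshgrid(axis, axis, axis), -1).reshape(-1, 3)

# SE kernel Gram matrix on X (ell = 1 assumed, unit signal variance).
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * d2)

# One GP sample path as the target f, with sigma^2 = 1e-4 jitter so the
# covariance is safely positive definite for the Cholesky factorization.
rng = np.random.default_rng(0)
f = rng.multivariate_normal(np.zeros(len(X)), K + 1e-4 * np.eye(len(X)),
                            method='cholesky')
```

For the Matérn-5/2 variant mentioned in the paper, only the line computing `K` would change.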