Global Safe Sequential Learning via Efficient Knowledge Transfer

Authors: Cen-You Li, Olaf Dünnbier, Marc Toussaint, Barbara Rakitsch, Christoph Zimmer

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments demonstrate that this approach, compared to state-of-the-art methods, learns tasks with lower data consumption and enhances global exploration across multiple disjoint safe regions, while maintaining comparable computational efficiency. (...) In this section, we empirically evaluate our approach against state-of-the-art competitors on a range of synthetic and real-world datasets.
Researcher Affiliation | Collaboration | Cen-You Li EMAIL Technical University of Berlin, Germany; Bosch Center for Artificial Intelligence, Germany
Pseudocode | Yes | Algorithm 1 Safe AL (...) Algorithm 2 Full safe transfer AL (...) Algorithm 3 Modularized safe transfer AL
Open Source Code | Yes | Our code is available at https://github.com/cenyou/TransferSafeSequentialLearning.
Open Datasets | Yes | Our GitHub repository provides a link to the dataset. (for PEngine and GEngine datasets, Section 6.1.2) (...) For the Branin dataset, we follow the settings from Rothfuss et al. (2022); Tighineanu et al. (2022) to produce five datasets...
Dataset Splits | Yes | PEngine: During training, we split each of the datasets (both safe and unsafe) into 60% training data and 40% test data. (...) GEngine: The original dataset is split into training and test sets, and we conduct AL experiments on the training sets, while RMSE and safe-set performance are evaluated on the target test set.
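The 60%/40% split described for the PEngine datasets can be sketched as follows. This is an illustrative helper, not the authors' code (their actual splitting logic lives in the linked repository); the function name `split_60_40` and the fixed seed are assumptions for the example.

```python
import random

def split_60_40(dataset, seed=0):
    """Shuffle a dataset and split it into 60% training / 40% test,
    mirroring the split described for the PEngine datasets.
    Illustrative sketch only."""
    rng = random.Random(seed)
    indices = list(range(len(dataset)))
    rng.shuffle(indices)
    cut = int(0.6 * len(dataset))
    train = [dataset[i] for i in indices[:cut]]
    test = [dataset[i] for i in indices[cut:]]
    return train, test

# Example: 100 samples -> 60 train, 40 test
train, test = split_60_40(list(range(100)))
```

A fixed seed makes the split reproducible across runs, which matters when the same train/test partition must be reused to compare AL methods.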
Hardware Specification | No | This work was supported by the Bosch Center for Artificial Intelligence, which provided financial support, computers, and GPU clusters. The Bosch Group is carbon neutral. Administration, manufacturing, and research activities no longer leave a carbon footprint. This also includes the GPU clusters on which the experiments were performed.
Software Dependencies | No | The paper mentions Gaussian processes (GPs) and references PyTorch in the context of other works. However, it does not explicitly list specific software dependencies with version numbers for its own implementation in the main text.
Experiment Setup | Yes | For the safety tolerance, we always fix β = 4, which corresponds to α = 1 − Φ(β^{1/2}) = 0.02275 (Equations (3) and (8)), implying that each fitted GP safety model allows a 2.275% unsafe tolerance. (...) All methods use Matérn-5/2 kernels as the base kernels. (...) In this problem, the effect of a single query on the GP hyperparameters is not obvious. Therefore, to speed up the experiments, we train the hyperparameters only every 50 queries (and at the beginning).
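The stated correspondence between β and the unsafe tolerance α can be verified numerically. A minimal sketch, assuming the relation α = 1 − Φ(√β) with Φ the standard normal CDF (expressed here via the error function, since Φ(x) = (1 + erf(x/√2))/2):

```python
import math

def normal_cdf(x):
    """Standard normal CDF Φ(x), computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

beta = 4.0
# α = 1 − Φ(√β): with β = 4, this is 1 − Φ(2) ≈ 0.02275,
# i.e. each fitted GP safety model allows about 2.275% unsafe tolerance.
alpha = 1.0 - normal_cdf(math.sqrt(beta))
```

This confirms that β = 4 (i.e. a 2-standard-deviation safety bound) yields the reported α = 0.02275.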